ICLR
Title
Inhibition-augmented ConvNets
Abstract
Convolutional Networks (ConvNets) suffer from insufficient robustness to common corruptions and perturbations of the input, unseen during training. We address this problem by including a form of response inhibition in the processing of the early layers of existing ConvNets and assess the resulting representation power and generalization on corrupted inputs. The considered inhibition mechanism consists of a non-linear computation that is inspired by the push-pull inhibition exhibited by some neurons in the visual system of the brain. In practice, each convolutional filter (push) in the early layers of conventional ConvNets is coupled with another filter (pull), which responds to the preferred pattern of the corresponding push filter but of opposite contrast. The rectified responses of the push and pull filter pairs are then combined by a linear function. This results in a representation that suppresses responses to noisy patterns (e.g. texture, Gaussian, shot, distortion, and others) and accentuates responses to preferred patterns. We deploy the layer in existing architectures, (Wide-)ResNet and DenseNet, and propose new residual and dense push-pull layers. We demonstrate that ConvNets that embed this inhibition into the initial layers are able to learn representations that are robust to several types of input corruptions. We validated the approach on the ImageNet and CIFAR data sets and their corrupted and perturbed versions, ImageNet-C/P and CIFAR-C/P. It turns out that the push-pull inhibition enhances the overall robustness and generalization of ConvNets to corrupted and perturbed input data. Besides the improvement in generalization, it is notable that ConvNets with push-pull inhibition have a sparser representation than conventional ones without inhibition. The code and trained models will be made available.
1 INTRODUCTION
After AlexNet was proposed, several other more sophisticated Convolutional Networks (ConvNets), e.g. VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), ResNet (He et al., 2015), DenseNet (Huang et al., 2017), consecutively achieved lower error rates on classification benchmarks such as CIFAR (Krizhevsky, 2009), SVHN (Netzer et al., 2011) and ImageNet (Krizhevsky et al., 2012). The witnessed reduction of the classification error, however, was not accompanied by a corresponding increase in generalization and robustness to common corruptions or perturbations of the test images. When test images contain artefacts that are not present in the training data (e.g. shot noise, JPEG compression, blur), the performance of SOTA networks, like ResNet or DenseNet, degrades relatively more than that of AlexNet (Hendrycks & Dietterich, 2019).
Robustness of deep networks and ConvNets is gaining attention from researchers in machine learning and computer vision. One line of analysis concerns adversarial attacks, which consist of very small, visually imperceptible perturbations of the input signals that confuse the classification model (Akhtar & Mian, 2018). Several adversarial attacks were proposed based on knowledge of the classification models (Goodfellow et al., 2015) or on iterative methods (Kurakin et al., 2016; Madry et al., 2018). Class-specific and universal black-box attacks (Moosavi-Dezfooli et al., 2017) were also proposed. As new types of adversarial perturbations are designed, defense algorithms need to be developed (Lu et al., 2017; Metzen et al., 2017).
In contrast to adversarial perturbations, common corruptions and perturbations are modifications of the input signals due to typical artefacts generated by sensor noise, illumination changes, transformations determined by changes of camera perspective (e.g. rotations or elastic transformations), quality loss due to compression, among others. These can be considered as average cases of adversarial attacks and are very common in computer vision tasks. In this study, we focus on the robustness of ConvNets to common corruptions and perturbations and demonstrate how the use of inhibition-augmented filters increases the robustness of existing ConvNet models. We use a layer that implements a local-response suppression mechanism (Strisciuglio et al., 2020), named push-pull, to replace some convolutional layers in existing ConvNets. The design of this layer was inspired by neurophysiological evidence of the push-pull inhibition exhibited by some neurons in the early part of the visual system of the brain (Hirsch et al., 1998). Lauritzen & Miller (2003) stated that ‘push-pull inhibition [...] acts to sharpen spatial frequency tuning, [...] and increase the stability of cortical activity’. These neurons have inhibitory interneurons with receptive fields of opposite polarity. The interneurons suppress the responses of the associated neurons in noisy local patterns; that is, they pull the push responses. From an engineering point of view, the push-pull inhibition can be considered as a band-pass filter. A computational model of the push-pull inhibition was introduced in image processing operators for contour and line detection (Azzopardi et al., 2014; Strisciuglio et al., 2019) that are robust to various types of noise. Early results on augmenting ConvNets with push-pull inhibition were reported in (Strisciuglio et al., 2020), where only the first convolutional layer of existing networks was replaced and preliminary experiments on the MNIST and CIFAR images were reported.
We deploy a push-pull layer in the state-of-the-art (wide-)ResNet and DenseNet architectures by replacing the convolutional layer in the entry layer of the networks and all convolutional layers inside the first residual or dense block. The number of layers and parameters of the modified networks remains the same as that of their original counterparts. We train several models with and without the push-pull layers, using the images from ImageNet and CIFAR training sets, and test them on the ImageNet-C/P and CIFAR-10-C/P benchmark test sets (Hendrycks & Dietterich, 2019), which contain images with several types of corruption and perturbation. Furthermore, we combine the push-pull layer with other techniques to improve the robustness performance of ConvNets, namely the data augmentation techniques AutoAugment (Cubuk et al., 2018) and cutout (Devries & Taylor, 2017), and low-pass anti-aliasing filters (Zhang, 2019), of which we provide more details in Section 2. We show how combined data- and architecture-related actions jointly have a positive effect that further improves the robustness and generalization performance of existing models.
Our contributions are twofold: a) new residual and dense layers in which push-pull filters replace the original convolutions, b) a thorough analysis of the impact of the push-pull inhibition on the robustness of existing models, and of its combination with other strategies for model robustness.
The rest of the paper is organized as follows. We discuss related works in Section 2 and provide details of the push-pull layer and modified residual and dense layers in Section 3. We report the experiments and results in Section 4 and Section 5. Finally, we draw conclusions in Section 6.
2 RELATED WORKS
One common approach to increase the generalization of existing models is data augmentation. It provides a mechanism to increase the robustness to certain transformations included in the training process. One type of data augmentation, namely added Gaussian noise, does not guarantee robustness to other types of noise, e.g. shot noise (Zheng et al., 2016). In Vasiljevic et al. (2016), for instance, it was demonstrated that using one blurring type for data augmentation during training does not result in a model that is robust to other types of blurring. Moreover, in Geirhos et al. (2018), it was observed that heavy data augmentation can cause underfitting. Devries & Taylor (2017) proposed a technique named cutout that masks out random square patches of the training images and increases network performance. Although cutout reduces the classification error on benchmark test sets, we noticed that it does not improve robustness to common corruptions and perturbations. Lopes et al. (2019) proposed PatchGaussian, which combines the regularization abilities of cutout with data augmentation using Gaussian noise: randomly located patches are selected from the input images and corrupted with Gaussian noise. AutoAugment (Cubuk et al., 2018; Lim et al., 2019) is a very effective strategy to learn an optimal set of augmentation policies from training data. Together with an increased classification rate on benchmark test sets, it was shown to improve the robustness of the concerned models to various corruptions and perturbations (Yin et al., 2019). Recently, AugMix was proposed: it uses diverse augmentations and a Jensen-Shannon divergence consistency loss to train models that have better generalization and uncertainty estimates (Hendrycks et al., 2020).
Data augmentation is in some cases very powerful. However, these techniques work around architectural or structural weaknesses of ConvNets. For example, using high-frequency noise for data augmentation (e.g. Gaussian noise) makes the network focus on low-frequency features that are discriminant, instead of becoming explicitly robust to high-frequency corruptions (Yin et al., 2019). In order to improve the intrinsic robustness of ConvNets, it would be preferable to deploy architectural elements that are able to better detect the features of interest even when the input data is corrupted. For instance, common downsampling methods, like max-pooling and strided convolutions, introduce aliasing as they violate the Nyquist sampling theorem. Zhang (2019) used a low-pass filter before down-sampling and showed that it improves classification results on ImageNet and robustness to common corruptions of the input images. A layer of Gabor filters was recently shown to increase network robustness to adversarial attacks (Pérez et al., 2020).
3 PUSH-PULL INHIBITION IN CONVNETS
A ConvNet consists of L layers, each of which performs a non-linear transformation H(x) of the input x. In some architectures, H(x) is a composite function of convolutions, batch normalization (BN), rectified linear units (ReLU) and pooling. For example, a residual layer contains two convolutions, batch normalizations, ReLUs and one identity connection. We use a composite layer, called push-pull, that consists of two convolutions with kernels of different size and two ReLU functions.
3.1 PUSH-PULL LAYER
The response of the push-pull layer (Figure 1) is defined as the linear combination of the rectified responses of two convolutions of the input signal, one with an excitatory (push) kernel $k(\cdot)$ and one with an inhibitory (pull) kernel $-\hat{k}(\cdot)$ (Strisciuglio et al., 2020). The responses of the push and pull components are rectified by means of the ReLU function, which introduces non-linearity, and combined: the pull response is subtracted from (i.e. it suppresses) the push response. In the forward step of the learning algorithm, the weights of each pull kernel are computed by upsampling and negating the weights of the corresponding push kernel. Since the pull kernels are directly inferred from the push kernels, the number of trainable parameters does not change. In the backward step, however, the implementation of the push-pull layer ensures that the gradient is back-propagated to both push and pull components. While only the weights of the push kernels are updated, the effect of the gradient back-propagation incorporates the pull responses. We compute the response $R$ of a push-pull layer to an input $x$ as:
$$R(x) = \Theta(x \star k) - \alpha\,\Theta\!\left(x \star (-\hat{k}_{h\uparrow})\right) \qquad (1)$$
where $\Theta(\cdot)$ is the ReLU function, $k$ is the push kernel and $\hat{k}_{h\uparrow}$ is its upsampled (by a factor $h$) and negated version, i.e. the pull kernel. We set $h = 2$, that is, the pull kernel has twice the width and height of the push kernel. The parameter $\alpha$ is a weighting factor for the inhibition response, which we set to 1 (i.e. the same relevance as the push response). The decision to make the pull kernels larger than the push kernels is motivated by neurophysiological findings that the inhibitory (pull) components of receptive fields are spatially broader than the excitatory (push) ones (Li et al., 2012).
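To make the computation concrete, the following is a minimal PyTorch sketch of a layer implementing Eq. (1). It is an illustrative reconstruction, not the authors' released code: in particular, the pull kernel is upsampled here to size $2k-1$ (rather than exactly $2k$) so that, for odd kernel sizes, the push and pull feature maps align spatially; the exact upsampling scheme in the paper may differ.

```python
import torch.nn as nn
import torch.nn.functional as F

class PushPullConv2d(nn.Module):
    """Sketch of the push-pull layer of Eq. (1).

    Only the push kernel k is trainable; the pull kernel is derived on the
    fly by upsampling and negating k, so the parameter count is unchanged
    and the gradient flows through both branches. Odd kernel sizes assumed.
    """
    def __init__(self, in_channels, out_channels, kernel_size=3, alpha=1.0):
        super().__init__()
        self.push = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.alpha = alpha  # inhibition strength, set to 1 in the paper

    def forward(self, x):
        push = F.relu(self.push(x))                      # Theta(x * k)
        k = self.push.kernel_size[0]
        # Pull kernel: upsampled, negated copy of the push kernel.
        pull_weight = -F.interpolate(self.push.weight,
                                     size=(2 * k - 1, 2 * k - 1),
                                     mode='bilinear', align_corners=True)
        pull = F.relu(F.conv2d(x, pull_weight, padding=k - 1))
        return push - self.alpha * pull                  # Eq. (1)
```

Such a module can stand in for an `nn.Conv2d` with the same input/output shapes wherever inhibition is desired.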
The combined effect of the push and pull kernels results in increased robustness in the detection of features of interest in corrupted input signals. Figure 2 illustrates an example of the response maps of a conventional convolution filter and those of a push-pull filter on images corrupted with noise. The response maps of the push-pull filter on corrupted images correlate better with that of the clean image in comparison to the corresponding response maps of the convolution filter. For this illustration we selected a convolutional and a push kernel that have the same shape and whose parameters are learned in the first layer of a ResNet model, without and with push-pull layers respectively, on the CIFAR data set. This allows us to demonstrate more directly the effect that the pull inhibitory component has on the quality of the computed feature maps. The spatial frequency selectivity of the push-pull filter is sharper than that of a conventional convolutional layer, as it focuses on the frequency of interest and suppresses those outside a certain learned sub-band. In this way, noise-induced artefacts can be filtered out and clearer feature maps are computed for further processing in subsequent layers of the network.
3.2 PUSH-PULL IN RESIDUAL AND DENSE LAYERS
We extended the implementation of the residual and dense composite layers, originally proposed by He et al. (2015) and Huang et al. (2017), respectively. In both cases, we substituted the convolutional layers with push-pull layers, which resulted in the push-pull residual layer and the push-pull dense layer.
The push-pull inhibition is a phenomenon exhibited by certain neurons in the early visual system. Evidence of it has been observed in area V1 of the visual cortex (Lauritzen & Miller, 2003), which can be compared to the early layers of a ConvNet. Thus, motivated by neuroscientific findings, we deploy the modified push-pull residual and dense layers in the first block of the considered architectures. In principle, the push-pull layer may substitute a convolutional layer at any depth in a ConvNet. In practice, however, we experienced that deploying push-pull layers at greater depth considerably changes the training dynamics of the model without improving the results, and requires changing the nominal hyperparameters used to train the original network.
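As an illustration, a basic (non-bottleneck) residual block with its convolutions swapped for push-pull layers could look as follows. This reuses the hypothetical `PushPullConv2d` sketch from Section 3.1 and follows the standard post-activation ResNet layout; the authors' implementation may differ in detail (e.g. strides and channel changes at block boundaries are omitted here).

```python
import torch.nn as nn
import torch.nn.functional as F
# PushPullConv2d: the sketch from Section 3.1

class PushPullBasicBlock(nn.Module):
    """Residual block in which both 3x3 convolutions are push-pull layers."""
    def __init__(self, channels, alpha=1.0):
        super().__init__()
        self.pp1 = PushPullConv2d(channels, channels, kernel_size=3, alpha=alpha)
        self.bn1 = nn.BatchNorm2d(channels)
        self.pp2 = PushPullConv2d(channels, channels, kernel_size=3, alpha=alpha)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.pp1(x)))
        out = self.bn2(self.pp2(out))
        return F.relu(out + x)  # identity shortcut, as in He et al. (2015)
```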
4 EXPERIMENTS
In the following, we refer to a residual network with L layers (He et al., 2015) as ResNet-L, and to a densely connected network with L layers and growth rate k (Huang et al., 2017) as DenseNet-L-k. The suffix ‘-pp’ in a model name indicates that a push-pull layer replaces the first convolutional layer. We use the suffix ‘-b1’ to denote that the concerned model deploys push-pull residual or dense layers in the first block of the network as well.
4.1 DATA SETS
We trained several models on the CIFAR and ImageNet data sets, and tested them on the original test sets and on those provided with the CIFAR-C/P and ImageNet-C/P benchmarks (Hendrycks & Dietterich, 2019). The CIFAR-C and ImageNet-C data sets are composed of 75 test sets of corrupted images, generated by applying 15 types of corruption at 5 levels of severity to the original images. The corruptions are arranged in four categories: noise (Gaussian, shot and impulse noise), blur (defocus, glass, motion and zoom blur), weather (snow, frost, fog, brightness) and digital distortions (contrast, elastic transformation, pixelate, jpeg compression). The perturbations in CIFAR-P and ImageNet-P are temporal in nature. Starting from the corrupted images in CIFAR-C and ImageNet-C, sequences of 30 frames were generated by applying consecutive small perturbations. For each image in the original CIFAR and ImageNet validation sets, ten types of perturbed sequences were created (Gaussian and shot noise, motion and zoom blur, snow, brightness, translate, rotate, tilt, scale). More details about the -C and -P data sets are reported by Hendrycks & Dietterich (2019).
4.2 EVALUATION
We applied the evaluation protocol proposed by Hendrycks & Dietterich (2019). Besides the classification error on CIFAR and ImageNet, we evaluate the corruption error CE, i.e. the classification error achieved on the CIFAR-C and ImageNet-C sets, and the flip probability FP, i.e. the probability that the outcome of the classification changes between consecutive frames, on the sequences in the CIFAR-P and ImageNet-P sets. We compute the mean and relative corruption errors mCE and rCE and the normalized flip probability, i.e. the flip rate FR. For details about these metrics, we refer the reader to Appendix B and to the paper of Hendrycks & Dietterich (2019).
4.3 EXPERIMENTS
CIFAR. We trained several ResNet and DenseNet models and their versions with the push-pull layer, using the images from the CIFAR training set to which we applied light data augmentation, consisting only of center-cropping and random horizontal flipping (Lee et al., 2015). For both (wide-)ResNet- and DenseNet-based models we optimized the parameters with stochastic gradient descent (SGD), with Nesterov momentum of 0.9 and weight decay of 10⁻⁴. We trained ResNet architectures for 200 epochs, with a batch size of 128 and an initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 80, 120 and 160. We trained DenseNet models for 350 epochs with a batch size of 64 and an initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 150, 225 and 300. These hyperparameters are the same as those used to train the original ResNet and DenseNet models. The use of the push-pull layer in the first block of existing architectures does not require changing the hyperparameters used to train the original networks.
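This schedule maps directly onto standard PyTorch components; the sketch below only makes the ResNet hyperparameters explicit and assumes a `model` that has already been built.

```python
import torch.optim as optim

# ResNet training setup on CIFAR (200 epochs, batch size 128).
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                      nesterov=True, weight_decay=1e-4)
# Learning rate divided by 10 at epochs 80, 120 and 160.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer,
                                           milestones=[80, 120, 160], gamma=0.1)
```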
Furthermore, we experimented with enhancing the considered architectures with the anti-aliased sampling layers proposed by Zhang (2019) to improve shift-invariance. We compared the performance of the push-pull layer and the anti-aliased sampling layer, and show that their combination contributes to a further improvement of the robustness of ConvNets. We experimented with anti-aliasing filters of size 2×2, 3×3 and 5×5. We also trained WideResNet models (Zagoruyko & Komodakis, 2016) with and without the push-pull layer, using different data augmentations, namely cutout (Devries & Taylor, 2017), AutoAugment (Cubuk et al., 2018), and cutout plus AutoAugment. We show that the push-pull layer, although already providing the network with intrinsic robustness to corruptions, can be combined with data augmentation to achieve further robustness.
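For reference, cutout itself is only a few lines: it zeroes out a square patch at a random location of each training image. The sketch below is our own illustration, assuming CHW image tensors and the 16×16 patch size commonly used on CIFAR-10.

```python
import torch

def cutout(img: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Mask a random square patch of a CHW image tensor with zeros."""
    _, h, w = img.shape
    cy = torch.randint(h, (1,)).item()   # patch centre; the patch may
    cx = torch.randint(w, (1,)).item()   # partially fall outside the image
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    img[:, y1:y2, x1:x2] = 0.0
    return img
```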
ImageNet. We considered ResNet-18 and ResNet-50 models in which we deployed the push-pull layer as a replacement of the first convolutional layer and of the convolutions inside the first residual block. In the case of ResNet-50, we replaced only the middle convolution of the bottleneck layers. We trained the networks using SGD optimization with Nesterov momentum of 0.9 and weight decay of 5 · 10⁻⁴ for 90 epochs, with an initial learning rate of 0.1 that we decreased by a factor of 10 after 30, 60 and 85 epochs. We used a batch size of 256. For comparison purposes, we trained ResNet-18 and ResNet-50 models augmented with the anti-aliased sampling layers proposed by Zhang (2019) using the same hyperparameters and learning rate schedule that we employed for training the push-pull networks. For ResNet-50 with anti-aliasing, due to GPU memory limitations, we reduced the batch size to 128.
5 RESULTS AND DISCUSSION
5.1 CIFAR
Effect of the push-pull inhibition layer. In Table 1, we report the results of ResNet and DenseNet networks on the CIFAR and CIFAR-C/P test sets. The lower the numbers, the better the performance. We tested architectures with different numbers of layers (and growth rates, for DenseNet). For each architecture, we trained the original model with convolutional layers and two models with the push-pull layer, one (-pp) with the push-pull layer in the first layer only and the other (-b1) with the push-pull layer replacing all convolutions in the first block of the network as well. The measurements mCE, rCE and mFR (see Appendix B) are normalized with respect to the results of the baseline AlexNet.
The push-pull layer contributes to a substantial improvement of the robustness of the networks to corruptions and perturbations of the input images. Its effect, especially when it is deployed in the first block of existing ConvNets, is noticeable, with a reduction of the mean corruption error mCE by more than 10% in some cases. The improvement in robustness trades off with a generally small reduction of the classification accuracy on original (clean) test data. The presence of the push-pull layer has an important effect in reducing the relative corruption error rCE, i.e. the average gap between the classification error on clean data (on which the model is trained) and corrupted data. It indicates that models with push-pull inhibition are better at generalizing to data with heavy corruptions. In some cases, the relative error is decreased by more than 30%. The flip rate (mFR), computed on the CIFAR-P sequences, also improved for all models with push-pull layers.
Networks with the push-pull layers generally show an improvement of robustness to corruptions with high-frequency components. For instance, ResNet20-pp and ResNet20-b1 reduced the CE by 10% to 25% on noise (Gaussian, shot, impulse), up to 18% on glass blur, and between 15% and 20% on pixelate and jpeg compression corruptions with respect to that of the original ResNet20. Marginal improvements or lower robustness are obtained on low-frequency corruptions, such as defocus, motion and zoom blur (CE increases of 2% to 6%), and snow, frost and fog (CE changes between −4% and +4%). For detailed results per corruption and perturbation type, we refer the reader to Appendix C.
Inhibition and anti-aliasing layers. In order to further improve the robustness of existing ConvNets, we show that the push-pull inhibition can be used in combination with other techniques. We couple it with the anti-aliased sampling layer proposed by Zhang (2019), which was shown to improve robustness to some image corruptions in addition to re-enforcing the shift-invariance property of ConvNets. It consists of a low-pass filter inserted before the pooling or strided convolutional operations. We used low-pass filters of different sizes, namely 2×2 (Rect-2), 3×3 (Tri-3) and 5×5 (Bin-5). Figure 3 illustrates the results achieved by (Wide)ResNet models extended with the push-pull inhibition and anti-aliased sampling layers. In many cases the combined effects of the push-pull and anti-aliasing layers lowered either the corruption error CE or the flip probability FP. The Rect-2 filter, in particular, is effective on all models (with and without the push-pull layer), unlike the Tri-3 and Bin-5 filters, which have larger sizes and in some cases worsen the robustness of the networks. This is due to the relatively small size of the images in the CIFAR data set with respect to the low-pass filters, which overly smooth the intermediate response maps in the ConvNets. Models with anti-aliasing filters achieved less reduction of the corruption error than inhibition-augmented ConvNets (up to 10% on noise and around 2% on motion blur), but perform better (> 6%) than networks with inhibition on the translate perturbation.
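For context, the anti-aliased sampling of Zhang (2019) amounts to a fixed per-channel low-pass convolution applied before subsampling. A minimal sketch follows, using the binomial filter taps Rect-2 = [1, 1], Tri-3 = [1, 2, 1] and Bin-5 = [1, 4, 6, 4, 1]; the padding scheme is simplified with respect to the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Low-pass filter followed by subsampling (Zhang, 2019), sketched."""
    TAPS = {2: [1., 1.], 3: [1., 2., 1.], 5: [1., 4., 6., 4., 1.]}

    def __init__(self, channels, filt_size=2, stride=2):
        super().__init__()
        a = torch.tensor(self.TAPS[filt_size])
        k = torch.outer(a, a)                       # separable 2-D filter
        k = (k / k.sum()).repeat(channels, 1, 1, 1)  # (C, 1, f, f), depthwise
        self.register_buffer('kernel', k)            # fixed, not trained
        self.stride, self.groups = stride, channels
        self.pad = (filt_size - 1) // 2

    def forward(self, x):
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=self.pad, groups=self.groups)
```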
Inhibition and data augmentation. We show that the embedding of the push-pull layer within existing architectures can be combined with data augmentation techniques to further improve the model robustness to different corruptions. In Table 2, we report the robustness results (CE and FR) of WideResNet models (with and without push-pull inhibition) trained with data augmentation techniques. We performed experiments with AutoAugment, cutout and a combination of them. The cutout augmentation does not increase the robustness of the models to input corruptions, while AutoAugment contributes to a noticeable reduction of the corruption error. This is in line with the findings of Yin et al. (2019). In all cases, inhibition layers further increase the robustness of networks trained with data augmentation. The best CE and FR values are obtained by models that deploy the push-pull layer in the first layer and in the first residual block, and are trained using AutoAugment plus cutout augmentations.
5.2 IMAGENET
In Table 3, we show the results achieved by ResNet models on the ImageNet validation set and on ImageNet-C/P benchmarks. The mCE and FR values are normalized with respect to the results of the baseline models ResNet-18 and ResNet-50, respectively.
On one hand, we observed a general decrease of performance of fine-tuned ResNet-18 models with push-pull and anti-aliasing layers on the ImageNet-C data set. On the other hand, the push-pull layer improves the robustness of ResNet-50 on the corruptions in ImageNet-C and on the perturbations in ImageNet-P. The performance of the ResNet models extended with anti-aliasing layers decreases on the ImageNet-C data set with respect to that of the original ResNet model, while showing improved robustness to the perturbations of the ImageNet-P data set. Also for the ImageNet-C/P benchmarks, we noticed that the improvements of robustness achieved by the models with push-pull inhibition are directed towards high-frequency corruptions. Nonetheless, the complementary robustness shown by ResNet-50 architectures with push-pull inhibition and anti-aliasing layers suggests that they can be jointly deployed in existing models, as we also observed in the CIFAR experiments.
5.3 DISCUSSION
The push-pull inhibition component that we implemented into ConvNets allows for a sharper detection of features of interest in cases when the input signal is corrupted. In neuroscience studies, it was found that, while neurons that exhibit push-pull inhibition achieve activations to preferred stimuli similar to those of neurons without such inhibition, the suppression of their activations in noisy areas results in a more pronounced distinction of patterns of interest from others, and in a sparser representation (Li et al., 2012). These may be considered two desirable properties in machine learning models. In Figure 4, we show the histograms of the weights learned in convolutional and push-pull layers of several models. The push-pull layer acts as a regularizer and allows generally sparser representations to be learned. We conjecture that one effect of the push-pull layer is the learning of smoother discriminant mapping functions due to sparser representations. This contributes to the increased generalization shown on corrupted data (Srivastava et al., 2014). While the resulting models may fit training data slightly less well than models without push-pull inhibition, they generalize better to unknown corruptions and perturbations.
The push-pull layer contributes to the improvement of the robustness of a particular architecture to input corruptions, with negligible reduction of classification accuracy on the original data. When classification systems are deployed in real-world tasks, where the acquisition of data is subject to corruptions due to sensor uncertainty/deterioration or to changing environments, it is desirable to provide them with mechanisms that can effectively deal with such degradation. From a computational-cost point of view, during the training stage the pull kernels are derived at each iteration by upsampling and negating the corresponding push kernels, which slows down the training process somewhat. On an Nvidia V100 GPU, one training iteration with a batch of 256 ImageNet images takes 0.78, 0.85 and 0.95 seconds for ResNet50, ResNet50-pp and ResNet50-b1, respectively. In the testing stage, the pull kernels are loaded only once, when the model is initialized. Extra computations are due only to the processing of the pull component. When the push-pull layer is used only in the first layer of the network or in the first block, the difference in computations is negligible. For inference, the processing takes 0.66, 0.67 and 0.63 seconds, respectively. The lowest test time, that of ResNet50-b1, is attributable to the sparser representation learned in the push-pull layer.
6 CONCLUSIONS
We demonstrated that the inclusion of a response inhibition mechanism, inspired by the push-pull neurons of the visual system of the brain, into existing ConvNets improves their robustness to corruptions and perturbations of the input that are not present in the training data. The improvement is mainly attributable to the filters in the push-pull layer, which are less sensitive to noise and have a regularization effect that makes the network learn a sparser representation. We carried out experiments on the CIFAR-C/P and ImageNet-C/P data sets, and the results that we achieved (a reduction of more than 10% in the error on corrupted and perturbed data, and of 1% in the corruption error for fine-tuned networks) demonstrate the effectiveness of the push-pull inhibition layer.
A PUSH-PULL RESIDUAL AND DENSE LAYERS
In Figure 5, we show the structure of the composite residual and dense layers augmented with the push-pull inhibition. The architecture of the layers is the same as that of the original residual and dense layers, with the difference that all convolutions are replaced by push-pull filters.
B PERFORMANCE METRICS
We denote by $E^M$ the classification error of a model $M$ on the original clean (without perturbations) test set. Let us denote by $E^M_{c,s}$ the corruption error achieved by the model $M$ on images with corruption $c \in C$ and severity level $s$, and by $E^M_c$ the average error achieved across all severity levels of a given corruption $c$.
We compute the mean corruption error mCE as the average of the normalized errors achieved on all corruptions and severity levels. We use the error $E^{baseline}_{c,s}$ achieved by a baseline network on the set of data with corruption $c$ and severity $s$ as a normalization factor for the corruption error:
$$mCE^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} E^M_{c,s}}{\sum_{s=1}^{5} E^{baseline}_{c,s}} \qquad (2)$$
As proposed by Hendrycks & Dietterich (2019), we also evaluate the degradation of the classifier performance on corrupted data with respect to the results achieved on clean data and compute the relative mean corruption error rCE. It measures the relative gap between the error on clean and corrupted data and it is computed as:
$$rCE^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} \left( E^M_{c,s} - E^M \right)}{\sum_{s=1}^{5} \left( E^{baseline}_{c,s} - E^{baseline} \right)} \qquad (3)$$

As a measure of performance on the CIFAR-P and ImageNet-P test sets, we compute the flip probability $FP$. It measures the probability that the outcome of the classification changes between consecutive frames of a sequence $S$ due to a perturbation $p$. Formally, it is defined as:
$$FP^M_p = \mathbb{P}_{x \sim S}\left(f(x_j) \neq f(x_{j-1})\right) \qquad (4)$$

where $x_j$ is the $j$-th frame of the perturbation sequence. Similarly to the mCE, the flip rate $FR^M_p = FP^M_p / FP^{baseline}_p$ is the normalized version of the $FP$.
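These three metrics translate directly into code. The sketch below is illustrative only; the data layout (per-corruption lists of five per-severity errors) is our own choice for the example.

```python
import numpy as np

def mce(errors, baseline):
    """Eq. (2): mean corruption error, normalized per corruption by a
    baseline model. errors/baseline: dicts corruption -> 5 severity errors."""
    return np.mean([sum(errors[c]) / sum(baseline[c]) for c in errors])

def rce(errors, clean, baseline, baseline_clean):
    """Eq. (3): relative mCE, the baseline-normalized gap between the
    corrupted and clean errors."""
    return np.mean([sum(e - clean for e in errors[c])
                    / sum(e - baseline_clean for e in baseline[c])
                    for c in errors])

def flip_probability(preds):
    """Eq. (4): fraction of consecutive frames whose prediction changes."""
    p = np.asarray(preds)
    return np.mean(p[1:] != p[:-1])
```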
C DETAILED RESULTS
C.1 RESULTS PER CORRUPTION TYPE
In Table 4, we show the detailed results achieved on the corruption sets of the CIFAR-C data set by networks with and without push-pull inhibition. We also report the results achieved by networks that deploy the anti-aliased sampling filters of Zhang (2019). The push-pull filters are robust to high-frequency corruptions and can be effectively combined with anti-aliasing filters.
In Table 5, we report detailed results per corruption type achieved by WideResNet models for which the convolutions in the first layer and in the first residual block were replaced by push-pull filters, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor,
2017) and AutoAugment (Cubuk et al., 2018) techniques. The use of the push-pull filters can be combined with data augmentation to further improve the robustness of existing models.
In Table 6, we report the detailed classification error achieved per corruption type on ImageNet-C by models that deploy the push-pull layers, in comparison with those achieved by models that use the anti-aliased sampling module proposed by Zhang (2019). The push-pull layers provide robustness to high-frequency corruptions, which is more evident in the case of the ResNet-50 models.
C.2 RESULTS PER PERTURBATION TYPE
We report in Table 7 the detailed performance results, in terms of flip probability, per perturbation type achieved on the CIFAR-P data set by ResNet models of different depths that deploy the push-pull layers, the anti-aliasing layers, or a combination of them. The flip probability measures the likelihood that a model changes its predictions on consecutive frames of a video to which an incremental corruption is applied. The push-pull inhibition layers contribute to a substantial improvement of stability against high-frequency components, and benefit from the effect of the anti-aliasing filters on the translate perturbation.
In Table 8 we report detailed flip probability results per perturbation type achieved by WideResNet models that deploy push-pull filters in the first layer and in the first residual block to replace the original convolutions, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor, 2017) and AutoAugment (Cubuk et al., 2018) techniques. The push-pull inhibition and the different data augmentation techniques provide complementary contributions to the improvement of the model robustness and stability to perturbations. This demonstrates that the push-pull layer can be deployed in existing architectures and can be combined with other approaches to further improve the classification performance.
In Table 9 we report the detailed flip probability results achieved by ResNet models augmented with push-pull inhibition filters in comparison to those achieved by models that deploy anti-aliasing layers. The push-pull filters, especially in the case of ResNet-50, contribute to the improvement of the stability of the network prediction on consecutively perturbed frames and are particularly effective on high-frequency corruptions.

Review Questions
1. What is the main contribution of the paper regarding image corruptions robustness enhancement?
2. What are the strengths and weaknesses of the proposed approach, particularly in its performance on different datasets?
3. Do you have any concerns or suggestions regarding the experiments and comparisons made in the paper?
4. How does the push-pull layer work, and how does it differ from other methods that improve generalization to corrupted images?
5. Can you explain the choice of alpha=1 and the effect of h on the results?
6. How do the results of the ablation models in Table 3 relate to the push-pull layer, and what can be concluded from them?
7. Are there any limitations or potential drawbacks to the proposed method, and how might they be addressed in future research?

Review
Strengths:
Interesting idea taken from biological vision
Simple approach that does not increase number of parameters
Clear improvement on CIFAR-C/P
Weaknesses:
The proposed push-pull layer does not lead to an improvement on ImageNet-C/P
Experiments on ImageNet are not described well
Several control models are missing
Details on major issues (all of which would need to be addressed in a convincing manner for my score to change):
(1) The effect due to push-pull on ImageNet-C is negligible. A 1% improvement is tiny compared to typical effects of other methods on ImageNet-C. This failure to generalize to ImageNet undermines the significance of the results substantially. At the very least I would have expected a discussion of this issue. For people to get excited about this method despite the failure to scale to ImageNet, the paper would have to make a very convincing case why push-pull is still the right direction to be pursued and how it might eventually be scaled successfully to large-scale datasets like ImageNet.
(2) Improvements in FR on ImageNet-P (Table 3) could be caused exclusively by anti-aliasing, not by push-pull. It is not clear whether the models with anti-aliasing that are listed in this table include the push-pull layer (and, if so, which version: -pp or -b1?) or not. If they include it, an ablation is missing where the authors use only anti-aliasing but not push-pull. If they don't include it, I'm not sure what the relevance of these three models is in the context of the present paper. These questions would have to be clarified in the text and the ablation models should be added to Table 3.
(3) As the authors state, the push-pull component acts as a bandpass filter, which seems to be in line with the generalization improvement being high for corruptions/perturbations with high-frequency components. To what extent are the improvements presented in the paper due to the specific design of the push-pull layers, and to what extent could they have been achieved by just bandpass-filtering the corrupted input images, using the original architectures? This would be an important control.
Minor comments
Eq. (1): How was \alpha=1 chosen? Did you experiment with different values for \alpha? Also, how does the value of h affect the results?
Fig. 2 would benefit from a color bar in the cross-correlation plots, as it is unclear if white or black corresponds to a high value
Fig. 3: The representation of the individual results makes it hard to assess the overall effect. Is there a net improvement due to the additional anti-aliasing? It looks like for many, but not all, combinations of architectures and metrics the Rect-2 version looks better, but it's not clear by how much on average and what the effect is for the other two filter versions.
What does CE in Table 3 refer to? In Appendix B, only mCE and rCE are defined
Color bar on the kernels in Fig. 2 to better compare the push / pull kernels
Understandability of Figures 3, 4 and Tables 1, 2, 3 would be improved by introducing all abbreviations used in the captions (e.g. -pp, -b1, WRN-28-10-b1, Tri-3, etc.)
Section 3.1: “The decision of having pull kernels larger than the push kernels is motivated by neurophysiological findings (Li et al., 2012).” Since the reader of this paper might be not familiar with neurophysiological literature, you could expand a bit on the motivation of using larger pull kernels
Eq. 4 should read f(x_j) instead of f(j)
ICLR | Title
Inhibition-augmented ConvNets
Abstract
Convolutional Networks (ConvNets) suffer from insufficient robustness to common corruptions and perturbations of the input, unseen during training. We address this problem by including a form of response inhibition in the processing of the early layers of existing ConvNets and assess the resulting representation power and generalization on corrupted inputs. The considered inhibition mechanism consists of a non-linear computation that is inspired by the push-pull inhibition exhibited by some neurons in the visual system of the brain. In practice, each convolutional filter (push) in the early layers of conventional ConvNets is coupled with another filter (pull), which responds to the preferred pattern of the corresponding push filter but of opposite contrast. The rectified responses of the push and pull filter pairs are then combined by a linear function. This results in a representation that suppresses responses to noisy patterns (e.g. texture, Gaussian, shot, distortion, and others) and accentuates responses to preferred patterns. We deploy the layer into existing architectures, (Wide-)ResNet and DenseNet, and propose new residual and dense push-pull layers. We demonstrate that ConvNets that embed this inhibition into the initial layers are able to learn representations that are robust to several types of input corruptions. We validated the approach on the ImageNet and CIFAR data sets and their corrupted and perturbed versions, ImageNet-C/P and CIFAR-C/P. It turns out that the push-pull inhibition enhances the overall robustness and generalization of ConvNets to corrupted and perturbed input data. Besides the improvement in generalization, notable is the fact that ConvNets with push-pull inhibition have a sparser representation than conventional ones without inhibition. The code and trained models will be made available.
1 INTRODUCTION
After AlexNet was proposed, several other more sophisticated Convolutional Networks (ConvNets), e.g. VGGNet (Simonyan & Zisserman, 2014), GoogleNet (Szegedy et al., 2015), ResNet (He et al., 2015), DenseNet (Huang et al., 2017), consecutively achieved lower error rates on classification benchmarks such as CIFAR (Krizhevsky, 2009), SVHN (Netzer et al., 2011) and ImageNet (Krizhevsky et al., 2012). The witnessed reduction of the classification error, however, was not accompanied by a similar increase of generalization and robustness to common corruptions or perturbations in the test images. When test images contain artefacts that are not present in the training data (e.g. shot noise, JPEG compression, blur), the performance of SOTA networks, like ResNet or DenseNet, degrade relatively more than that of AlexNet (Hendrycks & Dietterich, 2019).
Robustness of DeepNets and ConvNets is gaining attention from researchers in machine learning and computer vision. One point of analysis concerns adversarial attacks that consist of very small, visually imperceptible perturbations of the input signals such that confuse the classification model (Akhtar & Mian, 2018). Several adversarial attacks were proposed based on the knowledge of the classification models (Goodfellow et al., 2015) or on iterative methods (Kurakin et al., 2016; Madry et al., 2018). Class-specific and universal black box attacks (Moosavi-Dezfooli et al., 2017) were also proposed. As new types of adversarial perturbations are designed, defense algorithms need to be developed (Lu et al., 2017; Metzen et al., 2017).
In contrast to adversarial perturbations, common corruptions and perturbations are modifications of the input signals due to typical artefacts generated by sensor noise, illumination changes, transformations determined by changes of camera perspective (e.g. rotations or elastic transformations), quality loss due to compression, among others. These can be considered as average cases of ad-
versarial attacks and are very common in computer vision tasks. In this study, we focus on the robustness of ConvNets to common corruptions and perturbations and demonstrate how the use of inhibition-augmented filters increases the robustness of existing ConvNet models. We use a layer that implements a local-response suppression mechanism (Strisciuglio et al., 2020), named pushpull, to replace some convolutional layers in existing ConvNets. The design of this layer was inspired by neurophysiological evidence of the push-pull inhibition exhibited by some neurons in the early part of the visual system of the brain (Hirsch et al., 1998). Lauritzen & Miller (2003) stated that ‘push-pull inhibition [...] acts to sharpen spatial frequency tuning, [...] and increase the stability of cortical activity’. These neurons have inhibitory interneurons with receptive fields of opposite polarity. The interneurons suppress the responses of the associated neurons in noisy local patterns; that is they pull the push responses. From an engineering point of view, the push-pull inhibition can be considered as a band-pass filter. A computational model of the push-pull inhibition was introduced in image processing operators for contour and line (Azzopardi et al., 2014; Strisciuglio et al., 2019) detection that are robust to various types of noise. Early results on augmenting ConvNets with push-pull inhibition were reported in (Strisciuglio et al., 2020), where only the first convolutional layer of existing networks was replaced and preliminary experiments on the MNIST and CIFAR images were reported.
We deploy a push-pull layer in the state-of-the-art (wide-)ResNet and DenseNet architectures by replacing the convolutional layer in the entry layer of the networks and all convolutional layers inside the first residual or dense block. The number of layers and parameters of the modified networks remains the same of their original counterpart. We train several models with and without the push-pull layers, using the images from ImageNet and CIFAR training sets, and test them on the ImageNetC/P and CIFAR-10-C/P benchmark test sets (Hendrycks & Dietterich, 2019), which contain images with several types of corruption and perturbation. Furthermore, we combine the push-pull layer with other techniques to improve the robustness performance of ConvNets, namely the data augmentation techniques AutoAugment (Cubuk et al., 2018) and cutout (Devries & Taylor, 2017), and low-pass anti-aliasing filters (Zhang, 2019), of which we provide more details in Section 2. We show how combined data- and architecture-related actions have jointly a positive effect to further improve the robustness and generalization performance of existing models.
Our contributions are twofold: a) new residual and dense layers in which push-pull filters replace the original convolutions, b) a thorough analysis of the impact of the push-pull inhibition on the robustness of existing models, and of its combination with other strategies for model robustness.
The rest of the paper is organized as follows. We discuss related works in Section 2 and provide details of the push-pull layer and modified residual and dense layers in Section 3. We report the experiments and results in Section 4 and Section 5. Finally, we draw conclusions in Section 6.
2 RELATED WORKS
One common approach to increase the generalization of existing models is data augmentation. It provides a mechanism to increase the robustness to certain transformations included in the training process. One type of data augmentation, namely added Gaussian noise, does not guarantee robustness to other types of noise, e.g. shot noise (Zheng et al., 2016). In Vasiljevic et al. (2016), for instance, it was demonstrated that using one blurring type for data augmentation during training does not result in a model that is robust to other types of blurring. Moreover, in Geirhos et al. (2018), it was observed that heavy data augmentation can cause underfitting. Devries & Taylor (2017) proposed a technique named cutout that masks out random square patches of the training images and increases network performance. Although cutout reduces the classification error on benchmark test sets, we noticed that it does not improve robustness to common corruptions and perturbations. Lopes et al. (2019) proposed PatchGaussian, which combines the regularization abilities of cutout with data augmentation using Gaussian noise. Randomly located patches are selected from the input images and corrupted with Gaussian noise. Autoaugment (Cubuk et al., 2018; Lim et al., 2019) is a very effective strategy to learn an optimal set of augmentation policies from training data. Together with increased classification rate on benchmark test sets, it was shown to improve the robustness of the concerned models to various corruptions and perturbations (Yin et al., 2019). Recently, AugMix was proposed. It uses diverse augmentations and a Jensen-Shannon divergence consistency loss function to train models that have better generalization and uncertainty estimates (Hendrycks et al., 2020).
Data augmentation is in some cases very powerful. However, these techniques work around architectural or structural weaknesses of ConvNets. For example, using high-frequency noise for data augmentation (e.g. Gaussian noise) makes the network focuses on low frequency features that are discriminant instead of becoming explicitly robust to high-frequency corruptions (Yin et al., 2019). In order to improve the intrinsic robustness of ConvNets, it would be preferable to deploy architectural elements that are able to better detect the features of interest also when the input data is corrupted. For instance, common downsampling methods, like max-pooling and strided convolutions, introduce aliasing as they ignore the sampling theorem of Nyquist. Zhang (2019) used a low-pass filter before down-sampling and showed that it improves classification results on ImageNet and robustness to common corruptions of the input images. A layer of Gabor filters was recently shown to increase network robustness to adversarial attacks (Pérez et al., 2020).
3 PUSH-PULL INHIBITION IN CONVNETS
A ConvNet constitutes of L layers, each of which performs a non-linear transformation H(x) of the input x. In some architectures, H(x) is a composite function of convolutions, batch normalization (BN), rectifier linear unit (ReLU) and pooling. For example, a residual layer contains two convolutions, batch normalizations, ReLUs and one identity connection. We use a composite layer, called push-pull, that consists of two convolutions with kernels of different size and two ReLU functions.
3.1 PUSH-PULL LAYER
The response of the push-pull layer (Figure 1) is defined as the linear combination of the rectified responses of two convolutions of the input signal, one with an excitatory (push) kernel k(·) and one with an inhibitory (pull) kernel −k̂(·) (Strisciuglio et al., 2020). The responses of the push and pull components are rectified by means of the ReLU function, which introduces non-linearity, and combined: the pull response is subtracted from (i.e. it suppresses) the push response. In the forward step of the learning algorithm, the weights of each pull kernel are computed by upsampling and negating the weights of the corresponding push kernel. Since the pull kernels are directly inferred from the push kernels, the number of trainable parameters does not change. In the backward step, however, the implementation of the push-pull layer ensures that the gradient is back-
propagated to both push and pull components. While only the weights of the push kernels are updated, the effect of the gradient back-propagation incorporates the pull responses. We compute the response R of a push-pull layer to an input x as:
R(x) = Θ (x ? k)− αΘ ( x ? (−k̂h↑) ) (1)
where Θ(·) is the ReLU function, k is the push kernel and k̂h↑ is its upsampled (by a factor h) and negated version, i.e. the pull kernel. We set h = 2, that is the pull kernel has double width and height of the push kernel. The parameter α is a weighting factor for the inhibition response, which we set to 1 (i.e. same relevance as the push response). The decision of having pull kernels larger than the push kernels is motivated by neurophysiological findings (Li et al., 2012).
The combined effect of the push and pull kernels results in an increased robustness to the detection of features of interest in corrupted input signals. Figure 2 illustrates an example of the response maps of a conventional convolution filter and those of a push-pull filter on images corrupted with noise. The response maps of the push-pull filter on corrupted images correlate better with that of the clean image in comparison to the corresponding response maps of the convolution filter. For this illustration we selected a convolutional and a push kernel that have the same shape and whose parameters are learned in the first layer of a ResNet model, without and with push-pull layers
respectively, on the CIFAR data set. This allows to better and more directly demonstrate the effect that the pull inhibitory component has on the quality of the computed feature maps. The spatial frequency selectivity of the push-pull filter is sharper than that of a conventional convolutional layer, as it focuses on the frequency of interest and suppresses those outside a certain learned sub-band. In this way, noise induced artefacts can be filtered out and clearer feature maps are computed for further processing in subsequent layers of the network.
3.2 PUSH-PULL IN RESIDUAL AND DENSE LAYERS
We extended the implementation of the residual and dense composite layers, originally proposed by He et al. (2015) and Huang et al. (2017), respectively. In both cases, we substituted the convolutional layers with push-pull layers, which resulted in the push-pull residual layer and the push-pull dense layer.
The push-pull inhibition is a phenomenon that is exhibited by certain neurons in the early visual system. Evidence of it has been observed in area V1 of the visual cortex (Lauritzen & Miller, 2003), which can be compared to the early layers of a ConvNet. Thus, motivated by neuro-scientific findings, we deploy the modified push-pull residual and dense layers in the first block of the considered architectures. In principle, the push-pull layer may be used to substitute a convolutional layer at any depth in a ConvNet. In practice, however, we experienced that deploying push-pull layers at deeper layers considerably changes the training dynamics of the model, not contributing to result improvement, and the nominal hyperparameters used to train the original network need to be changed.
4 EXPERIMENTS
In the following, we refer as ResNet-L to a residual network with L layers (He et al., 2015) and as DenseNet-L-k to a densely connected network with L layers and growing factor k (Huang et al., 2017). The notation ‘-pp’ in the model name indicates the use of the push-pull layer that replaces the first convolutional layer. We use the suffix ‘-b1’ to denote that the concerned model deploys push-pull residual or dense layers in the first block of the network as well.
4.1 DATA SETS
We trained several models on the CIFAR and ImageNet data sets, and tested them on the original test sets and on those provided with the CIFAR-C/P and ImageNet-C/P benchmarks (Hendrycks & Dietterich, 2019). The CIFAR-C and ImageNet-C data sets are composed of 75 test sets of corrupted images, generated by applying 15 types of corruption with 5 levels of severity, to the original images. The corruptions are arranged in four categories: noise (Gaussian, shot and impulse noise), blur (defocus, glass, motion and zoom blur), weather (snow, frost, fog, brightness) and digital distortions (contrast, elastic transformation, pixelate, jpeg compression). The perturbations in CIFAR-P and ImageNet-P have a character of temporality. Starting from the corrupted images in CIFAR-C and ImageNet-C, sequences of 30 frames were generated by applying consecutive small perturbations. For each image in the original CIFAR and ImageNet validation sets, ten types of perturbed sequences
were created (Gaussian and shot noise, motion and zoom blur, snow, brightness, translate, rotate, tilt, scale). More details about the -C and -P data sets are reported by Hendrycks & Dietterich (2019).
4.2 EVALUATION
We applied the evaluation protocol proposed by Hendrycks & Dietterich (2019). Besides the classification error on the CIFAR and ImageNet, we evaluate the corruption error C, i.e. the classification error achieved on the CIFAR-C and ImageNet-C sets, and the flip-probability FP , that is the probability that the outcome of the classification changes between consecutive frames, on the sequences in the CIFAR-P and ImageNet-P sets. We compute the mean and relative corruption errors mCE and rCE and the normalized flip probability, i.e. the flip rate FR. For details about these metrics, we refer the reader to Appendix B and to the paper of Hendrycks & Dietterich (2019).
4.3 EXPERIMENTS
CIFAR. We trained several ResNet and DenseNet models and their versions with the push-pull layer, using the images from the CIFAR training set to which we applied a light data augmentation, consisting only of center-cropping and random horizontal flipping (Lee et al., 2015). For both (wide)ResNet- and DenseNet-based models we optimized the parameters with stochastic gradient descent (SGD), Nesterov momentum equals to 0.9 and weight decay equals to 10−4. We trained ResNet architectures for 200 epochs, with batch size equals to 128 and initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 80, 120 and 160. We trained DenseNet models for 350 epochs with batch size of 64 and initial learning rate of 0.1, which we decreased by a factor of 10 at epochs 150, 225 and 300. These hyperparameters are the same used to train the original ResNet and DenseNet models. The use of the push-pull layer in the first block of existing architectures does not require to change the hyperparameters used to train the original networks.
Furthermore, we experimented with enhancing the considered architectures with the anti-aliased sampling layers proposed by Zhang (2019) for improved shift-invariance. We compared the performance of the push-pull layer and the anti-aliased sampling layer, and show that their combination contributes to a further improvement of the robustness of ConvNets. We experimented with anti-aliasing filters of size 2×2, 3×3 and 5×5. We also trained WideResNet models (Zagoruyko & Komodakis, 2016) with and without the push-pull layer, using different data augmentations, namely cutout (Devries & Taylor, 2017), AutoAugment (Cubuk et al., 2018), and cutout plus AutoAugment. We show that the push-pull layer, although it already provides the network with intrinsic robustness to corruptions, can be combined with data augmentation to achieve further robustness.
ImageNet. We considered ResNet-18 and ResNet-50 models in which we deployed the push-pull layer as a replacement of the first convolutional layer and of the convolutions inside the first residual block. In the case of ResNet-50, we replaced only the middle convolution of the bottleneck layers. We trained the networks using SGD optimization with Nesterov momentum of 0.9 and weight decay of 5·10^-4 for 90 epochs, with an initial learning rate of 0.1 that we decreased by a factor of 10 after 30, 60 and 85 epochs. We used a batch size of 256. For comparison purposes, we trained ResNet-18 and ResNet-50 models augmented with the anti-aliased sampling layers proposed by Zhang (2019), using the same hyperparameters and learning rate schedule that we employed for training the push-pull networks. For ResNet-50 with anti-aliasing, due to GPU memory limitations, we reduced the batch size to 128.
5 RESULTS AND DISCUSSION
5.1 CIFAR
Effect of the push-pull inhibition layer. In Table 1, we report the results of ResNet and DenseNet networks on the CIFAR and CIFAR-C/P test sets; lower numbers indicate better performance. We tested architectures with different numbers of layers (and growth rates, for DenseNet). For each architecture, we trained the original model with convolutional layers and two models with the push-pull layer: one (-pp) with the push-pull layer in the first layer only, and the other (-b1) with the push-pull layer replacing all convolutions in the first block of the network as well. The measurements mCE, rCE and mFR (see Appendix B) are normalized with respect to the results of the baseline AlexNet.
The push-pull layer contributes to a substantial improvement of the robustness of the networks to corruptions and perturbations of the input images. Its effect, especially when it is deployed in the first block of existing ConvNets, is noticeable, with a reduction of the mean corruption error mCE by more than 10% in some cases. The improvement in robustness trades off against a generally small reduction of the classification accuracy on the original (clean) test data. The presence of the push-pull layer has an important effect in reducing the relative corruption error rCE, i.e. the average gap between the classification error on clean data (on which the model is trained) and corrupted data. It indicates that models with push-pull inhibition generalize better to data with heavy corruptions. In some cases, the relative error is decreased by more than 30%. The flip rate (mFR), computed on the CIFAR-P sequences, also improved for all models with push-pull layers.
Networks with the push-pull layers generally show an improvement of robustness to corruptions with high-frequency components. For instance, ResNet20-pp and ResNet20-b1 reduced the CE by 10% to 25% on noise (Gaussian, shot, impulse), up to 18% on glass blur, and between 15% and 20% on pixelate and JPEG compression corruptions with respect to the original ResNet20. Marginal improvements or lower robustness are obtained on low-frequency corruptions, such as defocus, motion and zoom blur (+2% to +6% of CE), and snow, frost and fog (between −4% and +4% of CE). For detailed results per corruption and perturbation type, we refer the reader to Appendix C.
Inhibition and anti-aliasing layers. In order to further improve the robustness of existing ConvNets, we show that the push-pull inhibition can be used in combination with other techniques. We couple it with the anti-aliased sampling layer proposed by Zhang (2019), which was shown to improve robustness to some image corruptions while re-enforcing the shift-invariance property of ConvNets. It consists of a low-pass filter inserted before the pooling or strided convolution operations. We used low-pass filters of different sizes, namely 2×2 (Rect-2), 3×3 (Tri-3) and 5×5 (Bin-5). Figure 3 illustrates the results achieved by (Wide)ResNet models extended with the push-pull inhibition and anti-aliased sampling layers. In many cases the combined effects of the push-pull and anti-aliasing layers resulted in lowering either the corruption error CE or the flip probability FP. The Rect-2 filter, in particular, is effective on all models (with and without the push-pull layer), unlike the Tri-3 and Bin-5 filters, which have larger sizes and in some cases worsen the robustness of the networks. This is due to the relatively small size of the images in the CIFAR data set with respect to the low-pass filters, which overly smooth the intermediate response maps in the ConvNets. Models with anti-aliasing filters achieved a smaller reduction of the corruption error than inhibition-augmented ConvNets (up to 10% on noise and around 2% on motion blur), but perform better (> 6%) than networks with inhibition on the translate perturbation. A sketch of such an anti-aliased sampling layer is given below.
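For reference, the anti-aliased sampling mechanism can be sketched as a fixed depthwise low-pass filter followed by strided subsampling. The example below uses the Tri-3 kernel (the outer product of [1, 2, 1]); the class name and integration details are our assumptions, not Zhang's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Fixed low-pass (Tri-3) filter applied depthwise before subsampling."""
    def __init__(self, channels, stride=2):
        super().__init__()
        k = torch.tensor([1.0, 2.0, 1.0])
        k = torch.outer(k, k)
        k = k / k.sum()  # normalized 3x3 binomial kernel
        self.register_buffer('kernel', k.expand(channels, 1, 3, 3).clone())
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        # Depthwise convolution: blur each channel, then subsample via stride.
        return F.conv2d(x, self.kernel, stride=self.stride,
                        padding=1, groups=self.channels)
```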
Inhibition and data augmentation. We show that the embedding of the push-pull layer within existing architectures can be combined with data augmentation techniques to further improve the model robustness to different corruptions. In Table 2, we report the robustness results (CE and FR) of WideResNet models (with and without push-pull inhibition) trained with data augmentation techniques. We performed experiments with AutoAugment, cutout and a combination of the two. The cutout augmentation does not increase the robustness of the models to input corruptions, while AutoAugment contributes to a noticeable reduction of the corruption error. This is in line with the findings of Yin et al. (2019). In all cases, inhibition layers contribute to further increasing the robustness of networks trained with data augmentation. The best CE and FR values are obtained by models that deploy the push-pull layer in the first layer and in the first residual block, and are trained using AutoAugment plus cutout augmentations.
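For reference, cutout amounts to zeroing a random square patch of each training image. Below is a minimal sketch; the 16-pixel patch size is the value commonly used for CIFAR-10 and is an assumption here.

```python
import torch

def cutout(img, size=16):
    """Zero out a random size x size square of a CHW image tensor
    (in the spirit of DeVries & Taylor, 2017)."""
    _, h, w = img.shape
    cy = torch.randint(h, (1,)).item()
    cx = torch.randint(w, (1,)).item()
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.clone()
    out[:, y1:y2, x1:x2] = 0.0
    return out
```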
5.2 IMAGENET
In Table 3, we show the results achieved by ResNet models on the ImageNet validation set and on ImageNet-C/P benchmarks. The mCE and FR values are normalized with respect to the results of the baseline models ResNet-18 and ResNet-50, respectively.
On one hand, we observed a general decrease of performance of fine-tuned ResNet-18 models with push-pull and anti-aliasing layers on the ImageNet-C data set. On the other hand, the push-pull layer improves the robustness of ResNet-50 to the corruptions in ImageNet-C and the perturbations in ImageNet-P. The results of the ResNet models extended with anti-aliasing layers worsen on the ImageNet-C data set with respect to those of the original ResNet model, while showing improved robustness to the perturbations of the ImageNet-P data set. Also on the ImageNet-C/P benchmarks, we noticed that the improvements in robustness achieved by the models with push-pull inhibition are directed towards high-frequency corruptions. Nonetheless, the complementary robustness shown by ResNet-50 architectures with push-pull inhibition and anti-aliasing layers suggests that they can be jointly deployed in existing models, as we also observed in the CIFAR experiments.
5.3 DISCUSSION
The push-pull inhibition component that we implemented in ConvNets allows for a sharper detection of features of interest when the input signal is corrupted. In neuroscience studies, it was found that neurons that exhibit push-pull inhibition achieve activations to preferred stimuli similar to those of neurons without such inhibition, but the suppression of their activations in noisy areas results in a more pronounced distinction between patterns of interest and others, and in a sparser representation (Li et al., 2012). These may be considered two desirable properties in machine learning models. In Figure 4, we show the histograms of the weights learned in convolutional and push-pull layers of several models. The push-pull layer acts as a regularizer and allows the network to learn generally sparser representations. We conjecture that one effect of the push-pull layer is the learning of smoother discriminant mapping functions due to sparser representations. This contributes to the increased generalization shown on corrupted data (Srivastava et al., 2014). While the resulting models may fit the training data slightly less well than models without push-pull inhibition, they generalize better to unknown corruptions and perturbations.
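One simple way to quantify the sparsity trend visible in the weight histograms of Figure 4 is to measure the fraction of near-zero weights in a layer. The helper below is a hypothetical analysis utility (the threshold is our choice), not part of the authors' code.

```python
import torch

def near_zero_fraction(layer, thresh=1e-3):
    """Fraction of weights with magnitude below `thresh` in a layer;
    a rough proxy for the sparsity of the learned representation."""
    w = torch.cat([p.detach().abs().flatten() for p in layer.parameters()])
    return (w < thresh).float().mean().item()
```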
The push-pull layer contributes to the improvement of the robustness of a particular architecture to input corruptions, with a negligible reduction of classification accuracy on the original data. When classification systems are deployed in real-world tasks, where the acquisition of data is subject to corruptions due to sensor uncertainty or deterioration, or to changing environments, it is desirable to provide them with mechanisms that can effectively deal with such degradation. From a computational cost point of view, during the training stage the pull kernels are derived at each iteration by upsampling and negating the corresponding push kernels, which slows down the training process to some extent. On an Nvidia V100 GPU, one training iteration with a batch of 256 ImageNet images takes 0.78, 0.85 and 0.95 seconds for ResNet50, ResNet50-pp and ResNet50-b1, respectively. In the testing stage, the pull kernels are loaded only once, when the model is initialized. Extra computations are due only to the processing of the pull component. When the push-pull layer is used only in the first layer of the network or in the first block, the difference in computation is negligible. For inference, the processing takes 0.66, 0.67 and 0.63 seconds, respectively. The lowest test time of ResNet50-b1 is attributable to the sparser representation learned in the push-pull layers.
6 CONCLUSIONS
We demonstrated that the inclusion of a response inhibition mechanism, inspired by the push-pull neurons of the visual system of the brain, into existing ConvNets improves their robustness to corruptions and perturbations of the input that are not present in the training data. The improvement is mainly attributable to the filters in the push-pull layer, which are less sensitive to noise and have a regularization effect that makes the network learn a sparser representation. We carried out experiments on the CIFAR-C/P and ImageNet-C/P data sets, and the results that we achieved (more than 10% reduction of the error on corrupted and perturbed data, and 1% reduction of the corruption error for fine-tuned networks) demonstrate the effectiveness of the push-pull inhibition layer.
A PUSH-PULL RESIDUAL AND DENSE LAYERS
In Figure 5, we show the structure of the composite residual and dense layers augmented with the push-pull inhibition. The architecture of the layers is the same as that of the original residual and dense layers, with the difference that all convolutions are replaced by push-pull filters. A sketch of the residual variant follows.
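As an illustration, a push-pull residual layer can be written as a standard basic block whose convolutions are swapped for push-pull filters. The sketch below assumes the PushPullConv2d class from the earlier sketch, a stride of 1 and equal input/output width; projection shortcuts are omitted.

```python
import torch.nn as nn

class PushPullBasicBlock(nn.Module):
    """Residual basic block with push-pull filters instead of convolutions.
    Assumes the PushPullConv2d class defined in the earlier sketch."""
    def __init__(self, channels):
        super().__init__()
        self.pp1 = PushPullConv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.pp2 = PushPullConv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.pp1(x)))
        out = self.bn2(self.pp2(out))
        return self.relu(out + x)  # identity shortcut
```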
B PERFORMANCE METRICS
We denote by E^M the classification error of a model M on the original clean (without perturbations) test set. Let us denote by E^M_{c,s} the corruption error achieved by the model M on images with corruption c ∈ C and severity level s, and by E^M_c the average error achieved across all severity levels of a given corruption c.
We compute the mean corruption error mCE as the average of the normalized errors achieved over all corruptions and severity levels. We use the error E^{baseline}_{c,s} achieved by a baseline network on the set of data with corruption c and severity s as a normalization factor for the corruption error:
mCE^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} E^{M}_{c,s}}{\sum_{s=1}^{5} E^{baseline}_{c,s}}    (2)
As proposed by Hendrycks & Dietterich (2019), we also evaluate the degradation of the classifier performance on corrupted data with respect to the results achieved on clean data and compute the relative mean corruption error rCE. It measures the relative gap between the error on clean and corrupted data and it is computed as:
rCE^M = \frac{1}{|C|} \sum_{c=1}^{|C|} \frac{\sum_{s=1}^{5} \left( E^{M}_{c,s} - E^{M} \right)}{\sum_{s=1}^{5} \left( E^{baseline}_{c,s} - E^{baseline} \right)}    (3)
As a measure of performance on the CIFAR-P and ImageNet-P test sets, we compute the flip probability FP. It measures the probability that the outcome of the classification changes between consecutive frames of a sequence S due to a perturbation p. Formally, it is defined as:
FP^{M}_{p} = \mathbb{P}_{x \sim S}\left( f(x_j) \neq f(x_{j-1}) \right)    (4)
where x_j is the j-th frame of the perturbation sequence. Similarly to the mCE, the flip rate FR^M_p = FP^M_p / FP^{baseline}_p is the normalized version of the FP.
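These metrics reduce to a few array operations over pre-computed error tables. The helpers below are a hypothetical implementation of Eqs. (2)-(4), not the benchmark's official evaluation code; they assume the per-corruption, per-severity errors have already been collected.

```python
import numpy as np

def mean_corruption_error(err_model, err_baseline):
    """Eq. (2). err_* have shape (n_corruptions, 5), holding the
    classification error per corruption type and severity level."""
    return np.mean(err_model.sum(axis=1) / err_baseline.sum(axis=1))

def relative_corruption_error(err_model, clean_model, err_baseline, clean_baseline):
    """Eq. (3). clean_* are the scalar errors on the clean test set."""
    num = (err_model - clean_model).sum(axis=1)
    den = (err_baseline - clean_baseline).sum(axis=1)
    return np.mean(num / den)

def flip_probability(preds):
    """Eq. (4). preds has shape (n_sequences, n_frames) with the
    predicted label for each frame of each perturbation sequence."""
    return np.mean(preds[:, 1:] != preds[:, :-1])
```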
C DETAILED RESULTS
C.1 RESULTS PER CORRUPTION TYPE
In Table 4, we show the detailed results achieved on the corruption sets of the CIFAR-C data set by networks with and without push-pull inhibition. We also report the results achieved by networks that deploy the anti-aliased sampling filters of Zhang (2019). The push-pull filters are robust to high-frequency corruptions and can be effectively combined with anti-aliasing filters.
In Table 5, we report detailed results per corruption type achieved by WideResNet models for which the convolutions in the first layer and in the first residual block were replaced by push-pull filters, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor,
2017) and AutoAugment (Cubuk et al., 2018) techniques. The use of the push-pull filters can be combined with data augmentation to further improve the robustness of existing models.
In Table 6, we report the detailed classification error achieved per corruption type on ImageNet-C by models that deploy the push-pull layers, in comparison with those achieved by models that use the anti-aliased sampling module proposed by Zhang (2019). The push-pull layers provide robustness to high-frequency corruptions, which is more evident in the case of the ResNet-50 models.
C.2 RESULTS PER PERTURBATION TYPE
We report in Table 7 the detailed performance results, in terms of flip probability, per perturbation type achieved on the CIFAR-P data set by ResNet models of different depths that deploy the push-pull layers, the anti-aliasing layers, or a combination of them. The flip probability measures the likelihood that a model changes its predictions on consecutive frames of a video to which an incremental corruption is applied. The push-pull inhibition layers contribute to a substantial improvement of stability against high-frequency components, and benefit from the effect of the anti-aliasing filters on the translate perturbation.
In Table 8 we report detailed flip probability results per perturbation type achieved by WideResNet models that deploy push-pull filters in the first layer and in the first residual block to replace the original convolutions, and that are trained using different types of data augmentation, namely the cutout (Devries & Taylor, 2017) and AutoAugment (Cubuk et al., 2018) techniques. The push-pull inhibition and the different data augmentation techniques provide complementary contributions to the improvement of the model robustness and stability to perturbations. This demonstrates that the push-pull layer can be deployed in existing architectures and can be combined with other approaches to further improve the classification performance.
In Table 9 we report the detailed flip probability results achieved by ResNet models augmented with push-pull inhibition filters in comparison to those achieved by models that deploy anti-aliasing layers. The push-pull filters, especially in the case of ResNet-50, contribute to the improvement of the stability of the network predictions on consecutively perturbed frames and are particularly effective on high-frequency corruptions.

Review 1

Questions:
1. What is the focus of the paper regarding deep learning modules?
2. What are the strengths and weaknesses of the proposed push-pull module?
3. Do you have any concerns about how the module affects clean data?
4. How does the reviewer assess the novelty and significance of the work compared to prior research?
5. Are there any questions regarding the implementation and efficiency of the module?
Summary:
Inspired by push-pull circuits in early visual cortex and retina, the authors develop a push-pull module that can be inserted into deep nets. They hypothesize that since push-pull circuits may act as band-pass filters, their module might control for domain shifts caused by noise. They test on clean and corrupted versions of CIFAR and ImageNet and find that the module hurts in evaluations of the former but helps in evaluations of the latter.
I'm impressed by the simple implementation of the module, but as I mention in the weaknesses, I have trouble evaluating the impact of this method since it does worse on clean data. I do not feel like this paper presents a clear direction for how to address robustness beyond throwing different regularizations at the problem. If that is the only way forward, then perhaps there's a need for a systematic evaluation of regularizations wrt robustness.
Strengths:
The implementation looks to be fairly efficient. There's a lot of evaluations of different corruption datasets. The figures are pretty.
Weaknesses:
"These can be considered as average cases of adversarial attacks and are very common in vision tasks." I take issue with this sentence. Are you saying that domain shifts of p(data) should be considered adversarial attacks? This is by definition of [1] not the case right?
It looks like the "push-pull" that you're implementing is a tuned surround suppression. I'm curious about the distinction (if any) you're drawing between the two? The important difference to me is that surround suppression is recurrent and purely modulatory, whereas the operation you're implementing does not appear to be. It looks like the cited Li et al., 2012 is examining tuning curve broadness rather than spatial extents of "push-pull" as it is presented here, although perhaps I'm misreading it. Regardless, I'd like more discussion of this issue, as there is a large literature on extra-classical receptive field mechanisms and effects in brains + their extensions to models that would need to be included as related work.
Unfortunately the proposed model does worse on clean data. This is a real problem with proposed solutions to robustness. More often than not, they exhibit worse performance on clean data and marginal improvements on noisy data. The problem with this is that the parsimonious explanation for the new algorithm is that it is just another regularization for object classification. What if you train with more kinds of augmentations? How does this compare to every off-the-shelf regularization that is out there? There is an enormous search space for constraining model expressivity, and there is no evidence here that the proposed method is doing something above-and-beyond this.
The authors seemingly exploit this huge regularization search space by combining their method with anti-aliasing. They find marginally worse performance on the clean datasets but marginally better "robustness". I'm struggling to figure out whether there's a path forward for robustness in deep nets, or if it's just a matter of combining arbitrary regularizations.
Do you think that the sparsity you're seeing in Fig 4 is reflecting the expressivity trade-off that you're observing in your results? Is there a way to get the best of both worlds? Should the observed sparsity and its impact on performance be interpreted as if models were trained with l1 regularization? How is the push-pull different than a weight decay or activity regularization?
[1] Szegedy, C. et al. (2013). Intriguing properties of neural networks.
Review 2

Questions:
1. What is the main contribution of the paper, and how does it relate to neurophysiological observations?
2. What are the strengths of the proposed approach, particularly in terms of robustness and weight sparsity?
3. What are the concerns regarding the evaluation of the models with inhibition compared to the baselines?
4. How would one design a fair baseline model that has the same receptive field sizes as the push-pull network while having the same number of parameters?
5. Are there any additional experimental results or arguments that could address the concern about the interpretation of the results?
6. Do the convolutions in the push-pull modules have biases, and how is this dealt with?
7. Why was a factor of 2 used for the difference in receptive field between the push and pull?
8. Is there biological evidence that the weights of the push and pull units should be tied, and would learning both units independently be better?
9. How do the number of learnable weights and the replacement of convolutions with push-pull modules compare between the baseline and PP models?
Summary
This paper takes inspiration from neurophysiological observations of the presence of push-pull cells in the brain. These cells act as non-linear band-pass filters where the excitatory response of one cell is combined with the inhibitory response of another with a larger receptive field. In the paper, a model of such push-pull layers is used to replace the convolutions in a number of near state-of-the-art models including ResNet and DenseNet variants. The authors' demonstrate that their networks using inhibition are more robust to various noises in the input than the baseline models. Further experiments demonstrate that the performance can be further enhanced by incorporating other recent advancements in network design and data augmentation. In addition it is observed that the proposed technique has a secondary effect of increasing weight sparsity in the network.
Positives
I like the idea behind this paper --- it's good to see an observation being taken from neuroscience and incorporated into a network model.
Overall (with a caveat about my main concern below regarding the interpretation of the results), the evaluations are well thought through, utilising a range of near state of the art models with appropriate datasets and metrics.
Aside from a slight lack of detail on the model (see questions below), which is easily corrected, the paper itself reads very well.
The proposed approach might actually be particularly useful not because of robustness, but because of the sparsity it induces.
Concerns
I have a major concern around the fairness of the evaluation of the models with inhibition against the baselines; it appears to me that each push-pull module has twice the receptive field size of the convolution it replaces. In a deep model the effective receptive field size (in terms of the size of the set of pixels in the input image that influences a single pixel in a feature map) will be greatly compounded by this change.
It isn't at all clear to me that larger effective receptive field size would not be the greatest contributor to increased robustness --- indeed, one would expect that taking into account more samples from the input would naturally lead to higher noise robustness. Note also that including the anti-aliasing filters would further increase the effective RF size, which would also explain the results that are presented.
In order to address this question one would probably have to make a baseline that had the same receptive field sizes as the push-pull network, whilst having the same number of parameters. Probably this would mean learning small kernels and performing upsampling in the same way the pull layer does. I could believe that the push-pull mechanism itself might introduce an inductive bias that gives the network an improvement over the 'fair' baseline, although I would want to see experimental evidence for this.
Rationale for score
I'm leaning towards rejecting this because whilst I like the idea presented, I strongly believe there is an alternative interpretation of where the model's gains in performance arise from. This issue clearly needs addressing with experimental results before it should be accepted.
Questions during rebuttal period
Obviously it would be nice if my concern regarding the fairness could be addressed --- perhaps the authors already have evidence at hand to show that increased RF size is not the contributing factor to the results? Or perhaps they have a convincing argument as to why RF size is not an issue? If not, I suspect it is probably unreasonable to ask that the authors do additional experiments during the rebuttal period as I would expect it would take a substantial amount of time.
I do have some additional (small!) questions that I couldn't find covered in the paper:
Do the convolutions in the push-pull modules have biases? If they do, how is this dealt with?
Why is a factor of 2 used for the difference in receptive field between the push and pull? (I get that there is evidence in biology for them being different, but why exactly 2?)
Is there biological evidence that the weights of the push and pull units should be tied? Increased weights notwithstanding, would it be better to learn both units independently?
If a baseline model has a convolution with K outputs, does this get replaced with a push-pull module with K outputs? Does this also mean that the number of learnable weights is equivalent between the baselines and PP models? |
ICLR | Title
Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations
Abstract
The adversarial patch is an important form of adversarial attack in the real world and brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the position on the image, or manipulating the position while fixing the content of the patch. In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting. We adjust the transferability by taking the position, the weights of the surrogate models in the ensemble attack, and the attack step size as parameters, and utilize a reinforcement learning framework to simultaneously solve for these parameters based on the reward information obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and the results on four representative FR models demonstrate that our method can significantly improve the attack success rate and the query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm the practical application value of our method.
1 INTRODUCTION
Deep neural networks (DNNs) have exposed their vulnerability to adversarial attacks, where adding small imperceptible perturbations to the image can confuse their predictions. However, such small perturbations are not suitable for real applications where images are captured through a camera. A more practical way is to use a local patch-like perturbation, where the magnitude of the pixel value changes is not restricted. By printing out the adversarial patch and pasting it on the object, the attack can be realized in the real scene, and it has brought security threats to many tasks such as person detection (Xu et al., 2020), traffic sign recognition (Eykholt et al., 2018; Liu et al., 2019), and image classification (Karmon et al., 2018).
Among these tasks, Face Recognition (FR) is more safety-critical, and adversarial patches have also been successfully applied in this area (Yang et al., 2020; Komkov & Petiushko, 2019; Sharif et al., 2019; Guo et al., 2021). For example, adv-hat (Komkov & Petiushko, 2019) and adv-patch (Pautov et al., 2019; Yang et al., 2020) place patches generated from gradient information on the forehead or nose to achieve attacks. Adv-glasses (Sharif et al., 2016; 2019) confuse the FR system by placing a printed perturbed eyeglass frame at the eye. The above methods mainly focus on optimizing the perturbation content, and the pasting position of the patch is fixed at a position selected based on experience or some prior knowledge. On the other hand, adv-sticker (Guo et al., 2021) adopts a predefined meaningful adversarial patch but uses an evolutionary algorithm to search for a good patch position to perform the attack. These methods suggest that optimizing the position and the perturbations at the same time could achieve better attack performance. However, due to the strong coupling relationship between the position and the perturbation values, it is difficult to solve them simultaneously with a simple gradient-based method or alternating iterative optimization, leaving a challenging problem unsolved.
In practical applications, the detailed information about FR threat model usually cannot be accessed. In such case, the existence of transferability of adversarial examples will play an important role. If the adversarial patch generated on the surrogate model has good transferability on the target model, it will
destroy the robustness of the model in an easy-to-implement way, and bring serious consequences. Therefore, an in-depth study on improving the transferability of the adversarial patch can greatly help test the robustness of the FR model in real-world applications.
Based on the above considerations, in this paper, we propose a method to simultaneously optimize the position and perturbations of the adversarial patch to improve the black-box attack performance with limited information. However, directly optimizing them would lead to a large number of queries, especially for the perturbations: because each pixel can range from 0 to 255, the perturbation search space is huge. To tackle this issue, we instead use improved I-FGSM with high transferability (Dong et al., 2018; 2019b; Wang et al., 2021) based on an ensemble of surrogate models (Liu et al., 2016) to generate the patch's perturbations, and then adjust the attack step size and the weight of each surrogate model to obtain the perturbations. Compared with directly optimizing the perturbation value of each pixel, changing the step size and weights greatly reduces the parameter space and improves the solving efficiency. Based on this setting, the patch's position, the surrogate models' weights and the attack step size are the final key parameters to learn.
Naturally, simply using some predefined parameters will result in poor transferability, and thus cannot make the adversarial patch achieve satisfactory adaptability to the target model. To further improve the transferability, we use a small number of queries on the target model to dynamically adjust the attack parameters, and this process can be formulated in the Reinforcement Learning (RL) framework. Specifically, the environment in RL is set as the target model, and a Unet-based (Ronneberger et al., 2015) network is used as the agent to simultaneously predict the parameters (actions) to generate better adversarial patches. By interacting with the target model, the agent obtains reward feedback to guide its learning by maximizing this reward (Li, 2017). The whole scheme is illustrated in Figure 1.
In summary, this paper has the following contributions:
• We argue that the position and perturbations of the adversarial patch are equally important and interact with each other closely. Therefore, a method is proposed to simultaneously optimize them to generate transferable adversarial patches, rather than relying on alternating iterative optimization.
• We dynamically adjust the parameters in the transfer-based attack through a small number of queries and formulate the process into a Reinforcement Learning framework, which can guide the agent to generate better parameters with high query efficiency.
• Extensive experiments in dodging and impersonation tasks confirm that our method can realize state-of-the-art attack performance and query efficiency (maximum success rate of 96.65% with only 11 queries on average), and experiments on the commercial API and in the physical environment demonstrate the practical application value of our method.
2 RELATED WORKS
2.1 ADVERSARIAL PATCH
Compared with Lp norm based adversarial perturbations, adversarial patch (Brown et al., 2017) is a more suitable attack form for real-world applications where objects need to be captured by a camera. Different from pixel-wise imperceptible perturbation, adversarial patch does not restrict the
perturbations’ magnitude. Up to now, adversarial patches have been applied to image classifiers (Jia et al., 2020; Karmon et al., 2018), person detectors (Xu et al., 2020), traffic sign detectors (Eykholt et al., 2018; Liu et al., 2019) and many other security-critical systems. For example, the adversarial T-shirt (Xu et al., 2020) evades the person detector by printing the perturbation patch, generated from gradient information in an optimization framework, on the center of the T-shirt. Recently, Rao et al. (2020) also proposed to jointly optimize the location and content of an adversarial patch. However, our method differs from it in two aspects: (1) They optimize the location and content in an alternating iterative manner, while our method solves them simultaneously. Because location and content interact strongly, optimizing them simultaneously is more reasonable and can thus achieve a better solution. (2) Their method is a white-box attack against the image classification task, while ours is a black-box attack against the face recognition task. Therefore, our task is more challenging.
2.2 ADVERSARIAL PATCH IN THE FACE RECOGNITION
Adversarial patches also bring risks to face recognition and detection tasks, and their attack forms can be roughly divided into two categories. On the one hand, some methods fix the patch at a specific position of the face selected based on experience or prior knowledge, and then generate the perturbations of the patch. For example, the adversarial hat (Komkov & Petiushko, 2019), adv-patch (Pautov et al., 2019) and adversarial glasses (Sharif et al., 2016; 2019) are classical methods against face recognition models, realized by placing perturbation stickers on the forehead or nose, or putting perturbed eyeglasses on the eyes. Yang et al. (2020) put universal adversarial patches at a fixed position of the face to prevent face detectors from detecting the real faces. These methods mainly focus on generating effective adversarial perturbation patterns, without much consideration of the impact of the patch's position on the attack performance.
On the other hand, some methods fix the content of the adversarial patch and search for the optimal pasting position within the valid pasting area of the face. Adv-sticker (Guo et al., 2021) uses pattern-fixed stickers that exist in real life, and changes their positions through the RHDE algorithm, based on the idea of differential evolution, to attack FR systems. Inspired by this, we believe that the position and perturbations of the adversarial patch are equally important for attacking the face recognition system, and if the two are optimized simultaneously, the attack performance can be further improved.
2.3 DEEP REINFORCEMENT LEARNING
Deep reinforcement learning (DRL) combines the perception ability of deep learning with the decision-making ability of reinforcement learning, so that the agent can make appropriate behaviors through the interaction with the environment (Li, 2017; Dong et al., 2019a). It receives the reward signal to evaluate the performance of an action taken through the agent without any supervisory information, and can be used to solve multiple tasks such as parameter optimization and computer vision (Li, 2017). In this paper, we use the information obtained by querying the target model to solve the attack parameters to generate transferable adversarial examples, which can be formalized as the process of using reward signals to guide the agent’s learning in the reinforcement learning. Therefore, an agent based on Unet (Ronneberger et al., 2015) is designed to learn parameter selection policies, and generate better attack parameters under the reward signals obtained by querying the target model.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
In the face recognition task, given a clean face image x, the goal of the adversarial attack is to make the face recognition model predict a wrong identity for the perturbed face image x_adv. Formally, the perturbed face with the adversarial patch can be formulated as Eq. (1), where ⊙ is the Hadamard product and x̃ is the adversarial patch with perturbations across the whole face image. A is a mask matrix that constrains the shape and pasting position of the patch, where the value of the pasting area is 1.

x_adv = (1 − A) ⊙ x + A ⊙ x̃    (1)
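As an illustration, a minimal sketch of this composition; the tensor layout and the function name `paste_patch` are assumptions for the example, not the authors' code:

```python
import torch

def paste_patch(x, patch, cx, cy):
    # x: (C, H, W) face image; patch: (C, s_h, s_w) patch content.
    # Builds the binary mask A from the upper-left corner (cx, cy) and
    # composes x_adv = (1 - A) * x + A * x_tilde elementwise, as in Eq. (1).
    s_h, s_w = patch.shape[1:]
    mask = torch.zeros_like(x)
    mask[:, cy:cy + s_h, cx:cx + s_w] = 1.0
    x_tilde = x.clone()
    x_tilde[:, cy:cy + s_h, cx:cx + s_w] = patch
    return (1 - mask) * x + mask * x_tilde
```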
The previous methods either optimize x̃ with a pre-fixed A, or fix x̃ to select the optimal A. In our method, we optimize A and x̃ simultaneously to further improve the attack performance. For the optimization of the mask matrix A, we fix the shape and the size (s_h, s_w) of the adversarial patch, and change its upper-left coordinates (c_x, c_y) to adjust the mask matrix; the corresponding mask is denoted A_c. In order not to interfere with the liveness detection module, we limit the pasting position to the area that does not cover facial features such as the eyes and mouth. For the detailed process, please refer to Appendix A. Thus, in our method, the adversarial patch can be expressed as:
x_adv = (1 − A_c) ⊙ x + A_c ⊙ x̃    (2)

To generate the perturbation content, the ensemble attack (Liu et al., 2016) based on MI-FGSM (Dong et al., 2018) is used here to generate the transferable patch. For the ensemble attack of n surrogate models, let ρ_i denote the weight of each surrogate model f_i and ε denote the attack step size. Taking the un-targeted attack (or dodging, in the face recognition case) as an example: given the ground-truth identity y, let f_i(x, y) denote the confidence score that the model predicts a face image x as identity y; then x̃ can be computed iteratively. Let t denote the t-th iteration:

x_adv = x̃_{t+1} = (1 − A_c) ⊙ x + A_c ⊙ (x̃_t + ε · sign(g_{t+1}))    (3)

g_{t+1} = µ · g_t + ∇_x L(x̃_t, y) / ‖∇_x L(x̃_t, y)‖₁    (4)

L(x̃_t, y) = Σ_{i=1}^{n} ρ_i · ℓ(x̃_t, y, f_i)    (5)

where ℓ(x̃_t, y, f_i) = 1 − f_i(x̃_t, y) and Σ_{i=1}^{n} ρ_i = 1. For the targeted attack (or impersonation, in the face recognition case), given the target identity ŷ, ℓ can simply be replaced by f_i(x̃_t, ŷ).
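A minimal sketch of this masked, weighted-ensemble MI-FGSM update (Eqs. (3)-(5)) could look as follows; the interface `f(x, y)` returning a differentiable confidence score is an assumption of the example:

```python
import torch

def mifgsm_patch(x, mask, models, weights, y, eps, mu=1.0, steps=150):
    # models: surrogate nets f_i(x, y) -> confidence score; weights: rho_i.
    x_adv = x.clone()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Eq. (5): weighted ensemble loss (dodging form, 1 - f_i).
        loss = sum(w * (1 - f(x_adv, y)) for f, w in zip(models, weights))
        grad = torch.autograd.grad(loss, x_adv)[0]
        g = mu * g + grad / grad.abs().sum()          # Eq. (4), momentum term
        # Eq. (3): apply the signed step only inside the patch mask.
        x_adv = (1 - mask) * x + mask * (x_adv.detach() + eps * g.sign())
    return x_adv
```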
Our attack goal is to simultaneously optimize both the patch position and the perturbation to generate transferable adversarial patches that attack the target model well. Therefore, the mask A_c, the attack step size ε in Eq. (3) and the weights ρ_i in Eq. (5) are set as the learned parameters. To make the parameters more suitable for the target model, we adjust them dynamically through a small number of queries on the target model. The details of solving these parameters are shown in Sec. 3.2.
3.2 ATTACKS BASED ON RL
3.2.1 FORMULATION OVERVIEW USING RL
The generation of the transferable adversarial patch on the surrogate model is guided by the information returned by querying the target model, and this process can be represented as the learning of the agent through the reward signal obtained by interacting with the environment in the reinforcement learning (Li, 2017). So an agent is constructed to learn the selection policy of the attack parameters.
In our method, the parameter values are defined as the actions generated by the agent under the guidance of the policy π, and a_t denotes the t-th action (i.e., the value of the t-th parameter). The image feature input to the agent is defined as the state s, and the environment is the threat model F(·). The policy function π_θ(a|s) with parameters θ is the rule the agent uses to decide which action to take, and can be formulated as a probability distribution over actions a given the state s.
The reward reflects the performance of the currently generated adversarial patch on the target model, and the training goal of the agent is to learn good policies to maximize the reward signal. In face recognition, the goal of the dodging attack is to generate images that are as far away as possible from the ground-truth identity, while impersonation attacks want to generate images that are as similar as possible to the target identity. Thus, the reward function R is formalized as:
R = { ℓ(x_adv, y, F) = 1 − F(x_adv, y)   if dodging
      ℓ(x_adv, ŷ, F) = F(x_adv, ŷ)       if impersonation    (6)
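In code, Eq. (6) reduces to a few lines (a sketch; `F(x, y)` returning a confidence score is the assumed interface):

```python
def reward(F, x_adv, y_true, y_target=None):
    # Dodging: reward low confidence on the true identity (Eq. (6), top).
    # Impersonation: reward high confidence on the target identity (bottom).
    if y_target is None:
        return 1.0 - F(x_adv, y_true)
    return F(x_adv, y_target)
```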
In iterative training, the agent first predicts a set of parameters according to policy π, and the adversarial patch based on the predicted parameters is then generated. Finally, the generated adversarial face image is input to the threat model to obtain the reward value. In this process, the policy gradient algorithm (Sutton et al., 1999) is used to guide the agent's policy update. After multiple training rounds, the agent will, with high probability, generate actions that perform well on the threat model.
3.2.2 DESIGN OF THE AGENT
The agent needs to learn the policies of the position, weights and attack step size. Considering the simultaneous solution of the position and other parameters, the design of the agent uses the U-net (Ronneberger et al., 2015) based structure, which can output the feature map of the same length and width as the image matrix fed into the network. Let the number of surrogate models be n, we design
the agent to output the feature map with n channels and the same length h and width w as the input image (i.e. the size is n× h× w). In each channel Mi(i = 1, . . . , n) of the feature map M , the relative value of each pixel point represents the importance of each position for the surrogate model fi, and the overall value of the channel reflects the importance of the corresponding surrogate model. We believe that the patch requires different attack strengths in different locations, so at the top layer of the agent network, a fully connected layer is used to map the feature map M to a vector V representing different attack step values. The structural details of the agent are shown in Figure 2.
Specifically, for the position action, the optional range of positions is discrete, so the position policy π_θ^1 is designed to follow a Categorical distribution (Murphy, 2012). Given the probability P_position of each selectable position, the position parameters (c_x, c_y) ∼ Cat(P_position), and P_position is computed as:

P_position = (1/n) Σ_{i=1}^{n} softmax(M_i)    (7)
For the weight actions, the weight ratio of the loss on each surrogate model f_i to the ensemble loss is a continuous value, and we set the weight policy π_θ^2 to follow a Gaussian distribution (Murphy, 2012). So the i-th weight parameter ρ_i ∼ N(µ_{f_i}, σ²) (i = 1, . . . , n), and µ_{f_i} is calculated as:

µ_{f_i} = softmax(M̄_1, M̄_2, . . . , M̄_n)_i    (8)

where M̄_i refers to the mean value of the i-th channel of the feature map, and σ is a hyperparameter. In the actual sampling, we use a clipping operation to make the weights sum to 1.
For the attack step action, we set 20 candidate values in the range 0.01 to 0.2 at intervals of 0.01, and adopt a Categorical distribution (Murphy, 2012) as the step size policy π_θ^3 due to the discreteness of the values. So the step size parameter ε ∼ Cat(p_step), and the probability p_step of each candidate value is:

p_step = softmax(FC(P_position))    (9)

By sampling from the corresponding distributions, we obtain (c_x, c_y), ρ_i (i = 1, . . . , n) and ε.
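Putting Eqs. (7)-(9) together, sampling one set of actions from the agent's feature map could look like the following sketch; the clip-and-renormalize step for the weights and the shape of the `fc` layer are assumptions:

```python
import torch
import torch.nn.functional as F

def sample_actions(feature_map, fc, sigma=0.01):
    # feature_map: (n, h, w), one channel per surrogate model;
    # fc: nn.Linear(h * w, 20) mapping position probabilities to step logits.
    n, h, w = feature_map.shape
    # Eq. (7): average the per-model softmax maps into position probabilities.
    p_pos = torch.stack([F.softmax(m.flatten(), dim=0) for m in feature_map]).mean(0)
    idx = torch.multinomial(p_pos, 1).item()
    cy, cx = divmod(idx, w)
    # Eq. (8): Gaussian weights centered on the softmax of per-channel means.
    mu = F.softmax(feature_map.mean(dim=(1, 2)), dim=0)
    rho = torch.normal(mu, sigma).clamp(min=0.0)
    rho = rho / rho.sum()            # clip and renormalize so weights sum to 1
    # Eq. (9): categorical step size over the 20 candidates 0.01 .. 0.2.
    p_step = F.softmax(fc(p_pos), dim=0)
    eps = 0.01 * (torch.multinomial(p_step, 1).item() + 1)
    return (cx, cy), rho, eps
```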
3.2.3 POLICY UPDATE
In the agent training, the goal is to make the agent h_θ learn a good policy π_θ with parameters θ that maximizes the expectation of the reward R. Assume that there are T attack parameters to be solved, and τ = (s, a_1, a_2, . . . , a_T) is a decision result; then the optimal policy parameters θ* can be formulated as:

θ* = argmax_θ J(θ) = argmax_θ E_{τ∼π_θ(a|s)}[R(τ)]    (10)

We use the policy gradient method (Sutton et al., 1999) to solve for θ* by gradient ascent, and follow the REINFORCE algorithm (Williams, 1992), using the average over N samples from the policy distribution to approximate the policy gradient ∇_θ J(θ):

∇_θ J(θ) = E_{τ∼π_θ(a|s)} [ Σ_{t=1}^{T} ∇_θ log π_θ(a_t|s) R(τ) ] ≈ (1/N) Σ_{n=1}^{N} Σ_{t=1}^{T} ∇_θ log π_θ(a_t|s) R_n    (11)
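In an autograd framework, the update of Eq. (11) reduces to a few lines (a sketch; `log_probs[i]` is assumed to hold the summed log-probabilities of the i-th sampled parameter set and `rewards[i]` the corresponding reward R_i):

```python
import torch

def reinforce_step(optimizer, log_probs, rewards):
    # Gradient ascent on J(theta) via descent on -J(theta), Eq. (11).
    loss = -(torch.stack(log_probs) * torch.as_tensor(rewards)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```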
Algorithm 1: Simultaneous optimization of the position and perturbations of the adversarial patch

Input: face image x, target ŷ, agent h_θ, threat model F(·), n surrogate models f_i (i = 1, . . . , n), MI-FGSM iterations Z, sample times N, patch size s_h, s_w, learning rate α, training epochs K.
Output: x_adv

1:  for k = 1 to K do
2:      feature map M ← h_θ;  policies π_θ^1, π_θ^2, π_θ^3 ← according to Eq. (7), Eq. (8), Eq. (9);
3:      for i = 1 to N do
4:          actions (c_x, c_y, ρ_1, ρ_2, . . . , ρ_n, ε) ← sampled from π_θ^1, π_θ^2, π_θ^3;  A ← (c_x, c_y) and s_h, s_w;
5:          for t = 1 to Z do
6:              g_t ← according to Eq. (5) and Eq. (4);  x̃_t ← according to Eq. (3);
7:          end for
8:          update R_i ← according to Eq. (6);
9:      end for
10:     ∇_θ J(θ) ← according to Eq. (11);  θ ← θ + α · ∇_θ J(θ);
11:     if F(x_adv) = ŷ then break;
12: end for
13: return x_adv
where R_n is the reward of the n-th sample. When updating the policy parameters θ, the reward R can be regarded as a step size: the larger the reward, the larger the step; if the reward is negative, the update moves in the opposite direction. In this way, the agent learns good policy functions as θ is updated in the direction of increasing reward.
For actions that follow the Categorical policy (i.e., the position parameter and the attack step size), given the probability vector p (i.e., P_position and p_step in Eq. (7) and Eq. (9)), let p(a) denote the probability of action a in p; then for π_θ^1 and π_θ^3, ∇_θ log π_θ(a|s) in Eq. (11) can be calculated as:

∇_θ log π_θ(a|s) = d log p(a) / dθ    (12)
For actions that follow the Gaussian policy (i.e., the weights of the surrogate models), the mean µ_f of the Gaussian distribution is computed from the output of the agent, so it can be expressed as h_θ(s) = µ_f. Therefore, for π_θ^2, ∇_θ log π_θ(a|s) in Eq. (11) can be calculated as:

∇_θ log π_θ(a|s) = ∇_θ [ −(a − h_θ(s))² / (2σ²) − log σ − log √(2π) ] = ((a − h_θ(s)) / σ²) · dh/dθ    (13)
3.3 OVERALL FRAMEWORK
The complete process of our method for simultaneously optimizing the position and perturbation is given in Algorithm 1. In each of the K training iterations of the agent, the policy functions are first calculated from the output of the agent, and N sets of parameters are then sampled from the probability distributions of the policy functions. For each set of parameters, the attack is conducted on the surrogate models and the generated transferable adversarial example is used to query the target model to obtain the reward, after which the policy is updated. If a successful attack is achieved during this process, the iteration stops early.
Furthermore, our simultaneous optimization method can also be combined with other existing black-box attack methods, such as gradient estimation, to further enhance the attack performance. The detailed combination process and experimental results are shown in Appendix B.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Target models: We choose five face recognition models as threat models, including four representative open-source face recognition models (i.e. FaceNet (Schroff et al., 2015), CosFace50 (Wang et al., 2018), ArcFace34 and ArcFace50 (Deng et al., 2019)) and one commercial face recognition API (https://intl.cloud.tencent.com/product/facerecognition). During the attack, the four open-source models are candidates for surrogate models, and each model and their ensemble version that excludes the target model will be used as surrogate models.
Face database: We randomly select 5,752 different people from Labeled Faces in the Wild (LFW, http://vis-www.cs.umass.edu/lfw/) and CelebFaces Attributes (CelebA, http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) to construct the face database. We use the above models to extract the face features, and then calculate the cosine similarity with all identities in the face database to perform 1-to-N identification.
Metrics: Two metrics, the attack success rate (ASR) and the number of queries (NQ), are used to evaluate the performance of attacks. ASR refers to the proportion of images that are successfully attacked among all test face images, where we ensure that the clean test images selected in the experiment can be correctly identified. NQ refers to the number of queries to the target model required by an adversarial patch to achieve a successful attack.
Implementation: The size of the adversarial patch is set as s_h = 25, s_w = 30, the number of samplings N in the policy gradient method is set to 5, and the variance σ of the Gaussian policy is equal to 0.01. Other parameters are set as Z = 150, α = 0.001, and K = 50.
4.2 EXPERIMENTAL RESULTS
4.2.1 PERFORMANCE OF SIMULTANEOUS OPTIMIZATION
We first evaluate our simultaneous optimization method qualitatively and quantitatively according to the setting in Section 4.1. Table 1 shows the quantitative results of dodging and impersonation attacks under the black-box setting against the five target models. For surrogate models, we explore the relationships between individual models and the model ensemble using single models and their ensemble version excluding target models, respectively. Figure 3 shows some visual examples of the position and pattern at different stages of the attack. More visual results are shown in Appendix C.
From the results in Table 1, we can see: (1) the proposed method achieves good attack success rates and query efficiency under both kinds of attacks. The dodging attack achieves the highest success rate of 96.65% under 11 queries, while the impersonation attack achieves the highest success rate of 72.83% under 27 queries. (2) The performance of using ensemble models to attack is better than that of using a single model as the surrogate model, which shows that joint optimization can adaptively adjust
the weights of different surrogate models to achieve the best performance. (3) For the relationship between the surrogate model and the target model, when the two are structurally similar, it is more likely to help the attack. For example, for ArcFace50, CosFace50 has a greater influence on it. This is because the backbones of ArcFace50 and CosFace50 are both ResNet50-IR (Deng et al., 2019), while the backbones of FaceNet and ArcFace34 are Inception-ResNet-v1 (Szegedy et al., 2017) and ResNet34-IR (Deng et al., 2019), respectively. Regarding the influence of the model training loss, FaceNet uses a Euclidean metric (Schroff et al., 2015) rather than a cosine-space metric, which makes it less helpful to ArcFace and CosFace (which use angular-space losses) when used as a surrogate model alone, with dodging ASRs of only 16.89% and 11.47%, respectively. Therefore, in ensemble attacks, we can dynamically adjust the importance of different models by optimizing their weights. (4) On the commercial API, performance drops slightly compared to the open-source models, but remains at an acceptable level. This is because commercial FR services introduce some defense measures such as image compression.

Figure 4: The weights of each surrogate model in the ensemble attack using simultaneous optimization.

Figure 5: ASR and the NQ (shown in brackets) when using Position (P), Position and weights (P+W), and Position, weights and step size (P+W+S), respectively.
4.2.2 COMPARISONS WITH SOTA METHODS
To prove the superiority, we compare our method with three methods: simply relying on the transferability, fixing the perturbation and only changing the position, and randomly initializing the position and only optimizing the perturbation. Specifically, for the method relying solely on the transferability, we use TI-MI-FGSM (Dong et al., 2019b) to generate adversarial examples on the surrogate model, and then directly transfer them to the target model to test their performance. For the position-only method, we adopt the latest RHDE (Guo et al., 2021) method, which fixes the content of the patch and searches for a good position on the face to attack. For the perturbation-only method, we first randomly initialize the position, and then use ZO-AdaMM (Chen et al., 2019) to obtain the adversarial perturbation. The above results on three target models are shown in Table 2.
From the results, we can see that: (1) Although it is relatively simple to rely solely on transferability, the average attack success rate is 12.09% which is not satisfactory. (2) The average success rates for RHDE and ZO-AdaMM are 38.97% and 46.85%, respectively, while our method is 65.61%, which proves that compared with the methods optimized only for position or perturbation, our joint optimization can combine these two attack modes more effectively to achieve better attack. (3) Our method achieves optimal query efficiency among several methods, requiring only a few dozen queries.
4.2.3 ABLATION STUDY
In order to verify the effectiveness of each attack parameter used in our method, we also report the results when each component is added separately. First, we fix weights as equal values and the step size as 0.1 to test the performance when only changing the position parameter. Then, we add the weight parameters to learn the importance of each surrogate model. Finally, we add the step size parameters to carry out the overall learning. The results in Figure 5 show that learning only
Table 3: Success rate of dodging (D) and impersonation (I) attack in the physical environment.
Table 4: Success rate in the physical world when changing distance, lighting and face postures including frontal (0◦), yaw angle rotation ±25◦, ±45◦, and pitch angle rotation ±15◦.
the position parameter can achieve an average success rate of 69.80%. The performance is greatly improved after adding the weight, and further improved after adding the step size. All parameters contribute to the enhancement of the overall attack effect and the weight parameter has a greater impact on the results than the step size. In the process of adding components, the number of queries (shown in brackets) is basically maintained at the same level.
We also examine the weights assigned to each model by the simultaneous optimization when all four models are taken as surrogate models, and Figure 4 shows the results on a face image when attacking the four threat models. It can be seen that the target's own model receives the highest weight, which indicates that our method can find surrogate models similar to the target model during learning, and the weights of the other three models are positively correlated with the ASRs shown in Table 1.
4.2.4 ATTACKS IN THE PHYSICAL WORLD
In this section, we show the results of our adversarial patch in the physical environment. We first perform successful simulated attacks on different subjects in the digital environment, and then conduct experiments in the physical world. The technical details of the implementation are given in Appendix D. We record a video while the face moves within 5° of the current posture, check the recognition result every 10 frames (at a frame rate of 30 fps, for one minute), and report the proportion of successfully attacked frames as the attack success rate. Table 3 shows the results for the frontal face. To test the performance under various physical conditions, we further change the face posture, the distance from the camera and the illumination. For face postures, we take the conditions of the frontal face, yaw angle rotation of ±25° (the mean value at 25° and −25°), ±45°, and pitch angle rotation of ±15°. These results are shown in Table 4. It can be seen from Table 3 that our method maintains high physical ASR (100.0% and 83.33%) in both dodging and impersonation attacks on FaceNet, and the results are also good on ArcFace34 and CosFace50. Although there is a slight decline on the commercial API, ratios of 33.16% and 25.23% of successful frames are enough to pose potential risks to commercial applications. In Table 4, when the pose changes within a small range (±y25°, ±p15°), the ASR remains high. Even when the deflection angle is slightly larger (±y45°), it still maintains an average of 31.13%. The effect of distance is small, and when the lighting is changed, the results remain at an acceptable level. When performing impersonation attacks under different conditions, the face is sometimes recognized as a false identity different from the true identity but not as the target identity, which slightly lowers the results. A set of visual results is shown in Figure 6.
5 CONCLUSION
In this paper, we proposed a method to achieve the simultaneous optimization of the position and content to create more transferable adversarial patches. The content was generated by ensemble model attack, and the position, model weights and attack step size are set as parameters. These parameters are dynamically adjusted through a small number of queries with the target model in a reinforcement learning framework to make the patch more suitable for the target model. Extensive experiments demonstrated that our method can effectively improve the attack success rate and the query efficiency, and experiments on commercial face recognition API and physical environments confirmed the practical and effective application value of our method. Besides, the proposed method can also be adapted to other applications, such as automatic driving, etc.
A OPTIONAL AREA OF THE PASTING POSITIONS
In this section, we give more details about changing the pasting position of the adversarial patch.
It is worth noting that, in order not to interfere with the liveness detection module and to maintain the concealment of the attack, the area that does not cover the facial features (e.g. cheek and forehead) is regarded as the optional pasting area. In practical applications, the liveness detection module is often used in combination with face recognition to confirm the real physiological characteristics of the subject and to exclude attempts to replace real faces with photos, masks, etc. (Ming et al., 2020). It is mainly based on the depth or texture characteristics of the facial skin or on the movements of the subject (such as blinking and opening the mouth), so the patch cannot be pasted in an area that covers the facial features (such as the eyes and mouth).
Specifically, we first use the dlib library to extract 81 feature points of the face and determine the effective pasting region. Figure 7 shows some examples of the effective pasting area of a face. After calculating the probability P_position of each position from the output of the agent (see Eq. (7) in the paper), we set the probabilities of the invalid positions to 0, and then sample the pasting position.
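In code, this restriction amounts to masking the Eq. (7) probabilities before sampling (a sketch; `valid_mask`, the binary map of allowed positions, is an assumed input):

```python
# p_pos: flattened position probabilities from Eq. (7);
# valid_mask: 1 where pasting is allowed, 0 over the facial features.
p_pos = p_pos * valid_mask.flatten()
p_pos = p_pos / p_pos.sum()   # renormalize before sampling the position
```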
B INTEGRATION WITH GRADIENT ESTIMATION
Our simultaneous optimization method can also be combined with the gradient estimation (e.g., Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017), natural evolution strategy (Ilyas et al., 2018), random gradient estimation (Tu et al., 2019)). It can provide a good initial position and pattern for gradient estimation to further enhance the attack performance. Here we take Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017) as an example to describe the calculation method and the experiment.
B.1 TECHNICAL DETAILS
In the Zeroth-Order (ZO) optimization, to expand the optimization range, x is often replaced with (1 + tanh φ)/2 (Chen et al., 2017). Since the symmetric difference quotient (Lax & Terrell, 2014) is used to add a small offset at φ in the gradient estimation process, the gradient of x at φ (i.e. ∇_φ x) cannot be too small. ∇_φ x is calculated as follows:

∇_φ x = (1/2) (1 − tanh²(φ))    (14)
Therefore, when combining, in addition to considering the attack target, it is also necessary to ensure that the gradient of the generated perturbation in the φ-space meets the above requirements. So the loss function L(·, ·) in Eq. (5) is modified as:
L(x̃_t, y) = Σ_{i=1}^{n} ρ_i · ℓ(x̃_t, y, f_i) + (β / (s_h · s_w)) Σ_{i=1}^{s_h} Σ_{j=1}^{s_w} ∇_{φ_{ij}} x    (15)

where s_h and s_w represent the height and width of the perturbation patch pasted on the face, and β is a scale factor. Given the paste coordinates (c_x, c_y), φ_{ij} = arctanh(2 · x̃_{t,(c_y+i, c_x+j)} − 1).
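A small sketch of Eq. (14) in the tanh-reparameterized space (the function name and NumPy usage are illustrative):

```python
import numpy as np

def grad_x_wrt_phi(x_patch):
    # With x = (1 + tanh(phi)) / 2, Eq. (14) gives dx/dphi = (1 - tanh^2(phi)) / 2.
    phi = np.arctanh(2.0 * x_patch - 1.0)   # invert the reparameterization
    return 0.5 * (1.0 - np.tanh(phi) ** 2)
```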
Through the simultaneous optimization, we obtain an adversarial patch with good position and perturbation. Next, we regard it as the initialization of the gradient estimation. Specifically, we fix the position and use the gradient estimation method to only refine the perturbation by querying the threat models. After several iterations, the transferability of the adversarial patch is further improved.
B.2 EXPERIMENTAL RESULTS
The results in Table 5 show the performance of the integration with gradient estimation. It can be seen that after the combination of gradient estimation, the results have been improved to some extent compared with Table 1, but the consumption of queries has increased accordingly. For example, the success rate of impersonation attacks on CosFace50 increases from 40.08% to 71.28%, but the number of queries increases from 77 to 1575. Therefore, in practical applications, it is necessary to weigh the importance of success rate and query efficiency to determine whether to combine the gradient estimation method.
In order to confirm that the improvement in ASR is due to the good initial value provided by our simultaneous optimization method for gradient estimation, rather than the superiority of the gradient estimation method itself, a comparison is made in Table 6. The results show that simultaneous optimization (SO) can achieve a good ASR with the help of only a few queries (lower than 40 queries) both in dodging and impersonation. When combined with Zeroth-Order (ZO) optimization, the ASR has been further improved (from 72.83% to 87.37% for impersonation), but introduces more query consumption. However, we can see that Random+ZO uses much more queries (1972 and 2874, respectively) but achieves much lower ASR (65.79% and 50.52%, respectively). This contrast proves that our SO method indeed provides a good initialization for the gradient estimation.
C MORE VISUAL RESULTS
In this section, we show more visual results of the simultaneous optimization (SO) attacks. Figure 8 shows four groups of visual results. For each group of three images, the first one represents the clean image, the second one represents the image after the attack, and the third one represents the image corresponding to the wrong identity in the face database.
The above results are obtained by ensuring that the patch is not pasted to the area that covers the facial features. We also conduct experiments when allowing patches to cover facial features in a small range (no more than 20 pixels), and the results are shown in Figure 9. Interestingly, we find that the generated pattern is similar to that of the facial features at the corresponding position, but its
shape has been changed. For example, in the last group of pictures in Figure 9, the adversarial patch is pasted on the left side eyebrow of the face, and the generated pattern is similar to the shape of the eyebrow, which changes the eyebrow’s shape, and may mislead the extraction of eyebrow features by the face recognition network. This inspires us to use some information of facial features (e.g. eyes, eyebrows, nose and mouth) to mislead the feature extraction process of the face recognition, so as to achieve better attack performance.
D IMPLEMENTATION DETAILS OF THE PHYSICAL ATTACK
To make the digital simulation results better adapt to the physical environment, we process the smoothness of the pattern. Specifically, during each iteration of the pattern generation, we first obtain a pattern half the size of the original patch by scaling down or average pooling, and then enlarge the image back to the original size by bilinear interpolation. The reduced pattern retains the key information of the pattern. After this zooming, the smoothness of the image is improved, avoiding the problem that a per-pixel generated pattern is sensitive to the exact position point when transferred to the physical environment. Even if the pasting position is off by a few pixels, the attack effect can still be preserved. We also try to use Total Variation (TV) loss (Sharif et al., 2019) to enhance the smoothness, but the actual effect is not as good as the scaling process. When printing, we use photo paper rather than ordinary paper as the patch material to reproduce the colors of the digital simulation as realistically as possible. |
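For illustration, a minimal sketch of this down-then-up smoothing step, assuming a (C, s_h, s_w) patch tensor; the function name is illustrative:

```python
import torch.nn.functional as F

def smooth_patch(patch):
    # Downscale the patch to half size (average pooling keeps the key
    # structure), then upscale back with bilinear interpolation; this
    # low-pass step makes the pattern tolerant to small placement offsets.
    small = F.avg_pool2d(patch.unsqueeze(0), kernel_size=2)
    return F.interpolate(small, size=patch.shape[-2:], mode='bilinear',
                         align_corners=False).squeeze(0)
```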
1. What are the main contributions and innovations of the paper regarding simultaneous patch optimization and transferability improvement?
2. How does the reviewer assess the technical contribution and engineering effort of the work?
3. What are the challenges in combining patch content and position optimization, and how did the authors address them?
4. How does the proposed approach compare to existing patch and location methods, both separately and combined?
5. What are the limitations of the current baseline comparisons, and how could they be improved?
6. How does the reviewer suggest improving the stealthiness of the patch, such as incorporating position-stealth or evaluating performance on different domains?
7. Is there a trade-off between joint optimization of patch content and position versus patch detection, and how might this be evaluated?
8. Are there any suggestions for improving the figures and visualizations in the paper? | Summary Of The Paper
Review | Summary Of The Paper
Focus is on 1. simultaneously optimizing the patch for content and position, and 2. improving the transferability of the patch, so that if it performs well on the reference model, it also performs well on the target (black-box) model. The two objectives are unrelated.
Review
Strengths:
Great writing quality.
Innovative use of RL.
Weaknesses:
Incremental work. Not sufficient for ICLR.
Low technical contribution.
Detailed Comments.
This work is incremental in nature. It combines finding the best patch (i.e. content) and finding the best location for the patch (i.e. position), using Reinforcement Learning. Could you mention what the challenges are in combining these two?
Although I think the framework has shown good engineering effort, the technical contribution of this paper is low. Could the authors mention the technical contributions of this work?
I would like to see an evaluation of this approach vs. an existing patch + existing location approach naively applied together. It would be interesting and worthwhile to see if the approach performs better than this baseline. I think the baselines currently considered are too weak and don't make for a fair comparison.
I would also like to see a comparison of your approach with the alternate iterative optimization method that you have mentioned in your paper.
The figure 3 samples are not very typical for a stealthy patch. I think that when patches are placed directly on the face, the stealthiness of the attack is lost. I am also curious about the point mentioned in Section 3.1 that if you are in-fact limiting the pasting position to the area that does not cover the facial features, then this shouldn’t actually happen.
As an offshoot of the previous point, I think that choosing the mask is very important. It seems like the approach is greedily assigning un-stealthy positions to the patch (e.g. on the face), where it is sure to have maximum impact, but also sure to be least stealthy. Could your approach accommodate position-stealth?
A related point is that an evaluation of your attack on another domain (i.e. other than face recognition) could be worthwhile. For example, traffic signs.
I am also curious to know: what is the trade-off between joint optimization of patch content and position vs. patch detection? Could you evaluate this? It would be interesting to learn whether the best content + position combination, although very effective, may be equally easy to detect.
Figures could be improved (vector images, bigger fonts). |
ICLR | Title
Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations
Abstract
Adversarial patch is one kind of important form to perform adversarial attacks in the real world and brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the position on the image or manipulating the position while fixing the content of the patch. In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting. We adjust the transferability by taking the position, weights of surrogate models in the ensemble attack and the attack step size as parameters, and utilize the reinforcement learning framework to simultaneously solve these parameters based on the reward information obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and the results on four representative FR models demonstrate that our method can significantly improve the attack success rate and the query efficiency. Besides, experiments on the commercial FR service and physical environments confirm the practical application value of our method.
1 INTRODUCTION
Deep neural networks (DNNs) have exposed their vulnerability to adversarial attacks, where adding small imperceptible perturbations to the image can confuse their predictions. However, this small perturbation is not suitable for real applications where images are captured through the camera. A more available way is to use a local patch-like perturbation, where the magnitude of the pixel value changes is not restricted. By printing out the adversarial patch and pasting it on the object, the attack in the real scene can be realized, and it has brought security threats to many tasks such as person detection (Xu et al., 2020), traffic sign recognition (Eykholt et al., 2018; Liu et al., 2019), and image classification (Karmon et al., 2018).
Among these tasks, Face Recognition (FR) is more safety-critical, and adversarial patches have also been successfully applied in this area (Yang et al., 2020; Komkov & Petiushko, 2019; Sharif et al., 2019; Guo et al., 2021). For example, adv-hat (Komkov & Petiushko, 2019) and adv-patch (Pautov et al., 2019; Yang et al., 2020) put the patch generated based on the gradient information on the forehead or nose to achieve attacks. Adv-glasses (Sharif et al., 2016; 2019) confuse the FR system by placing a printed perturbed eyeglass frame at the eye. The above methods mainly focus on optimizing the perturbation content, and the pasting position of the patch is fixed on a position selected based on experience or some prior knowledge. On the other hand, adv-sticker (Guo et al., 2021) adopts a predefined meaningful adversarial patch but uses an evolutionary algorithm to search for a good patch’s position to perform the attack. The above methods enlighten us that if we optimize the position and perturbations at the same time, better attack performance can be achieved. However, due to the strong coupling relationship between the position and perturbation values, it is difficult to use the simple gradient-based method or alternate iterative optimization to solve them simultaneously, leaving a challenging problem unsolved.
In practical applications, the detailed information about FR threat model usually cannot be accessed. In such case, the existence of transferability of adversarial examples will play an important role. If the adversarial patch generated on the surrogate model has good transferability on the target model, it will
destroy the robustness of the model in an easy-to-implement way, and bring serious consequences. Therefore, an in-depth study on improving the transferability of the adversarial patch can greatly help test the robustness of the FR model in real-world applications.
Based on the above considerations, in this paper, we propose a method to simultaneously optimize the position and perturbations of the adversarial patch to improve the black-box attack performance with limited information. However, directly optimizing them will lead to a large number of queries, especially for the perturbations. Because each pixel can range from 0 to 255, the perturbation searching space is so huge. To tackle this issue, we instead use improved I-FGSM with high transferability (Dong et al., 2018; 2019b; Wang et al., 2021) based on the ensemble of surrogate models (Liu et al., 2016) to generate patch’s perturbations, and then adjust the attack step size and the weight of each surrogate model to achieve the perturbations. Compared with directly optimizing the perturbation value for each pixel, changing the step size and weights can greatly reduce the parameter space and improve the solving efficiency. Based on this setting, the patch’s position, surrogate models’ weights and attack step size are the final key parameters needing to learn.
Naturally, simply using some predefined parameters will result in poor transferability, and thus cannot make the adversarial patch achieve satisfactory adaptability to the target model. To further improve the transferability, we use a small number of queries on the target model to dynamically adjust the attack parameters, and this process can be formulated into the Reinforcement Learning (RL) framework. Specifically, the environment in RL is set as the target model, and an Unet-based (Ronneberger et al., 2015) network is used as the agent to simultaneously predict the parameters (actions) to generate better adversarial patches. By interacting with the target model, the agent can obtain a reward signal feedback to guide its learning by maximizing this reward (Li, 2017). The whole scheme is illustrated in Figure 1.
In summary, this paper has the following contributions:
• We argue that the position and perturbations of adversarial patch are equally important and interact each other closely. Therefore, a method is proposed to simultaneously optimize them to generate transferable adversarial patches, rather than alternate iterative optimization.
• We dynamically adjust the parameters in the transfer-based attack through a small number of queries and formulate the process into a Reinforcement Learning framework, which can guide the agent to generate better parameters with high query efficiency.
• Extensive experiments in dodging and impersonation tasks confirm that our method can realize the state-of-the-art attack performance and query efficiency (maximum success rate of 96.65% with only 11 queries on average), and experiments in the commercial API and physical environment prove the good application value of our method.
2 RELATED WORKS
2.1 ADVERSARIAL PATCH
Compared with Lp norm based adversarial perturbations, adversarial patch (Brown et al., 2017) is a more suitable attack form for real-world applications where objects need to be captured by a camera. Different from pixel-wise imperceptible perturbation, adversarial patch does not restrict the
perturbations’ magnitude. Up to now, adversarial patch has been applied to image classifiers (Jia et al., 2020; Karmon et al., 2018), person detectors (Xu et al., 2020), traffic sign detectors (Eykholt et al., 2018; Liu et al., 2019) and many other security-critical systems. For example, the adversarial T-shirt (Xu et al., 2020) evades the person detector by printing the perturbation patch generated by the gradient information in the optimization framework on the center of the T-shirt. Recently, Rao et al. (2020) also propose to jointly optimize location and content of adversarial patch. However, our method is different from it for two aspects: (1) They optimize the location and content via an alternate iterative manner, while our method simultaneously solves them. Because location and content have strong interactions, simultaneously optimizing them is more reasonable and thus can achieve better solution. (2) They belong to white-box attack against image classification task, while our method is a black-box attack versus face recognition task. Therefore, our task is more challenging.
2.2 ADVERSARIAL PATCH IN THE FACE RECOGNITION
Adversarial patches also bring risks to face recognition and detection tasks, and their attack forms can be roughly divided into two categories. On the one hand, some methods fix the patch on a specific position of the face selected based on the experience or prior knowledge, and then generate the perturbations of the patch. For example, adversarial hat (Komkov & Petiushko, 2019), adv-patch (Pautov et al., 2019) and adversarial glasses (Sharif et al., 2016; 2019) are classical methods against face recognition models which are realized by placing perturbation stickers on the forehead or nose, or putting the perturbation eyeglasses on the eyes. Yang et al. (2020) put universal adversarial patches on the fixed position of the face to prevent the face detectors from detecting the real faces. The main concern of these methods is to mainly focus on generating available adversarial perturbation patterns but without much consideration of the impact of patch’s position versus the attack performance.
On the other hand, some methods fix the content of the adversarial patch and search for the optimal pasting position within the valid pasting area of the face. Adv-sticker (Guo et al., 2021) uses patternfixed stickers existing in the real life, and changes their positions through RHDE algorithm based on the idea of differential evolution to attack FR systems. Inspired by this, we believe that the position and perturbations of the adversarial patch are equally important to attack the face recognition system, and if the two are optimized simultaneously, the attack performance can be further improved.
2.3 DEEP REINFORCEMENT LEARNING
Deep reinforcement learning (DRL) combines the perception ability of deep learning with the decision-making ability of reinforcement learning, so that the agent can make appropriate behaviors through the interaction with the environment (Li, 2017; Dong et al., 2019a). It receives the reward signal to evaluate the performance of an action taken through the agent without any supervisory information, and can be used to solve multiple tasks such as parameter optimization and computer vision (Li, 2017). In this paper, we use the information obtained by querying the target model to solve the attack parameters to generate transferable adversarial examples, which can be formalized as the process of using reward signals to guide the agent’s learning in the reinforcement learning. Therefore, an agent based on Unet (Ronneberger et al., 2015) is designed to learn parameter selection policies, and generate better attack parameters under the reward signals obtained by querying the target model.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
In the face recognition task, given a clean face image $x$, the goal of the adversarial attack is to make the face recognition model predict a wrong identity for the perturbed face image $x^{adv}$. Formally, the perturbed face with the adversarial patch can be formulated as Eq. (1), where $\odot$ is the Hadamard product and $\tilde{x}$ is an adversarial perturbation tensor defined over the whole face image. $A$ is a mask matrix that constrains the shape and pasting position of the patch, with value 1 inside the pasting area.
$x^{adv} = (1 - A) \odot x + A \odot \tilde{x}$   (1)
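To make Eq. (1) concrete, the sketch below builds the binary mask from the patch geometry and composites the patch onto the face. This is a minimal illustration, assuming a 112 × 112 face tensor; the helper names make_mask and composite and the example coordinates are our own, not the paper's released code.

```python
import torch

def make_mask(h, w, cx, cy, sh, sw):
    """Binary mask A_c: ones inside the sh x sw patch whose
    upper-left corner is (cx, cy), zeros elsewhere."""
    mask = torch.zeros(1, h, w)
    mask[:, cy:cy + sh, cx:cx + sw] = 1.0
    return mask

def composite(x, x_tilde, mask):
    """Eq. (1)/(2): x_adv = (1 - A) * x + A * x_tilde (Hadamard product)."""
    return (1.0 - mask) * x + mask * x_tilde

# Example: a 3 x 112 x 112 face with a 25 x 30 patch pasted at (cx=40, cy=20).
x = torch.rand(3, 112, 112)
x_tilde = torch.rand(3, 112, 112)
A_c = make_mask(112, 112, cx=40, cy=20, sh=25, sw=30)
x_adv = composite(x, x_tilde, A_c)
```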
The previous methods either optimize $\tilde{x}$ with a pre-fixed $A$, or fix $\tilde{x}$ and select the optimal $A$. In our method, we optimize $A$ and $\tilde{x}$ simultaneously to further improve the attack performance. For the optimization of the mask matrix $A$, we fix the shape and the size $(s_h, s_w)$ of the adversarial patch, and change its upper-left coordinates $(c_x, c_y)$ to adjust the mask matrix; the corresponding mask is denoted $A_c$. In order not to interfere with the liveness detection module, we limit the pasting position to areas that do not cover facial features such as the eyes and mouth. For the detailed process, please refer to Appendix A. Thus, in our method, the adversarial patch can be expressed as:
$x^{adv} = (1 - A_c) \odot x + A_c \odot \tilde{x}$   (2)

To generate the perturbation content, the ensemble attack (Liu et al., 2016) based on MI-FGSM (Dong et al., 2018) is used to generate the transferable patch. For an ensemble attack over $n$ surrogate models, let $\rho_i$ denote the weight of each surrogate model $f_i$ and $\epsilon$ denote the attack step size. Taking the un-targeted attack (or dodging, in the face recognition case) as an example, given the ground-truth identity $y$, let $f_i(x, y)$ denote the confidence score with which the model predicts the face image $x$ as identity $y$; then $\tilde{x}$ can be computed iteratively. Let $t$ denote the $t$-th iteration, then:
$x^{adv} = \tilde{x}_{t+1} = (1 - A_c) \odot x + A_c \odot (\tilde{x}_t + \epsilon \cdot \mathrm{sign}(g_{t+1}))$   (3)

$g_{t+1} = \mu \cdot g_t + \frac{\nabla_x L(\tilde{x}_t, y)}{\|\nabla_x L(\tilde{x}_t, y)\|_1}$   (4)

$L(\tilde{x}_t, y) = \sum_{i=1}^{n} \rho_i \cdot \ell(\tilde{x}_t, y, f_i)$   (5)
where $\ell(\tilde{x}_t, y, f_i) = 1 - f_i(\tilde{x}_t, y)$ and $\sum_{i=1}^{n} \rho_i = 1$. For the targeted attack (or impersonation, in the face recognition case), given the target identity $\hat{y}$, $\ell$ can simply be replaced by $f_i(\tilde{x}_t, \hat{y})$.
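A minimal sketch of how Eqs. (3)–(5) interact for a dodging attack is given below, assuming each surrogate $f(x, y)$ returns a differentiable confidence score in $[0, 1]$. The initialization of $\tilde{x}$ from $x$ and the final clamp to the valid pixel range are our own assumptions.

```python
import torch

def ensemble_loss(x_t, y, surrogates, weights):
    """Eq. (5): L = sum_i rho_i * (1 - f_i(x_t, y)) for dodging."""
    return sum(w * (1.0 - f(x_t, y)) for f, w in zip(surrogates, weights))

def mifgsm_patch(x, mask, y, surrogates, weights, eps, mu=1.0, steps=150):
    """Eqs. (3)-(4): momentum I-FGSM restricted to the patch region."""
    x_tilde = x.clone()
    g = torch.zeros_like(x)
    for _ in range(steps):
        x_t = x_tilde.detach().requires_grad_(True)
        loss = ensemble_loss(x_t, y, surrogates, weights)
        grad = torch.autograd.grad(loss, x_t)[0]
        g = mu * g + grad / grad.abs().sum()       # Eq. (4), L1-normalized gradient
        x_tilde = x_t.detach() + eps * g.sign()    # gradient ascent step
        x_tilde = (1 - mask) * x + mask * x_tilde  # Eq. (3): perturb the patch only
        x_tilde = x_tilde.clamp(0, 1)              # valid pixel range (our addition)
    return x_tilde
```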
Our attack goal is to simultaneously optimize the patch position and the perturbation so as to generate transferable adversarial patches that attack the target model well. Therefore, the mask $A_c$, the attack step size $\epsilon$ in Eq. (3) and the weights $\rho_i$ in Eq. (5) are set as the learned parameters. To make these parameters better suited to the target model, we adjust them dynamically through a small number of queries on the target model. The details of solving for these parameters are given in Sec. 3.2.
3.2 ATTACKS BASED ON RL
3.2.1 FORMULATION OVERVIEW USING RL
The generation of the transferable adversarial patch on the surrogate models is guided by the information returned by querying the target model. This process can be cast as an agent learning from the reward signal obtained by interacting with the environment in reinforcement learning (Li, 2017). An agent is therefore constructed to learn the selection policy for the attack parameters.
In our method, the parameter values are defined as the actions generated by the agent under the guidance of the policy $\pi$, and $a_t$ denotes the $t$-th action (i.e., the value of the $t$-th parameter). The image feature input to the agent is defined as the state $s$, and the environment is the threat model $F(\cdot)$. The policy function $\pi_\theta(a \mid s)$ with parameters $\theta$ is the rule by which the agent decides what action to take, formulated as a probability distribution over actions $a$ given the state $s$.
The reward reflects the performance of the currently generated adversarial patch on the target model, and the training goal of the agent is to learn good policies to maximize the reward signal. In face recognition, the goal of the dodging attack is to generate images that are as far away as possible from the ground-truth identity, while impersonation attacks want to generate images that are as similar as possible to the target identity. Thus, the reward function R is formalized as:
$R = \begin{cases} \ell(x^{adv}, y, F) = 1 - F(x^{adv}, y) & \text{if dodging} \\ \ell(x^{adv}, \hat{y}, F) = F(x^{adv}, \hat{y}) & \text{if impersonation} \end{cases}$   (6)
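In code, the reward of Eq. (6) amounts to one query of the target model; the sketch below assumes $F(x, \text{identity})$ exposes a scalar confidence score, which is all the black-box setting requires.

```python
def reward(F, x_adv, y, target=None):
    """Eq. (6): reward from one query to the target model F.
    F(x, identity) is assumed to return a confidence score in [0, 1]."""
    if target is None:          # dodging: move away from the true identity y
        return 1.0 - F(x_adv, y)
    return F(x_adv, target)     # impersonation: move toward the target identity
```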
In iterative training, the agent first predicts a set of parameters according to the policy $\pi$, and the adversarial patch is then generated based on the predicted parameters. Finally, the generated adversarial face image is input to the threat model to obtain the reward value. In this process, the policy gradient algorithm (Sutton et al., 1999) is used to guide the agent's policy update. After multiple rounds of training, the agent generates actions that perform well on the threat model with high probability.
3.2.2 DESIGN OF THE AGENT
The agent needs to learn the policies for the position, the weights and the attack step size. To solve for the position and the other parameters simultaneously, the agent is designed with a U-net (Ronneberger et al., 2015) based structure, which outputs a feature map with the same length and width as the image fed into the network. Let the number of surrogate models be $n$; we design the agent to output a feature map with $n$ channels and the same length $h$ and width $w$ as the input image (i.e., of size $n \times h \times w$). In each channel $M_i$ ($i = 1, \ldots, n$) of the feature map $M$, the relative value of each pixel represents the importance of that position for the surrogate model $f_i$, and the overall value of the channel reflects the importance of the corresponding surrogate model. We believe the patch requires different attack strengths at different locations, so at the top layer of the agent network a fully connected layer maps the feature map $M$ to a vector $V$ representing different attack step values. The structural details of the agent are shown in Figure 2.
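A minimal sketch of the two-headed agent is given below. Any U-Net-style encoder–decoder mapping a $(B, 3, h, w)$ image to a $(B, n, h, w)$ feature map can serve as the backbone; the class layout and the flattened fully connected head are our own illustrative assumptions.

```python
import torch
import torch.nn as nn

class Agent(nn.Module):
    def __init__(self, backbone, h, w, n_steps=20):
        super().__init__()
        self.backbone = backbone             # U-Net-like: (B, 3, h, w) -> (B, n, h, w)
        self.fc = nn.Linear(h * w, n_steps)  # maps position scores to step-size logits

    def forward(self, x):
        M = self.backbone(x)                                     # one channel per f_i
        p_pos = torch.softmax(M.flatten(2), dim=-1).mean(dim=1)  # Eq. (7)
        mu = torch.softmax(M.mean(dim=(2, 3)), dim=-1)           # channel means, Eq. (8)
        p_step = torch.softmax(self.fc(p_pos), dim=-1)           # Eq. (9)
        return p_pos, mu, p_step
```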
Specifically, for the position action, the optional range of positions is discrete, so the position policy $\pi_\theta^1$ is designed to follow a Categorical distribution (Murphy, 2012). Given the probability $P_{position}$ of each selectable position, the position parameters $(c_x, c_y) \sim \mathrm{Cat}(P_{position})$, and $P_{position}$ is computed as:

$P_{position} = \frac{1}{n} \sum_{i=1}^{n} \mathrm{softmax}(M_i)$   (7)
For the weight actions, the weight ratio of the loss on each surrogate model $f_i$ to the ensemble loss is a continuous value, so we set the weight policy $\pi_\theta^2$ to follow a Gaussian distribution (Murphy, 2012). The $i$-th weight parameter $\rho_i \sim \mathcal{N}(\mu_{f_i}, \sigma^2)$ ($i = 1, \ldots, n$), and $\mu_{f_i}$ is calculated as:

$\mu_{f_i} = \mathrm{softmax}(\bar{M}_1, \bar{M}_2, \ldots, \bar{M}_n)_i$   (8)

where $\bar{M}_i$ refers to the mean value of the $i$-th channel in the feature map, and $\sigma$ is a hyperparameter. In actual sampling, we use a clipping operation to make the weights sum to 1.
For the attack step action, we set 20 values in the range of 0.01 to 0.2 at intervals of 0.01, and adopt a Categorical distribution (Murphy, 2012) as the step size policy $\pi_\theta^3$ due to the discreteness of the values. The step size parameter $\epsilon \sim \mathrm{Cat}(p_{step})$, and the probability $p_{step}$ of each candidate value is:

$p_{step} = \mathrm{softmax}(\mathrm{FC}(P_{position}))$   (9)
By sampling from the corresponding distributions, we obtain $(c_x, c_y)$, $\rho_i$ ($i = 1, \ldots, n$) and $\epsilon$.
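The three action groups can be sampled with torch.distributions, as sketched below; the policy outputs are assumed to be batch-squeezed 1-D tensors, and the renormalization of the weights is one possible realization of the clipping step mentioned above.

```python
import torch
from torch.distributions import Categorical, Normal

def sample_actions(p_pos, mu, p_step, w, sigma=0.01):
    """p_pos: (h*w,) position probs; mu: (n,) weight means; p_step: (20,) step probs."""
    step_grid = torch.arange(0.01, 0.201, 0.01)     # 20 candidate step sizes
    idx = Categorical(probs=p_pos).sample()         # flat position index
    cy, cx = divmod(int(idx), w)                    # upper-left patch corner
    rho = Normal(mu, sigma).sample().clamp(min=0)   # Gaussian weight policy
    rho = rho / rho.sum()                           # renormalize so weights sum to 1
    eps = step_grid[Categorical(probs=p_step).sample()]
    return (cx, cy), rho, float(eps)
```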
3.2.3 POLICY UPDATE
In agent training, the goal is to make the agent $h_\theta$ learn a good policy $\pi_\theta$ with parameters $\theta$ that maximizes the expected reward $R$. Assume there are $T$ attack parameters to be solved, and $\tau = (s, a_1, a_2, \ldots, a_T)$ is a decision result; then the optimal policy parameters $\theta^*$ can be formulated as:

$\theta^* = \arg\max_\theta J(\theta) = \arg\max_\theta \mathbb{E}_{\tau \sim \pi_\theta(a \mid s)}[R(\tau)]$   (10)
We use the policy gradient method (Sutton et al., 1999) to solve for $\theta^*$ by gradient ascent, and follow the REINFORCE algorithm (Williams, 1992), using the average over $N$ samples from the policy distribution to approximate the policy gradient $\nabla_\theta J(\theta)$:

$\nabla_\theta J(\theta) = \mathbb{E}_{\tau \sim \pi_\theta(a \mid s)}\left[\sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s)\, R(\tau)\right] \approx \frac{1}{N} \sum_{n=1}^{N} \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s)\, R_n$   (11)
Algorithm 1 Simultaneous optimization for position and perturbations of adversarial patch
Input: Face image $x$, target $\hat{y}$, agent $h_\theta$, threat model $F(\cdot)$, $n$ surrogate models $f_i$ ($i = 1, \ldots, n$), iterations $Z$ in MI-FGSM, sample times $N$, patch size $s_h, s_w$, learning rate $\alpha$, training epochs $K$.
Output: $x^{adv}$
1: for $k = 1$ to $K$ do
2:   feature map $M \leftarrow h_\theta$; policies $\pi_\theta^1, \pi_\theta^2, \pi_\theta^3 \leftarrow$ according to Eq. (7), Eq. (8), Eq. (9);
3:   for $i = 1$ to $N$ do
4:     actions $(c_x, c_y, \rho_1, \rho_2, \ldots, \rho_n, \epsilon) \leftarrow$ sampling from $\pi_\theta^1, \pi_\theta^2, \pi_\theta^3$; $A_c \leftarrow (c_x, c_y)$ and $s_h, s_w$;
5:     for $t = 1$ to $Z$ do
6:       $g_t \leftarrow$ according to Eq. (5) and Eq. (4); $\tilde{x}_t \leftarrow$ according to Eq. (3);
7:     end for
8:     Update $R_i \leftarrow$ according to Eq. (6);
9:   end for
10:  $\nabla_\theta J(\theta) \leftarrow$ according to Eq. (11); $\theta \leftarrow \theta + \alpha \cdot \nabla_\theta J(\theta)$;
11:  if $F(x^{adv}) = \hat{y}$ then break;
12: end for
13: return $x^{adv}$
where $R_n$ is the reward in the $n$-th sampling. When updating the policy parameters $\theta$, the reward $R$ can be regarded as the step size: the larger the reward, the larger the step, and if the reward is negative the update goes in the opposite direction. In this way, the agent learns good policy functions as $\theta$ is updated in the direction of increasing reward.
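Put together, one REINFORCE update looks like the sketch below; rollout is an assumed helper that samples one action set from the policies, runs the MI-FGSM attack on the surrogates, queries the target model once, and returns the reward together with the differentiable log-probability of the sampled actions.

```python
import torch

def reinforce_step(agent, optimizer, state, rollout, N=5):
    """One gradient-ascent step on J(theta), Eq. (11), averaged over N samples."""
    optimizer.zero_grad()
    loss = torch.zeros(())
    for _ in range(N):
        reward, log_prob = rollout(agent(state))  # one target-model query per sample
        loss = loss - reward * log_prob / N       # minimizing -J(theta)
    loss.backward()
    optimizer.step()
```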
For actions that follow a Categorical policy (i.e., the position parameter and the attack step size), given the probability vector $p$ (i.e., $P_{position}$ and $p_{step}$ in Eq. (7) and Eq. (9)), let $p(a)$ denote the probability of action $a$ in $p$; then for $\pi_\theta^1$ and $\pi_\theta^3$, $\nabla_\theta \log \pi_\theta(a \mid s)$ in Eq. (11) can be calculated as:

$\nabla_\theta \log \pi_\theta(a \mid s) = \frac{d \log p(a)}{d\theta}$   (12)
For actions that follow the Gaussian policy (i.e., the weights of the surrogate models), the mean $\mu_f$ of the Gaussian distribution is computed from the output of the agent, so it can be expressed as $h_\theta(s) = \mu_f$. Therefore, for $\pi_\theta^2$, $\nabla_\theta \log \pi_\theta(a \mid s)$ in Eq. (11) can be calculated as:

$\nabla_\theta \log \pi_\theta(a \mid s) = \nabla_\theta \left[ -\frac{(a - h_\theta(s))^2}{2\sigma^2} - \log(\sigma) - \log(\sqrt{2\pi}) \right] = \frac{a - h_\theta(s)}{\sigma^2} \cdot \frac{dh}{d\theta}$   (13)
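In practice, Eqs. (12) and (13) are exactly what torch.distributions computes through log_prob, so both gradient forms fall out of autograd; the toy parameters below stand in for the agent output and are purely illustrative.

```python
import torch
from torch.distributions import Categorical, Normal

theta = torch.randn(4, requires_grad=True)  # stand-in for the agent parameters
p_pos = torch.softmax(theta, dim=0)         # categorical probabilities
mu = torch.sigmoid(theta)                   # Gaussian means h_theta(s)

pos_action = torch.tensor(2)                              # a sampled discrete action
rho_action = mu.detach() + 0.01 * torch.randn(4)          # a sampled continuous action

logp_pos = Categorical(probs=p_pos).log_prob(pos_action)  # Eq. (12)
logp_rho = Normal(mu, 0.01).log_prob(rho_action).sum()    # Eq. (13)
(logp_pos + logp_rho).backward()  # theta.grad now holds both gradient terms
```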
3.3 OVERALL FRAMEWORK
The complete process of our method for simultaneously optimizing the position and perturbation is given in Algorithm 1. In the $K$ iterations of agent training, the policy functions are first computed from the output of the agent, and $N$ samplings are then performed according to the probability distributions of the policy functions to generate $N$ sets of parameters. With each set of parameters, the attack is conducted on the surrogate models, the generated transferable adversarial example is used to query the target model to obtain the reward, and the policy is then updated. During this process, if a successful attack has already been achieved, the iteration stops early.
Furthermore, our simultaneous optimization method can also be combined with other existing black-box attack techniques, such as gradient estimation, to further enhance the attack performance. The detailed combination process and experimental results are given in Appendix B.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Target models: We choose five face recognition models as threat models, including four representative open-source face recognition models (i.e. FaceNet (Schroff et al., 2015), CosFace50 (Wang et al., 2018), ArcFace34 and ArcFace50 (Deng et al., 2019)) and one commercial face recognition API 1. During the attack, the four open-source models are candidates for surrogate models, and each model and their ensemble version that excludes the target model will be used as surrogate models.
1https://intl.cloud.tencent.com/product/facerecognition
Face database: We randomly select 5,752 different people from Labeled Faces in the Wild (LFW)2 and CelebFaces Attribute (CelebA)3 to construct the face database. We use the above models to extract the face features, and then calculate the cosine similarity with all identities in the face database to perform the 1-to-N identification.
Metrics: Two metrics, attack success rate (ASR) and the number of queries (NQ) are used to evaluate the performance of attacks. ASR refers to the proportion of images that are successfully attacked in all test face images, where we ensure that the clean test images selected in the experiment can be correctly identified. NQ refers to the number of queries to the target model required by the adversarial patch that can achieve a successful attack.
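Concretely, the two metrics can be computed from per-image attack records as below; the (success, queries) tuple format is our own bookkeeping assumption.

```python
def attack_metrics(results):
    """results: list of (success: bool, n_queries: int), one entry per test face
    whose clean version is already correctly identified."""
    wins = [q for ok, q in results if ok]
    asr = 100.0 * len(wins) / len(results)  # attack success rate (%)
    nq = sum(wins) / max(len(wins), 1)      # mean queries per successful attack
    return asr, nq
```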
Implementation: The size of the adversarial patch is set as sh = 25, sw = 30, the number of sampling N in the policy gradient method is set to 5, and the variance σ of the Gaussian policy is equal to 0.01. Other parameters are set as Z = 150, α = 0.001, and K = 50.
4.2 EXPERIMENTAL RESULTS
4.2.1 PERFORMANCE OF SIMULTANEOUS OPTIMIZATION
We first evaluate our simultaneous optimization method qualitatively and quantitatively according to the settings in Section 4.1. Table 1 shows the quantitative results of dodging and impersonation attacks in the black-box setting against the five target models. For the surrogate models, we examine both individual models and the model ensemble, using each single model and the ensemble version that excludes the target model, respectively. Figure 3 shows some visual examples of the position and pattern at different stages of the attack; more visual results are shown in Appendix C.
From the results in Table 1, we can see that: (1) the proposed method achieves good attack success rates and query efficiency for both kinds of attacks: the dodging attack achieves the highest success rate of 96.65% within 11 queries, while the impersonation attack achieves the highest success rate of 72.83% within 27 queries. (2) Attacking with ensemble models performs better than using a single surrogate model, which shows that joint optimization can adaptively adjust
2http://vis-www.cs.umass.edu/lfw/ 3http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Figure 4: The weights of each surrogate model in the ensemble attack using simultaneous optimization.
Figure 5: ASR and the NQ (shown in brackets) when using Position (P), Position and weights (P+W), and Position, weights and step size (P+W+S) respectively.
the weights of different surrogate models to achieve the best performance. (3) Regarding the relationship between the surrogate and target models, when the two are structurally similar, the surrogate is more helpful to the attack. For example, CosFace50 has a large influence on ArcFace50, because the backbones of ArcFace50 and CosFace50 are both ResNet50-IR (Deng et al., 2019), while the backbones of FaceNet and ArcFace34 are Inception-ResNet-v1 (Szegedy et al., 2017) and ResNet34-IR (Deng et al., 2019), respectively. Regarding the influence of the training loss, FaceNet uses a Euclidean metric (Schroff et al., 2015) rather than a cosine-space metric, so when used alone as a surrogate it is less helpful against ArcFace and CosFace, which use angular-space losses, with dodging ASR of only 16.89% and 11.47%, respectively. In ensemble attacks we can therefore dynamically adjust the importance of different models by optimizing their weights. (4) On the commercial API, performance drops slightly compared to the open-source models but remains at an acceptable level, because commercial FR services introduce defense measures such as image compression.
4.2.2 COMPARISONS WITH SOTA METHODS
To demonstrate its superiority, we compare our method with three baselines: relying solely on transferability, fixing the perturbation and only changing the position, and randomly initializing the position and only optimizing the perturbation. Specifically, for the method relying solely on transferability, we use TI-MI-FGSM (Dong et al., 2019b) to generate adversarial examples on the surrogate model and directly transfer them to the target model. For the position-only method, we adopt the recent RHDE method (Guo et al., 2021), which fixes the content of the patch and searches for a good position on the face. For the perturbation-only method, we first randomly initialize the position and then use ZO-AdaMM (Chen et al., 2019) to obtain the adversarial perturbation. The results on three target models are shown in Table 2.
From the results, we can see that: (1) although relying solely on transferability is relatively simple, the average attack success rate is only 12.09%, which is not satisfactory. (2) The average success rates of RHDE and ZO-AdaMM are 38.97% and 46.85%, respectively, while our method reaches 65.61%, which shows that, compared with methods that optimize only the position or only the perturbation, our joint optimization combines the two attack modes more effectively. (3) Our method achieves the best query efficiency among these methods, requiring only a few dozen queries.
4.2.3 ABLATION STUDY
In order to verify the effectiveness of each attack parameter used in our method, we report results as each component is added in turn. First, we fix the weights to equal values and the step size to 0.1 to test the performance when only the position parameter is changed. Then, we add the weight parameters to learn the importance of each surrogate model. Finally, we add the step size parameter to carry out the overall learning. The results in Figure 5 show that learning only
Table 3: Success rate of dodging (D) and impersonation (I) attack in the physical environment.
Table 4: Success rate in the physical world when changing distance, lighting and face postures including frontal (0◦), yaw angle rotation ±25◦, ±45◦, and pitch angle rotation ±15◦.
the position parameter already achieves an average success rate of 69.80%. The performance is greatly improved after adding the weights, and further improved after adding the step size. All parameters contribute to the overall attack effect, and the weight parameter has a greater impact on the results than the step size. As components are added, the number of queries (shown in brackets) remains at roughly the same level.
We also collect statistics on the per-model weights under simultaneous optimization, taking all four models as surrogate models; Figure 4 shows the results on a face image when attacking each of the four threat models. The target model itself receives the highest weight, which indicates that our method can identify surrogate models similar to the target model during learning, and the weights of the other three models are positively correlated with the ASR shown in Table 1.
4.2.4 ATTACKS IN THE PHYSICAL WORLD
In this section, we show the results of our adversarial patch in the physical environment. We first perform simulated successful attacks on different subjects in the digital environment, and then conduct experiments in the physical world; the technical details of the implementation are given in Appendix D. We record a one-minute video while the face moves within 5° of the current posture, check the result every 10 frames at a frame rate of 30 fps, and report the proportion of successfully attacked frames as the attack success rate. Table 3 shows the results for the frontal face. To test the performance under various physical conditions, we further vary the face posture, the distance from the camera and the illumination. For face postures, we consider the frontal face, yaw rotations of ±25° (the mean over 25° and −25°) and ±45°, and pitch rotations of ±15°; these results are shown in Table 4. Table 3 shows that our method maintains high physical ASR (100.0% and 83.33%) for both dodging and impersonation attacks on FaceNet, and the results are also good on ArcFace34 and CosFace50. Although there is a slight decline on the commercial API, success on 33.16% and 25.23% of frames is enough to pose potential risks to commercial applications. In Table 4, when the pose changes within a small range (yaw ±25°, pitch ±15°), the ASR remains high; even at the larger yaw of ±45°, it still averages 31.13%. The effect of distance is small, and under changed lighting the results remain at an acceptable level. In impersonation attacks under different conditions, the face is sometimes recognized as a false identity that differs from the true identity but is not the target identity, which slightly lowers the result. A set of visual results is shown in Figure 6.
5 CONCLUSION
In this paper, we proposed a method that simultaneously optimizes the position and content of an adversarial patch to make it more transferable. The content is generated by an ensemble-model attack, and the position, model weights and attack step size are set as learnable parameters. These parameters are dynamically adjusted through a small number of queries to the target model within a reinforcement learning framework, making the patch better suited to the target model. Extensive experiments demonstrate that our method effectively improves the attack success rate and query efficiency, and experiments on a commercial face recognition API and in physical environments confirm its practical value. The proposed method can also be adapted to other applications, such as autonomous driving.
A OPTIONAL AREA OF THE PASTING POSITIONS
In this section, we give more details about changing the pasting position of the adversarial patch.
It is worth noting that, in order not to interfere with the liveness detection module and to keep the attack inconspicuous, the area that does not cover the facial features (e.g., cheeks and forehead) is regarded as the optional pasting area. In practical applications, a liveness detection module is often combined with face recognition to confirm the real physiological characteristics of the subject and to rule out replacing a real face with photos, masks, etc. (Ming et al., 2020). It mainly relies on the depth or texture characteristics of the facial skin, or on the subject's movements (such as blinking and opening the mouth), so the patch cannot be pasted in areas that cover the facial features (such as the eyes and mouth).
Specifically, we first use the dlib library to extract 81 feature points of the face and determine the valid pasting region. Figure 7 shows some examples of valid pasting areas and the corresponding faces. After computing the probability $P_{position}$ of each position from the output of the agent (see Eq. (7)), we set the probabilities of the invalid positions to 0 and then sample the pasting position.
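A sketch of this masking step is given below, assuming the community 81-point dlib predictor usually distributed as shape_predictor_81_face_landmarks.dat; taking a single convex hull over landmark indices 17–67 (eyebrows, nose, eyes, mouth in the 68-point convention) is our simplification of the keep-out region.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_81_face_landmarks.dat")

def valid_position_mask(img_bgr):
    """Float mask: 1 where a patch corner may be sampled, 0 over facial features."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]
    pts = predictor(gray, face)
    feats = np.array([(pts.part(i).x, pts.part(i).y) for i in range(17, 68)],
                     dtype=np.int32)
    mask = np.ones(gray.shape, dtype=np.float32)
    cv2.fillConvexPoly(mask, cv2.convexHull(feats), 0.0)
    return mask  # multiply into P_position, renormalize, then sample
```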
B INTEGRATION WITH GRADIENT ESTIMATION
Our simultaneous optimization method can also be combined with the gradient estimation (e.g., Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017), natural evolution strategy (Ilyas et al., 2018), random gradient estimation (Tu et al., 2019)). It can provide a good initial position and pattern for gradient estimation to further enhance the attack performance. Here we take Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017) as an example to describe the calculation method and the experiment.
B.1 TECHNICAL DETAILS
In Zeroth-Order (ZO) optimization, to expand the optimization range, $x$ is often replaced with $\frac{1 + \tanh\phi}{2}$ (Chen et al., 2017). Since the symmetric difference quotient (Lax & Terrell, 2014) used in gradient estimation adds a small offset at $\phi$, the gradient of $x$ at $\phi$ (i.e., $\nabla_\phi x$) cannot be too small. $\nabla_\phi x$ is calculated as follows:

$\nabla_\phi x = \frac{1}{2}\left(1 - \tanh^2(\phi)\right)$   (14)
Therefore, when combining the two methods, in addition to the attack objective, it is also necessary to ensure that the gradient of the generated perturbation in the $\phi$-space meets the above requirement. The loss function $L(\cdot, \cdot)$ in Eq. (5) is therefore modified as:
$L(\tilde{x}_t, y) = \sum_{i=1}^{n} \rho_i \cdot \ell(\tilde{x}_t, y, f_i) + \frac{\beta}{s_h \cdot s_w} \sum_{i=1}^{s_h} \sum_{j=1}^{s_w} \nabla_{\phi_{ij}} x$   (15)

where $s_h$ and $s_w$ are the height and width of the perturbation patch pasted onto the face, and $\beta$ is a scale factor. Given the paste coordinates $(c_x, c_y)$, $\phi_{ij} = \mathrm{arctanh}\left(2 \cdot \tilde{x}_{t,(c_y+i,\,c_x+j)} - 1\right)$.
Through the simultaneous optimization, we obtain an adversarial patch with good position and perturbation. Next, we regard it as the initialization of the gradient estimation. Specifically, we fix the position and use the gradient estimation method to only refine the perturbation by querying the threat models. After several iterations, the transferability of the adversarial patch is further improved.
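The two pieces of this combination can be sketched as follows; the clamping before atanh (to avoid infinities at the pixel-range boundary) and the averaging over channels as well as patch entries are our own assumptions.

```python
import torch

def grad_phi_x(phi):
    """Eq. (14): dx/dphi for x = (1 + tanh(phi)) / 2."""
    return 0.5 * (1.0 - torch.tanh(phi) ** 2)

def regularized_loss(base_loss, x_tilde, cx, cy, sh, sw, beta=0.01):
    """Eq. (15): keep the patch pixels away from tanh saturation so that the
    symmetric difference quotients at phi remain informative."""
    patch = x_tilde[..., cy:cy + sh, cx:cx + sw].clamp(1e-6, 1 - 1e-6)
    phi = torch.atanh(2.0 * patch - 1.0)  # phi_ij from the paste coordinates
    return base_loss + beta * grad_phi_x(phi).mean()
```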
B.2 EXPERIMENTAL RESULTS
The results in Table 5 show the performance of the integration with gradient estimation. It can be seen that after the combination of gradient estimation, the results have been improved to some extent compared with Table 1, but the consumption of queries has increased accordingly. For example, the success rate of impersonation attacks on CosFace50 increases from 40.08% to 71.28%, but the number of queries increases from 77 to 1575. Therefore, in practical applications, it is necessary to weigh the importance of success rate and query efficiency to determine whether to combine the gradient estimation method.
In order to confirm that the improvement in ASR is due to the good initialization that our simultaneous optimization provides for gradient estimation, rather than to the superiority of the gradient estimation method itself, a comparison is made in Table 6. The results show that simultaneous optimization (SO) achieves a good ASR with only a few queries (fewer than 40) for both dodging and impersonation. When combined with Zeroth-Order (ZO) optimization, the ASR is further improved (from 72.83% to 87.37% for impersonation) but at the cost of more queries. By contrast, Random+ZO uses many more queries (1972 and 2874, respectively) yet achieves much lower ASR (65.79% and 50.52%, respectively). This contrast proves that our SO method indeed provides a good initialization for the gradient estimation.
C MORE VISUAL RESULTS
In this section, we show more visual results of the simultaneous optimization (SO) attacks. Figure 8 shows four groups of visual results. For each group of three images, the first one represents the clean image, the second one represents the image after the attack, and the third one represents the image corresponding to the wrong identity in the face database.
The above results are obtained while ensuring that the patch is not pasted onto areas that cover the facial features. We also conduct experiments that allow patches to cover facial features within a small range (no more than 20 pixels), and the results are shown in Figure 9. Interestingly, we find that the generated pattern resembles the facial feature at the corresponding position, but with its shape changed. For example, in the last group of pictures in Figure 9, the adversarial patch is pasted over the left eyebrow, and the generated pattern resembles the eyebrow while altering its shape, which may mislead the extraction of eyebrow features by the face recognition network. This inspires us to exploit information about facial features (e.g., eyes, eyebrows, nose and mouth) to mislead the feature extraction process of face recognition, so as to achieve better attack performance.
D IMPLEMENTATION DETAILS OF THE PHYSICAL ATTACK
To make the digital simulation results adapt better to the physical environment, we process the smoothness of the pattern. Specifically, during each iteration of pattern generation, we first obtain a pattern half the size of the original patch by down-scaling or average pooling, and then enlarge the image back to the original size by bilinear interpolation. The reduced pattern retains the key information of the pattern, and after this zooming the smoothness of the image is improved, avoiding the problem that a per-pixel pattern is sensitive to the exact position when transferred to the physical environment: even if the pasting position is off by a few pixels, the attack effect is still preserved. We also tried Total Variation (TV) loss (Sharif et al., 2019) to enhance smoothness, but in practice it worked less well than the scaling process. When printing, we use photo paper rather than ordinary paper as the patch material, to reproduce the colors of the digital simulation as faithfully as possible.
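A sketch of this down-then-up smoothing pass is given below, using average pooling and bilinear interpolation as described above; the (B, C, s_h, s_w) tensor layout is our assumption.

```python
import torch.nn.functional as F

def smooth_patch(patch):
    """patch: (B, C, sh, sw). Halve the resolution by average pooling, then
    restore it by bilinear interpolation, suppressing per-pixel detail that
    would not survive printing and small placement offsets."""
    small = F.avg_pool2d(patch, kernel_size=2)
    return F.interpolate(small, size=patch.shape[-2:], mode="bilinear",
                         align_corners=False)
```

Applied at every iteration of pattern generation, this keeps the optimized patch printable while retaining its key structure.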
Summary Of The Paper
This paper proposes a method to optimize the position and content of the perturbation at the same time to create more transferable adversarial patches. By utilizing a deep reinforcement learning framework, it predicts the position, model weights, and attack step size simultaneously, which significantly improves attack performance, especially in targeted attacks, and still adapts well to the physical environment.
Review
Pros:
The simultaneous optimization of position and content is a novel idea, because traditional methods usually use fixed positions chosen at random or by experience. It takes advantage of the coupling between position and content, rather than treating them as independent factors as alternate optimization does. Interestingly, the authors also provide some insights into this coupling through visualizations in Appendix C.
For solving these two different types of parameters, the use of deep reinforcement learning is ingenious. The authors map the query-based attack process to the reward feedback mechanism in RL. The Unet-based design of the agent is a good way to connect position and attack intensity, and the policies for the various continuous and discrete parameters, together with the derivation of the solution process, are fairly rigorous.
Experiments show that the method achieves a good attack effect with only a few queries. For practical scenarios like face recognition, the paper provides extensive performance verification in the physical environment, which is also worthy of praise.
For the model ensemble attack, the statistics on how each surrogate model's weight changes during solving also provide some inspiration for enhancing attack transferability.
Meanwhile, I still have some concerns as follows:
The face recognition pipeline includes liveness detection, face detection, face recognition and other stages. In the attack setup, the authors ensure that the face before the attack can be correctly recognized. But after pasting the adversarial patch, does a successful attack include the patch interfering with the detection stage, or only interference with the final identification stage?
In the experimental section, the choice of surrogate models is a bit confusing. If my understanding is correct, the experiment in Table 1 uses the remaining models, excluding the target model, as surrogate models, while the choice in Figure 4 includes the target model. Although the overall trend of the two results is consistent, it is somewhat unintuitive.
On page 5, in "We use the policy gradient Sutton et al. (1999) method", \citep{} should be used instead of \cite{}.
ICLR | Title
Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations
Abstract
Adversarial patch is one kind of important form to perform adversarial attacks in the real world and brings serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the position on the image or manipulating the position while fixing the content of the patch. In this paper, we propose a method to simultaneously optimize the position and perturbation to generate transferable adversarial patches, and thus obtain high attack success rates in the black-box setting. We adjust the transferability by taking the position, weights of surrogate models in the ensemble attack and the attack step size as parameters, and utilize the reinforcement learning framework to simultaneously solve these parameters based on the reward information obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and the results on four representative FR models demonstrate that our method can significantly improve the attack success rate and the query efficiency. Besides, experiments on the commercial FR service and physical environments confirm the practical application value of our method.
1 INTRODUCTION
Deep neural networks (DNNs) have exposed their vulnerability to adversarial attacks, where adding small imperceptible perturbations to the image can confuse their predictions. However, this small perturbation is not suitable for real applications where images are captured through the camera. A more available way is to use a local patch-like perturbation, where the magnitude of the pixel value changes is not restricted. By printing out the adversarial patch and pasting it on the object, the attack in the real scene can be realized, and it has brought security threats to many tasks such as person detection (Xu et al., 2020), traffic sign recognition (Eykholt et al., 2018; Liu et al., 2019), and image classification (Karmon et al., 2018).
Among these tasks, Face Recognition (FR) is more safety-critical, and adversarial patches have also been successfully applied in this area (Yang et al., 2020; Komkov & Petiushko, 2019; Sharif et al., 2019; Guo et al., 2021). For example, adv-hat (Komkov & Petiushko, 2019) and adv-patch (Pautov et al., 2019; Yang et al., 2020) put the patch generated based on the gradient information on the forehead or nose to achieve attacks. Adv-glasses (Sharif et al., 2016; 2019) confuse the FR system by placing a printed perturbed eyeglass frame at the eye. The above methods mainly focus on optimizing the perturbation content, and the pasting position of the patch is fixed on a position selected based on experience or some prior knowledge. On the other hand, adv-sticker (Guo et al., 2021) adopts a predefined meaningful adversarial patch but uses an evolutionary algorithm to search for a good patch’s position to perform the attack. The above methods enlighten us that if we optimize the position and perturbations at the same time, better attack performance can be achieved. However, due to the strong coupling relationship between the position and perturbation values, it is difficult to use the simple gradient-based method or alternate iterative optimization to solve them simultaneously, leaving a challenging problem unsolved.
In practical applications, the detailed information about FR threat model usually cannot be accessed. In such case, the existence of transferability of adversarial examples will play an important role. If the adversarial patch generated on the surrogate model has good transferability on the target model, it will
destroy the robustness of the model in an easy-to-implement way, and bring serious consequences. Therefore, an in-depth study on improving the transferability of the adversarial patch can greatly help test the robustness of the FR model in real-world applications.
Based on the above considerations, in this paper, we propose a method to simultaneously optimize the position and perturbations of the adversarial patch to improve the black-box attack performance with limited information. However, directly optimizing them will lead to a large number of queries, especially for the perturbations. Because each pixel can range from 0 to 255, the perturbation searching space is so huge. To tackle this issue, we instead use improved I-FGSM with high transferability (Dong et al., 2018; 2019b; Wang et al., 2021) based on the ensemble of surrogate models (Liu et al., 2016) to generate patch’s perturbations, and then adjust the attack step size and the weight of each surrogate model to achieve the perturbations. Compared with directly optimizing the perturbation value for each pixel, changing the step size and weights can greatly reduce the parameter space and improve the solving efficiency. Based on this setting, the patch’s position, surrogate models’ weights and attack step size are the final key parameters needing to learn.
Naturally, simply using some predefined parameters will result in poor transferability, and thus cannot make the adversarial patch achieve satisfactory adaptability to the target model. To further improve the transferability, we use a small number of queries on the target model to dynamically adjust the attack parameters, and this process can be formulated into the Reinforcement Learning (RL) framework. Specifically, the environment in RL is set as the target model, and an Unet-based (Ronneberger et al., 2015) network is used as the agent to simultaneously predict the parameters (actions) to generate better adversarial patches. By interacting with the target model, the agent can obtain a reward signal feedback to guide its learning by maximizing this reward (Li, 2017). The whole scheme is illustrated in Figure 1.
In summary, this paper has the following contributions:
• We argue that the position and perturbations of adversarial patch are equally important and interact each other closely. Therefore, a method is proposed to simultaneously optimize them to generate transferable adversarial patches, rather than alternate iterative optimization.
• We dynamically adjust the parameters in the transfer-based attack through a small number of queries and formulate the process into a Reinforcement Learning framework, which can guide the agent to generate better parameters with high query efficiency.
• Extensive experiments in dodging and impersonation tasks confirm that our method can realize the state-of-the-art attack performance and query efficiency (maximum success rate of 96.65% with only 11 queries on average), and experiments in the commercial API and physical environment prove the good application value of our method.
2 RELATED WORKS
2.1 ADVERSARIAL PATCH
Compared with Lp norm based adversarial perturbations, adversarial patch (Brown et al., 2017) is a more suitable attack form for real-world applications where objects need to be captured by a camera. Different from pixel-wise imperceptible perturbation, adversarial patch does not restrict the
perturbations’ magnitude. Up to now, adversarial patch has been applied to image classifiers (Jia et al., 2020; Karmon et al., 2018), person detectors (Xu et al., 2020), traffic sign detectors (Eykholt et al., 2018; Liu et al., 2019) and many other security-critical systems. For example, the adversarial T-shirt (Xu et al., 2020) evades the person detector by printing the perturbation patch generated by the gradient information in the optimization framework on the center of the T-shirt. Recently, Rao et al. (2020) also propose to jointly optimize location and content of adversarial patch. However, our method is different from it for two aspects: (1) They optimize the location and content via an alternate iterative manner, while our method simultaneously solves them. Because location and content have strong interactions, simultaneously optimizing them is more reasonable and thus can achieve better solution. (2) They belong to white-box attack against image classification task, while our method is a black-box attack versus face recognition task. Therefore, our task is more challenging.
2.2 ADVERSARIAL PATCH IN THE FACE RECOGNITION
Adversarial patches also bring risks to face recognition and detection tasks, and their attack forms can be roughly divided into two categories. On the one hand, some methods fix the patch on a specific position of the face selected based on the experience or prior knowledge, and then generate the perturbations of the patch. For example, adversarial hat (Komkov & Petiushko, 2019), adv-patch (Pautov et al., 2019) and adversarial glasses (Sharif et al., 2016; 2019) are classical methods against face recognition models which are realized by placing perturbation stickers on the forehead or nose, or putting the perturbation eyeglasses on the eyes. Yang et al. (2020) put universal adversarial patches on the fixed position of the face to prevent the face detectors from detecting the real faces. The main concern of these methods is to mainly focus on generating available adversarial perturbation patterns but without much consideration of the impact of patch’s position versus the attack performance.
On the other hand, some methods fix the content of the adversarial patch and search for the optimal pasting position within the valid pasting area of the face. Adv-sticker (Guo et al., 2021) uses patternfixed stickers existing in the real life, and changes their positions through RHDE algorithm based on the idea of differential evolution to attack FR systems. Inspired by this, we believe that the position and perturbations of the adversarial patch are equally important to attack the face recognition system, and if the two are optimized simultaneously, the attack performance can be further improved.
2.3 DEEP REINFORCEMENT LEARNING
Deep reinforcement learning (DRL) combines the perception ability of deep learning with the decision-making ability of reinforcement learning, so that the agent can make appropriate behaviors through the interaction with the environment (Li, 2017; Dong et al., 2019a). It receives the reward signal to evaluate the performance of an action taken through the agent without any supervisory information, and can be used to solve multiple tasks such as parameter optimization and computer vision (Li, 2017). In this paper, we use the information obtained by querying the target model to solve the attack parameters to generate transferable adversarial examples, which can be formalized as the process of using reward signals to guide the agent’s learning in the reinforcement learning. Therefore, an agent based on Unet (Ronneberger et al., 2015) is designed to learn parameter selection policies, and generate better attack parameters under the reward signals obtained by querying the target model.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
In the face recognition task, given a clean face image x, the goal of the adversarial attack is to make the face recognition model predict a wrong identity of the perturbed face image xadv . Formally, the perturbed face with the adversarial patch can be formulated as Eq. (1), where is Hadamard product and x̃ is the adversarial patch with perturbations across the whole face image. A is a mask matrix to constrain the shape and pasting position of the patch, where the value of the pasting area is 1.
xadv = (1−A) x+A x̃ (1)
The previous methods either optimize x̃ with pre-fixed A, or fix x̃ to select the optimal A. In our method, we optimize A and x̃ simultaneously to further improve the attack performance. For the optimization of the mask matrix A, we fix the shape and the size (sh, sw) of the adversarial patch, and change its upper-left coordinates (cx, cy) to adjust the mask matrix, and the corresponding mask is defined as Ac. In order not to interfere with liveness detection module, we limit the pasting
position to the area that does not cover the facial features like eyes, mouth, etc. For the detailed process, please refer to Appendix A. Thus, in our method, the adversarial patch can be expressed as:
xadv = (1−Ac) x+Ac x̃ (2) To generate the perturbation content, the ensemble attack (Liu et al., 2016) based on MI-FGSM (Dong et al., 2018) is here used to generate the transferable patch. For the ensemble attack of n surrogate models, let ρi denote the weight of each surrogate model fi and denote the attack step size. Then taking the un-targeted attack (or dodging in the face recognition case) as an example, given the ground truth identity y, let fi(x, y) denote the confidence score that the model predicts a face image x as identity y, then x̃ can be computed by an iteration way. Let t denote the t-th iteration, then:
xadv = x̃t+1 = (1−Ac) x+Ac (x̃t + · sign (gt+1)) (3)
gt+1 = µ · gt + ∇xL (x̃t, y) ‖∇xL (x̃t, y)‖1
(4)
L (x̃t, y) = n∑ i=1 ρi · ` (x̃t, y, fi) (5)
where ` (x̃t, y, fi) = 1− fi (x̃t, y) and ∑n i=1 ρi = 1. For the targeted attack (or impersonation in the face recognition case), given the target identity ŷ, ` can be simply replaced by fi (x̃t, ŷ).
Our attack goal is to simultaneously optimize both the patch position and the perturbation to generate good transferable adversarial patches to attack the target model. Therefore, the mask Ac, the attack step size in Eq.(3) and weights ρi in Eq.(5) are set as the learned parameters. To make the parameters more suitable for the target model, we adjust the parameters dynamically through a small number of queries on the target model. The details of solving these parameters are shown in Sec. 3.2.
3.2 ATTACKS BASED ON RL
3.2.1 FORMULATION OVERVIEW USING RL
The generation of the transferable adversarial patch on the surrogate model is guided by the information returned by querying the target model, and this process can be represented as the learning of the agent through the reward signal obtained by interacting with the environment in the reinforcement learning (Li, 2017). So an agent is constructed to learn the selection policy of the attack parameters.
In our method, the parameter values are defined as the actions generated by the agent under the guidance of the policy π, and at denotes the t-th action (i.e., the value of t-th parameter). The image feature input to the agent is defined as the state s, and the environment is the threat model F (·). The policy function πθ(a |s) with parameters θ is a rule used by the agent to decide what action to take, which can be formulated as a probability distribution of action a in the state s.
The reward reflects the performance of the currently generated adversarial patch on the target model, and the training goal of the agent is to learn good policies to maximize the reward signal. In face recognition, the goal of the dodging attack is to generate images that are as far away as possible from the ground-truth identity, while impersonation attacks want to generate images that are as similar as possible to the target identity. Thus, the reward function R is formalized as:
R =
{ ` ( xadv, y, F ) = 1− F ( xadv, y ) if dodging
` ( xadv, ŷ, F ) = F ( xadv, ŷ ) if impersonation (6)
In iterative training, the agent firstly predicts a set of parameters according to policy π, and then the adversarial patch based on the predicted parameters is generated. Finally the generated adversarial face image is input to the threat model to obtain the reward value. In this process, policy gradient algorithm (Sutton et al., 1999) is used to guide the agent’s policy update. After multiple pieces of training, the agent will generate actions that perform well on the threat model with a high probability.
3.2.2 DESIGN OF THE AGENT
The agent needs to learn the policies of the position, weights and attack step size. Considering the simultaneous solution of the position and other parameters, the design of the agent uses the U-net (Ronneberger et al., 2015) based structure, which can output the feature map of the same length and width as the image matrix fed into the network. Let the number of surrogate models be n, we design
the agent to output the feature map with n channels and the same length h and width w as the input image (i.e. the size is n× h× w). In each channel Mi(i = 1, . . . , n) of the feature map M , the relative value of each pixel point represents the importance of each position for the surrogate model fi, and the overall value of the channel reflects the importance of the corresponding surrogate model. We believe that the patch requires different attack strengths in different locations, so at the top layer of the agent network, a fully connected layer is used to map the feature map M to a vector V representing different attack step values. The structural details of the agent are shown in Figure 2.
Specifically, for the position action, the optional range of positions is discrete, so the position policy π1θ is designed to follow Categorical distribution (Murphy, 2012). Given the probability Pposition of each selected position, the position parameters (cx, cy) ∼ Cat (Pposition ), and Pposition is computed:
Pposition = 1
n n∑ i=1 softmax (Mi) (7)
For the weight actions, the weight ratio of the loss on each surrogate model fi to the ensemble loss is a continuous value, and we set the weight policy π2θ to follow Gaussian distribution (Murphy, 2012). So the i-th weight parameter ρi ∼ N ( µfi , σ 2 ) (i=1, . . . , n), and µfi is calculated as:
µfi = softmax ( M1,M2, . . . ,Mn ) i
(8)
where Mi refers to the mean value of the i-th channel in the feature map, and σ is a hyperparameter. In the actual sampling, we use the clipping operation to make the sum of weights equal to 1.
For the attack step action, we set 20 values in the range of 0.01 to 0.2 at intervals of 0.01, and adopt Categorical distribution (Murphy, 2012) as the step size policy π3θ due to the discreteness of the values. So the step size parameter ∼ Cat (pstep), and probability pstep of each candidate value is:
pstep = softmax (FC (Pposition)) (9)
By sampling from the corresponding distribution, we can obtain (cx, cy), ρi(i = 1, ..., n) and .
3.2.3 POLICY UPDATE
In the agent training, the goal is to make the agent hθ learn a good policy πθ with parameters θ to maximize the expectation of the reward R. Assume that there are T attack parameters to be solved, and τ = (s, a1, a2, . . . , aT ) is a decision result, then the optimal policy parameters θ∗ can be formulated as:
θ∗ = argmax θ J(θ) = argmax θ Eτ∼πθ(a|s)[R(τ)] (10)
We use the policy gradient Sutton et al. (1999) method to solve θ∗ by the gradient ascent method, and follow the REINFORCE algorithm (Williams, 1992), using the average value of N sampling of the policy function distribution to approximate the policy gradient∇θJ(θ):
∇θJ(θ) = Eτ∼πθ(a|s) [ T∑ t=1 ∇θ log πθ (at |s)R(τ) ] ≈ 1 N N∑ n=1 T∑ t=1 ∇θ log πθ (at |s)Rn (11)
Algorithm 1 Simultaneous optimization for position and perturbations of adversarial patch Input: Face image x, target ŷ, agent hθ, threat model F (·), n surrogate models fi(i=1, . . . , n),
iterations Z in MI-FGSM, sample times N , patch size sh, sw, learning rate α, training epoch K. Output: xadv
1: for k = 1 to K do 2: feature map M ← hθ; policies π1θ , π2θ , π3θ ← according to Eq. (7), Eq. (8), Eq. (9) ; 3: for i = 1 to N do 4: actions (cx, cy, ρ1, ρ2, . . . , ρn, )← sampling from π1θ , π2θ , π3θ ; A ← (cx, cy) and sh, sw ; 5: for t = 1 to Z do 6: gt ← according to Eq. (5) and Eq. (4) ; x̃t← according to Eq. (3); 7: end for 8: Update Ri← according to Eq. (6); 9: end for
10: ∇θJ(θ)← according to Eq. (11) ; θ← θ + α · ∇θJ(θ) ; 11: if F ( xadv ) = ŷ then break; 12: end for 13: return xadv
where Rn is the reward in the n-th sampling. When updating the policy with the parameter θ, the reward R can be regarded as the step size. The larger the reward, the larger the step size. If the reward is negative, it will go to the opposite direction. In this way, the agent can learn good policy functions with the update of θ in the direction of increasing the reward.
For actions that follow the Categorical policy (i.e., the position parameter and attack step size), given the probability vector p (i.e., Pposition and pstep in Eq. (7) and Eq. (9)), let p(a) denote the probability of action a in the probability vector p, then for π1θ and π 3 θ , ∇θ log πθ(a | s) in Eq. (11) can be calculated as:
∇θ log πθ(a | s) = d log p(a)
dθ (12)
For actions that follow the Gaussian policy distribution (i.e. the weights of the surrogate models), the mean value µf of Gaussian distribution is calculated by the output of the agent, so µf can be expressed as hθ(s) = µf . Therefore, for π2θ following the Gaussian policy, ∇θ log πθ(a | s) in Eq. (11) can be calculated as follows:
∇θ log πθ(a | s) = ∇θ [ − (a− hθ(s)) 2
2σ2 − log(σ)− log(
√ 2π) ] = a− hθ(s)
σ2 · dh dθ
(13)
3.3 OVERALL FRAMEWORK
The complete process of our method to simultaneously optimize the position and perturbation is given in Algorithm 1. In the K iterations of the agent training, policy functions are firstly calculated according to the output of the agent, and then perform N sampling according to the probability distribution of the policy function to generate N sets of parameters. According to each set of parameters, the attack is conducted on surrogate models and the generated transferable adversarial examples are used to query the target model to obtain the reward, and then the policy is updated. During this process, if a successful attack has been achieved, the iteration is stopped early.
Furthermore, our simultaneous optimization method can also be combined with other existing blackbox attack methods like gradient estimation to further enhance the attack performance. Detailed combination process and experimental results are shown in Appendix B.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Target models: We choose five face recognition models as threat models, including four representative open-source face recognition models (i.e. FaceNet (Schroff et al., 2015), CosFace50 (Wang et al., 2018), ArcFace34 and ArcFace50 (Deng et al., 2019)) and one commercial face recognition API 1. During the attack, the four open-source models are candidates for surrogate models, and each model and their ensemble version that excludes the target model will be used as surrogate models.
1https://intl.cloud.tencent.com/product/facerecognition
Face database: We randomly select 5,752 different people from Labeled Faces in the Wild (LFW)2 and CelebFaces Attribute (CelebA)3 to construct the face database. We use the above models to extract the face features, and then calculate the cosine similarity with all identities in the face database to perform the 1-to-N identification.
Metrics: Two metrics, attack success rate (ASR) and the number of queries (NQ) are used to evaluate the performance of attacks. ASR refers to the proportion of images that are successfully attacked in all test face images, where we ensure that the clean test images selected in the experiment can be correctly identified. NQ refers to the number of queries to the target model required by the adversarial patch that can achieve a successful attack.
Implementation: The size of the adversarial patch is set as sh = 25, sw = 30, the number of sampling N in the policy gradient method is set to 5, and the variance σ of the Gaussian policy is equal to 0.01. Other parameters are set as Z = 150, α = 0.001, and K = 50.
4.2 EXPERIMENTAL RESULTS
4.2.1 PERFORMANCE OF SIMULTANEOUS OPTIMIZATION
We first evaluate our simultaneous optimization method qualitatively and quantitatively according to the setting in Section 4.1. Table 1 shows the quantitative results of dodging and impersonation attacks under the black-box setting against the five target models. For surrogate models, we explore the relationships between individual models and the model ensemble using single models and their ensemble version excluding target models, respectively. Figure 3 shows some visual examples of the position and pattern at different stages of the attack. More visual results are shown in Appendix C.
From the results in Table 1, we can see : (1) the proposed method achieves good attack success rates and query efficiency under both two kinds of attacks. The dodging attack achieves the highest success rate of 96.65% under 11 queries, while the impersonation attack achieves the highest success rate of 72.83% under 27 queries. (2) The performance of using ensemble models to attack is better than that using a single model as the surrogate model, which shows that joint optimization can adaptively adjust
2http://vis-www.cs.umass.edu/lfw/ 3http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Figure 4: The weights of each surrogate model in the ensemble attack using simultaneous optimization.
Figure 5: ASR and the NQ (shown in brackets) when using Position (P), Position and weights (P+W), and Position, weights and step size (P+W+S) respectively.
the weights of different surrogate models to achieve the best performance. (3) For the relationship between the surrogate model and the target model, when the two are structurally similar, it is more likely to help the attack. For example, for ArcFace50, CosFace50 has a greater influence on it. This is because the backbone of ArcFace50 and CosFace50 are both ResNet50-IR (Deng et al., 2019), and the backbone of FaceNet and ArcFace34 are Inception-ResNet-v1 (Szegedy et al., 2017) and ResNet34-IR (Deng et al., 2019) respectively. For the influence of model training loss, Facenet uses Euclidean (Schroff et al., 2015) rather than cosine space metric, which is less helpful to ArcFace and CosFace using angular space loss metric when used as a surrogate model alone, with ASR of dodging only 16.89% and 11.47%, respectively. Therefore, in ensemble attacks, we can dynamically adjust the importance of different models by optimizing their weights. (4) On the commercial API, performance drops slightly compared to the open-source model, but remains at an acceptable level. This is because commercial FR services introduce some defense measures like image compression.
4.2.2 COMPARISONS WITH SOTA METHODS
To demonstrate its superiority, we compare our method with three baselines: relying solely on transferability, fixing the perturbation and changing only the position, and randomly initializing the position and optimizing only the perturbation. Specifically, for the method relying solely on transferability, we use TI-MI-FGSM (Dong et al., 2019b) to generate adversarial examples on the surrogate model and then directly transfer them to the target model to test their performance. For the position-only method, we adopt the recent RHDE (Guo et al., 2021) method, which fixes the content of the patch and searches for a good position on the face. For the perturbation-only method, we first randomly initialize the position and then use ZO-AdaMM (Chen et al., 2019) to obtain the adversarial perturbation. The results on three target models are shown in Table 2.
From the results, we can see that: (1) Relying solely on transferability is relatively simple, but its average attack success rate of 12.09% is not satisfactory. (2) The average success rates of RHDE and ZO-AdaMM are 38.97% and 46.85%, respectively, while our method achieves 65.61%, which proves that, compared with methods that optimize only the position or only the perturbation, our joint optimization combines these two attack modes more effectively. (3) Our method achieves the best query efficiency among the compared methods, requiring only a few dozen queries.
4.2.3 ABLATION STUDY
In order to verify the effectiveness of each attack parameter used in our method, we also report the results when each component is added separately. First, we fix the weights to equal values and the step size to 0.1 to test the performance when only the position parameter is changed. Then, we add the weight parameters to learn the importance of each surrogate model. Finally, we add the step size parameter to carry out the overall learning. The results in Figure 5 show that learning only the position parameter already achieves an average success rate of 69.80%. The performance is greatly improved after adding the weights, and further improved after adding the step size. All parameters contribute to the overall attack effect, and the weight parameters have a greater impact on the results than the step size. As components are added, the number of queries (shown in brackets) remains at roughly the same level.
Table 3: Success rate of dodging (D) and impersonation (I) attacks in the physical environment.
Table 4: Success rate in the physical world when changing distance, lighting and face postures, including frontal (0◦), yaw rotation of ±25◦ and ±45◦, and pitch rotation of ±15◦.
We also collect statistics on the weights of each model under simultaneous optimization by taking all four models as surrogate models; Figure 4 shows the results on a face image when attacking each of the four threat models. It can be seen that the target's own model receives the highest weight, which indicates that our method can identify surrogate models similar to the target model during learning, and the weights of the other three models are positively correlated with the ASR shown in Table 1.
4.2.4 ATTACKS IN THE PHYSICAL WORLD
In this section, we show the results of our adversarial patch in the physical environment. We first perform simulated successful attacks on different subjects in the digital environment, and then conduct experiments in the physical world; the technical details of the implementation are given in Appendix D. We record a video while the face moves within 5◦ of the current posture, and we take as the attack success rate the proportion of successful frames when the recognition result is checked every 10 frames, at a frame rate of 30 fps, over one minute. Table 3 shows the results for the frontal face. To test the performance under various physical conditions, we further vary the face posture, the distance from the camera and the illumination. For face postures, we consider the frontal face, yaw rotations of ±25◦ (reported as the mean over +25◦ and −25◦) and ±45◦, and pitch rotations of ±15◦; these results are shown in Table 4. Table 3 shows that our method maintains a high physical ASR (100.0% and 83.33%) for both dodging and impersonation attacks on FaceNet, and the results are also good on ArcFace34 and CosFace50. Although there is a slight decline on the commercial API, success ratios of 33.16% and 25.23% of the frames are enough to pose potential risks to commercial applications. In Table 4, when the pose changes within a small range (yaw ±25◦, pitch ±15◦), the ASR remains high; even at the larger deflection of yaw ±45◦, it still maintains an average of 31.13%. The effect of distance is small, and under changed lighting the results remain at an acceptable level. When performing impersonation attacks under different conditions, the face is sometimes recognized as a false identity different from the true identity but not the target identity, which slightly lowers the result. A set of visual results is shown in Figure 6.
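As a small illustration of this frame-based metric, and only under the sampling assumptions stated above (one-minute video, 30 fps, one check every 10 frames), the success rate could be computed as follows; `recognized_id` and `goal_met` are hypothetical helpers wrapping the FR system and the dodging/impersonation criterion.

```python
def physical_asr(frames, recognized_id, goal_met, fps=30, stride=10, seconds=60):
    """Proportion of sampled frames on which the physical attack succeeds."""
    sampled = frames[:fps * seconds:stride]   # one check every `stride` frames
    hits = sum(goal_met(recognized_id(f)) for f in sampled)
    return hits / len(sampled)
```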
5 CONCLUSION
In this paper, we proposed a method that simultaneously optimizes the position and content of the patch to create more transferable adversarial patches. The content is generated by an ensemble-model attack, and the position, model weights and attack step size are treated as parameters. These parameters are dynamically adjusted through a small number of queries to the target model in a reinforcement learning framework, making the patch better suited to the target model. Extensive experiments demonstrated that our method effectively improves the attack success rate and the query efficiency, and experiments on a commercial face recognition API and in physical environments confirmed the practical application value of our method. Besides, the proposed method can also be adapted to other applications, such as autonomous driving.
A OPTIONAL AREA OF THE PASTING POSITIONS
In this section, we give more details about changing the pasting position of the adversarial patch.
It is worth noting that, in order not to interfere with the liveness detection module and to maintain the concealment of the attack, the area that does not cover the facial features (e.g., cheeks and forehead) is regarded as the optional pasting area. In practical applications, a liveness detection module is often used in combination with face recognition to confirm the real physiological characteristics of the subject and to exclude attempts to replace real faces with photos, masks, etc. (Ming et al., 2020). It is mainly based on the depth or texture characteristics of the facial skin or on movements of the subject (such as blinking and opening the mouth), so the patch cannot be pasted on areas that cover the facial features (such as the eyes and mouth).
Specifically, we first use the dlib library to extract 81 facial landmarks and determine the effective pasting region. Figure 7 shows some examples of the effective pasting area of a face. After calculating the probability Pposition of each position through the output of the agent (see Eq. (7) in the paper), we set the probabilities of the invalid positions to 0 and then sample the pasting position.
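A minimal sketch of this masking step is given below. Note that the widely distributed dlib predictor has 68 landmarks; the 81-point model referenced here is a community extension, so the model path, the choice of forbidden landmark indices, and the renormalisation are illustrative assumptions rather than the authors' exact code.

```python
import dlib
import numpy as np
import cv2

detector = dlib.get_frontal_face_detector()
# Hypothetical path to an 81-landmark predictor (a community extension of dlib).
predictor = dlib.shape_predictor("shape_predictor_81_face_landmarks.dat")

def valid_position_mask(img_gray, forbidden_ids):
    """Return an (h, w) mask: 1 where the patch may be placed, 0 on facial features."""
    mask = np.ones(img_gray.shape[:2], dtype=np.float64)
    face = detector(img_gray)[0]          # assumes one face is found
    pts = predictor(img_gray, face)
    hull = cv2.convexHull(np.array(
        [(pts.part(i).x, pts.part(i).y) for i in forbidden_ids], dtype=np.int32))
    cv2.fillConvexPoly(mask, hull, 0.0)   # forbid the eyes/nose/mouth region
    return mask

def sample_position(P_position, mask):
    """Zero out invalid positions, renormalise, then sample (cy, cx)."""
    P = P_position * mask
    P = P / P.sum()
    idx = np.random.choice(P.size, p=P.ravel())
    return np.unravel_index(idx, P.shape)
```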
B INTEGRATION WITH GRADIENT ESTIMATION
Our simultaneous optimization method can also be combined with the gradient estimation (e.g., Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017), natural evolution strategy (Ilyas et al., 2018), random gradient estimation (Tu et al., 2019)). It can provide a good initial position and pattern for gradient estimation to further enhance the attack performance. Here we take Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017) as an example to describe the calculation method and the experiment.
B.1 TECHNICAL DETAILS
In the Zeroth-Order (ZO) optimization, to expand the optimization range, x is often replaced with (1 + tanh φ)/2 (Chen et al., 2017). Since the symmetric difference quotient (Lax & Terrell, 2014) is used to add a small offset at φ in the gradient estimation process, the gradient of x at φ (i.e., ∇φx) cannot be too small. Therefore, ∇φx is calculated as follows:

∇φx = (1/2) (1 − tanh²(φ))    (14)
Therefore, when combining the two methods, in addition to considering the attack target, it is also necessary to ensure that the gradient of the generated perturbation in φ-space meets the above requirement. So the loss function L(·, ·) in Eq. (5) is modified as:

L(x̃t, y) = Σ_{i=1}^{n} ρi · ℓ(x̃t, y, fi) + (β / (sh · sw)) Σ_{i=1}^{sh} Σ_{j=1}^{sw} ∇φij x    (15)

where sh and sw denote the height and width of the perturbation patch pasted onto the face, and β is a scale factor. Given the pasting coordinates (cx, cy), φij = arctanh(2 · x̃t(cy+i, cx+j) − 1).
Through the simultaneous optimization, we obtain an adversarial patch with good position and perturbation. Next, we regard it as the initialization of the gradient estimation. Specifically, we fix the position and use the gradient estimation method to only refine the perturbation by querying the threat models. After several iterations, the transferability of the adversarial patch is further improved.
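To make Eqs. (14)-(15) concrete, the snippet below sketches the extra regularisation term in φ-space; it is one reading of the formulas above (with PyTorch tensors standing in for the patch), not the authors' released implementation, and the averaging folds the 1/(sh·sw) factor of Eq. (15) into a mean.

```python
import torch

def phi_gradient_penalty(x_tilde, cy, cx, sh, sw, beta):
    """Second term of Eq. (15): scaled mean of grad_phi x over the pasted patch."""
    patch = x_tilde[..., cy:cy + sh, cx:cx + sw].clamp(1e-6, 1 - 1e-6)
    phi = torch.atanh(2.0 * patch - 1.0)             # x = (1 + tanh(phi)) / 2
    grad_phi_x = 0.5 * (1.0 - torch.tanh(phi) ** 2)  # Eq. (14)
    return beta * grad_phi_x.mean()

# The modified ensemble loss of Eq. (15) would then read, schematically:
# L = sum(rho_i * ell_i ...) + phi_gradient_penalty(x_tilde, cy, cx, sh, sw, beta)
```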
B.2 EXPERIMENTAL RESULTS
The results in Table 5 show the performance of the integration with gradient estimation. It can be seen that after the combination of gradient estimation, the results have been improved to some extent compared with Table 1, but the consumption of queries has increased accordingly. For example, the success rate of impersonation attacks on CosFace50 increases from 40.08% to 71.28%, but the number of queries increases from 77 to 1575. Therefore, in practical applications, it is necessary to weigh the importance of success rate and query efficiency to determine whether to combine the gradient estimation method.
In order to confirm that the improvement in ASR is due to the good initial value provided by our simultaneous optimization method for the gradient estimation, rather than to the gradient estimation method itself, a comparison is made in Table 6. The results show that simultaneous optimization (SO) achieves a good ASR with only a few queries (fewer than 40) for both dodging and impersonation. When combined with Zeroth-Order (ZO) optimization, the ASR is further improved (from 72.83% to 87.37% for impersonation) but introduces more query consumption. In contrast, Random+ZO uses many more queries (1972 and 2874, respectively) yet achieves much lower ASR (65.79% and 50.52%, respectively). This contrast proves that our SO method indeed provides a good initialization for the gradient estimation.
C MORE VISUAL RESULTS
In this section, we show more visual results of the simultaneous optimization (SO) attacks. Figure 8 shows four groups of visual results. For each group of three images, the first one represents the clean image, the second one represents the image after the attack, and the third one represents the image corresponding to the wrong identity in the face database.
The above results are obtained while ensuring that the patch does not cover the facial features. We also conduct experiments in which patches are allowed to cover facial features within a small range (no more than 20 pixels); the results are shown in Figure 9. Interestingly, we find that the generated pattern resembles the facial feature at the corresponding position, but its shape has been changed. For example, in the last group of pictures in Figure 9, the adversarial patch is pasted on the left eyebrow, and the generated pattern resembles an eyebrow whose shape has been altered, which may mislead the network's extraction of eyebrow features. This inspires us to use information from facial features (e.g., eyes, eyebrows, nose and mouth) to mislead the feature extraction process of face recognition and thereby achieve better attack performance.
D IMPLEMENTATION DETAILS OF THE PHYSICAL ATTACK
To make the digital simulation results better adapt to the physical environment, we improve the smoothness of the pattern. Specifically, during each iteration of pattern generation, we first obtain a pattern half the size of the original patch by downscaling or average pooling, and then enlarge the image back to the original size by bilinear interpolation. The reduced pattern retains the key information of the pattern, and after this rescaling the smoothness of the image is improved, avoiding the problem that a per-pixel pattern is sensitive to the exact position when transferred to the physical environment: even if the pasting position is off by a few pixels, the attack effect is preserved. We also tried using a Total Variation (TV) loss (Sharif et al., 2019) to enhance smoothness, but in practice it worked less well than the rescaling process. When printing, we use photo paper rather than ordinary paper as the patch material to reproduce the colors of the digital simulation as realistically as possible. | 1. What is the focus of the paper regarding generating transferable adversarial patches?
2. What are the strengths of the proposed approach, particularly in comparison to prior works?
3. What are the weaknesses of the paper, especially regarding its experimental setup and potential applications?
4. How does the reviewer assess the novelty and effectiveness of the proposed method?
5. Are there any concerns or questions about the practicality and applicability of the method in real-world scenarios? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a method to simultaneously optimize the position and perturbation of an adversarial patch to generate transferable adversarial patches. The content is generated by an ensemble model attack, and the position, model weights, and attack step size are set as parameters. These parameters are dynamically adjusted through a small number of queries in a reinforcement learning framework. The method not only outperforms previous methods in attack success rate but also significantly improves the query efficiency, and experiments in the physical environment confirm the practical application value of the method.
Review
Strengths: This paper generates transferable adversarial patches by optimizing the position and perturbation simultaneously, which exhibits good novelty compared with previous methods that optimize perturbation values while fixing the position, or optimize the position while fixing the content of the patch. The authors use a reinforcement learning framework to solve this problem and significantly improve the attack success rate with a small number of queries. The authors provide the code to ensure reproducibility.
Weaknesses: Although this paper shows the results of the adversarial patch in the physical environment, there are no experiments verifying the effect of the proposed attacks on models with defense measures, such as adversarial training. In addition, simultaneously optimizing the location and content yields a better attack effect on models without defensive measures; is this consistent on models with defensive measures?
ICLR | Title
Generating Transferable Adversarial Patch by Simultaneously Optimizing its Position and Perturbations
Abstract
Adversarial patches are an important form of adversarial attack in the real world and bring serious risks to the robustness of deep neural networks. Previous methods generate adversarial patches by either optimizing their perturbation values while fixing the position on the image or manipulating the position while fixing the content of the patch. In this paper, we propose a method that simultaneously optimizes the position and perturbation to generate transferable adversarial patches, and thus obtains high attack success rates in the black-box setting. We adjust the transferability by taking the position, the weights of the surrogate models in the ensemble attack and the attack step size as parameters, and utilize a reinforcement learning framework to solve for these parameters simultaneously, based on the reward information obtained from the target model with a small number of queries. Extensive experiments are conducted on the Face Recognition (FR) task, and the results on four representative FR models demonstrate that our method can significantly improve the attack success rate and the query efficiency. Besides, experiments on a commercial FR service and in physical environments confirm the practical application value of our method.
1 INTRODUCTION
Deep neural networks (DNNs) have exposed their vulnerability to adversarial attacks, where adding small imperceptible perturbations to an image can confuse their predictions. However, such small perturbations are not suitable for real applications, where images are captured through a camera. A more practical form is a local, patch-like perturbation whose pixel-value changes are not restricted in magnitude. By printing the adversarial patch and pasting it on an object, attacks in real scenes can be realized, and such patches have brought security threats to many tasks such as person detection (Xu et al., 2020), traffic sign recognition (Eykholt et al., 2018; Liu et al., 2019), and image classification (Karmon et al., 2018).
Among these tasks, Face Recognition (FR) is particularly safety-critical, and adversarial patches have also been successfully applied in this area (Yang et al., 2020; Komkov & Petiushko, 2019; Sharif et al., 2019; Guo et al., 2021). For example, adv-hat (Komkov & Petiushko, 2019) and adv-patch (Pautov et al., 2019; Yang et al., 2020) place a patch generated from gradient information on the forehead or nose to achieve attacks. Adv-glasses (Sharif et al., 2016; 2019) confuse the FR system by placing a printed perturbed eyeglass frame over the eyes. The above methods mainly focus on optimizing the perturbation content, with the pasting position fixed at a location selected from experience or prior knowledge. On the other hand, adv-sticker (Guo et al., 2021) adopts a predefined meaningful adversarial patch but uses an evolutionary algorithm to search for a good pasting position. These methods suggest that optimizing the position and the perturbations at the same time should achieve better attack performance. However, due to the strong coupling between the position and the perturbation values, it is difficult to solve for them simultaneously with simple gradient-based methods or alternate iterative optimization, leaving a challenging problem unsolved.
In practical applications, detailed information about the FR threat model usually cannot be accessed. In such cases, the transferability of adversarial examples plays an important role. If the adversarial patch generated on the surrogate model has good transferability to the target model, it will
destroy the robustness of the model in an easy-to-implement way, and bring serious consequences. Therefore, an in-depth study on improving the transferability of the adversarial patch can greatly help test the robustness of the FR model in real-world applications.
Based on the above considerations, in this paper we propose a method that simultaneously optimizes the position and perturbations of the adversarial patch to improve black-box attack performance with limited information. However, directly optimizing both would require a large number of queries, especially for the perturbations: because each pixel can range from 0 to 255, the perturbation search space is huge. To tackle this issue, we instead use an improved I-FGSM with high transferability (Dong et al., 2018; 2019b; Wang et al., 2021), based on an ensemble of surrogate models (Liu et al., 2016), to generate the patch's perturbations, and then adjust the attack step size and the weight of each surrogate model. Compared with directly optimizing the perturbation value of each pixel, changing the step size and weights greatly reduces the parameter space and improves the solving efficiency. Under this setting, the patch's position, the surrogate models' weights and the attack step size are the key parameters to learn.
Naturally, simply using predefined parameters results in poor transferability and thus cannot give the adversarial patch satisfactory adaptability to the target model. To further improve the transferability, we use a small number of queries to the target model to dynamically adjust the attack parameters, and this process can be formulated in the Reinforcement Learning (RL) framework. Specifically, the environment in RL is the target model, and a U-Net-based (Ronneberger et al., 2015) network is used as the agent that simultaneously predicts the parameters (actions) used to generate better adversarial patches. By interacting with the target model, the agent obtains a reward signal that guides its learning through reward maximization (Li, 2017). The whole scheme is illustrated in Figure 1.
In summary, this paper has the following contributions:
• We argue that the position and the perturbations of an adversarial patch are equally important and interact closely with each other. We therefore propose a method that optimizes them simultaneously to generate transferable adversarial patches, rather than through alternate iterative optimization.
• We dynamically adjust the parameters in the transfer-based attack through a small number of queries and formulate the process into a Reinforcement Learning framework, which can guide the agent to generate better parameters with high query efficiency.
• Extensive experiments on dodging and impersonation tasks confirm that our method achieves state-of-the-art attack performance and query efficiency (a maximum success rate of 96.65% with only 11 queries on average), and experiments on a commercial API and in the physical environment demonstrate the practical application value of our method.
2 RELATED WORKS
2.1 ADVERSARIAL PATCH
Compared with Lp norm based adversarial perturbations, adversarial patch (Brown et al., 2017) is a more suitable attack form for real-world applications where objects need to be captured by a camera. Different from pixel-wise imperceptible perturbation, adversarial patch does not restrict the
perturbations' magnitude. To date, adversarial patches have been applied to image classifiers (Jia et al., 2020; Karmon et al., 2018), person detectors (Xu et al., 2020), traffic sign detectors (Eykholt et al., 2018; Liu et al., 2019) and many other security-critical systems. For example, the adversarial T-shirt (Xu et al., 2020) evades a person detector by printing, on the center of the T-shirt, a perturbation patch generated from gradient information within an optimization framework. Recently, Rao et al. (2020) also proposed to jointly optimize the location and content of an adversarial patch. However, our method differs from it in two respects: (1) they optimize the location and content in an alternate iterative manner, while our method solves for them simultaneously; because location and content interact strongly, optimizing them simultaneously is more reasonable and can achieve a better solution. (2) Their method is a white-box attack on image classification, while ours is a black-box attack on face recognition, which is a more challenging task.
2.2 ADVERSARIAL PATCH IN THE FACE RECOGNITION
Adversarial patches also bring risks to face recognition and detection tasks, and their attack forms can be roughly divided into two categories. On the one hand, some methods fix the patch at a specific position on the face, selected from experience or prior knowledge, and then generate the perturbations of the patch. For example, the adversarial hat (Komkov & Petiushko, 2019), adv-patch (Pautov et al., 2019) and adversarial glasses (Sharif et al., 2016; 2019) are classical attacks on face recognition models, realized by placing perturbation stickers on the forehead or nose, or perturbed eyeglasses over the eyes. Yang et al. (2020) put universal adversarial patches at a fixed position on the face to prevent face detectors from detecting real faces. These methods mainly focus on generating effective adversarial perturbation patterns, without much consideration of the impact of the patch's position on attack performance.
On the other hand, some methods fix the content of the adversarial patch and search for the optimal pasting position within the valid pasting area of the face. Adv-sticker (Guo et al., 2021) uses pattern-fixed stickers that exist in real life and changes their positions through the RHDE algorithm, based on the idea of differential evolution, to attack FR systems. Inspired by this, we believe that the position and the perturbations of the adversarial patch are equally important for attacking face recognition systems, and that optimizing the two simultaneously can further improve attack performance.
2.3 DEEP REINFORCEMENT LEARNING
Deep reinforcement learning (DRL) combines the perception ability of deep learning with the decision-making ability of reinforcement learning, so that an agent can choose appropriate behaviors through interaction with the environment (Li, 2017; Dong et al., 2019a). It uses a reward signal to evaluate the action taken by the agent, without any supervisory information, and can be applied to tasks such as parameter optimization and computer vision (Li, 2017). In this paper, we use the information obtained by querying the target model to solve for the attack parameters that generate transferable adversarial examples, which can be formalized as using reward signals to guide the agent's learning in reinforcement learning. Therefore, a U-Net-based agent (Ronneberger et al., 2015) is designed to learn parameter selection policies and to generate better attack parameters under the reward signals obtained by querying the target model.
3 METHODOLOGY
3.1 PROBLEM FORMULATION
In the face recognition task, given a clean face image x, the goal of the adversarial attack is to make the face recognition model predict a wrong identity for the perturbed face image xadv. Formally, the perturbed face with the adversarial patch can be formulated as Eq. (1), where ⊙ is the Hadamard product and x̃ is the adversarial patch with perturbations across the whole face image. A is a mask matrix that constrains the shape and pasting position of the patch, where the value of the pasting area is 1.

xadv = (1 − A) ⊙ x + A ⊙ x̃    (1)
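As a direct transcription of Eq. (1), the patched image could be composed as below; the array shapes and the construction of the binary mask from the upper-left corner are illustrative assumptions.

```python
import numpy as np

def paste_patch(x, x_tilde, cy, cx, sh, sw):
    """x_adv = (1 - A) * x + A * x_tilde, with A a binary mask (Eq. (1))."""
    A = np.zeros_like(x)
    A[cy:cy + sh, cx:cx + sw] = 1.0      # pasting area set to 1
    return (1.0 - A) * x + A * x_tilde
```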
The previous methods either optimize x̃ with a pre-fixed A, or fix x̃ and select the optimal A. In our method, we optimize A and x̃ simultaneously to further improve the attack performance. For the optimization of the mask matrix A, we fix the shape and size (sh, sw) of the adversarial patch and change its upper-left coordinates (cx, cy) to adjust the mask matrix; the corresponding mask is denoted Ac. In order not to interfere with the liveness detection module, we limit the pasting
position to the area that does not cover the facial features like eyes, mouth, etc. For the detailed process, please refer to Appendix A. Thus, in our method, the adversarial patch can be expressed as:
xadv = (1 − Ac) ⊙ x + Ac ⊙ x̃    (2)

To generate the perturbation content, the ensemble attack (Liu et al., 2016) based on MI-FGSM (Dong et al., 2018) is used to generate the transferable patch. For the ensemble attack over n surrogate models, let ρi denote the weight of each surrogate model fi and ε denote the attack step size. Taking the un-targeted attack (dodging, in the face recognition case) as an example, given the ground-truth identity y, let fi(x, y) denote the confidence score with which the model predicts the face image x as identity y; then x̃ can be computed in an iterative way. Let t denote the t-th iteration; then:

xadv = x̃t+1 = (1 − Ac) ⊙ x + Ac ⊙ (x̃t + ε · sign(gt+1))    (3)

gt+1 = µ · gt + ∇xL(x̃t, y) / ‖∇xL(x̃t, y)‖₁    (4)

L(x̃t, y) = Σ_{i=1}^{n} ρi · ℓ(x̃t, y, fi)    (5)

where ℓ(x̃t, y, fi) = 1 − fi(x̃t, y) and Σ_{i=1}^{n} ρi = 1. For the targeted attack (impersonation, in the face recognition case), given the target identity ŷ, ℓ can simply be replaced by fi(x̃t, ŷ).
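The following PyTorch-style sketch shows one way Eqs. (3)-(5) could be realised; the surrogate `models`, assumed to return identity confidences, are placeholders, and this is an illustrative reading of the update rather than the released code.

```python
import torch

def mi_fgsm_step(x, x_tilde, g, A_c, y, models, weights, eps, mu=1.0):
    """One iteration of the weighted-ensemble MI-FGSM patch update (Eqs. (3)-(5))."""
    x_tilde = x_tilde.detach().requires_grad_(True)
    # Eq. (5): ensemble loss, with ell = 1 - f_i(x, y) for dodging.
    loss = sum(w * (1.0 - f(x_tilde, y)) for w, f in zip(weights, models))
    grad = torch.autograd.grad(loss, x_tilde)[0]
    # Eq. (4): momentum accumulation with an L1-normalised gradient.
    g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)
    # Eq. (3): signed step, re-composited through the mask.
    x_adv = (1 - A_c) * x + A_c * (x_tilde + eps * g.sign())
    return x_adv.detach(), g
```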
Our attack goal is to simultaneously optimize both the patch position and the perturbation to generate transferable adversarial patches that attack the target model effectively. Therefore, the mask Ac, the attack step size ε in Eq. (3) and the weights ρi in Eq. (5) are set as the learned parameters. To make the parameters more suitable for the target model, we adjust them dynamically through a small number of queries to the target model. The details of solving these parameters are given in Sec. 3.2.
3.2 ATTACKS BASED ON RL
3.2.1 FORMULATION OVERVIEW USING RL
The generation of the transferable adversarial patch on the surrogate model is guided by the information returned by querying the target model, and this process can be represented as the learning of the agent through the reward signal obtained by interacting with the environment in the reinforcement learning (Li, 2017). So an agent is constructed to learn the selection policy of the attack parameters.
In our method, the parameter values are defined as the actions generated by the agent under the guidance of the policy π, and at denotes the t-th action (i.e., the value of the t-th parameter). The image feature fed to the agent is defined as the state s, and the environment is the threat model F(·). The policy function πθ(a|s) with parameters θ is the rule used by the agent to decide which action to take, and can be formulated as a probability distribution over actions a in state s.
The reward reflects the performance of the currently generated adversarial patch on the target model, and the training goal of the agent is to learn good policies that maximize the reward signal. In face recognition, the goal of the dodging attack is to generate images that are as far away as possible from the ground-truth identity, while impersonation attacks aim to generate images that are as similar as possible to the target identity. Thus, the reward function R is formalized as:

R = ℓ(xadv, y, F) = 1 − F(xadv, y),   if dodging
R = ℓ(xadv, ŷ, F) = F(xadv, ŷ),       if impersonation    (6)
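Eq. (6) translates directly into code; in the hedged sketch below, `F` is assumed to return the target model's confidence for a given identity.

```python
def reward(F, x_adv, true_id, target_id=None):
    """Eq. (6): dodging pushes confidence in the true identity down;
    impersonation pushes confidence in the target identity up."""
    if target_id is None:                 # dodging
        return 1.0 - F(x_adv, true_id)
    return F(x_adv, target_id)            # impersonation
```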
In iterative training, the agent first predicts a set of parameters according to the policy π; an adversarial patch is then generated based on the predicted parameters, and finally the generated adversarial face image is fed to the threat model to obtain the reward value. In this process, the policy gradient algorithm (Sutton et al., 1999) is used to guide the agent's policy updates. After several rounds of training, the agent generates actions that perform well on the threat model with high probability.
3.2.2 DESIGN OF THE AGENT
The agent needs to learn the policies for the position, the weights and the attack step size. Considering the simultaneous solution of the position and the other parameters, the agent uses a U-Net-based structure (Ronneberger et al., 2015), which outputs a feature map with the same height and width as the image fed into the network. Let the number of surrogate models be n; we design the agent to output a feature map with n channels and the same height h and width w as the input image (i.e., its size is n × h × w). In each channel Mi (i = 1, . . . , n) of the feature map M, the relative value of each pixel represents the importance of that position for the surrogate model fi, and the overall magnitude of the channel reflects the importance of the corresponding surrogate model. We believe the patch requires different attack strengths at different locations, so at the top of the agent network a fully connected layer maps the feature map M to a vector V representing the different attack step values. The structural details of the agent are shown in Figure 2.
Specifically, for the position action, the optional range of positions is discrete, so the position policy π¹θ is designed to follow a Categorical distribution (Murphy, 2012). Given the probability Pposition of each candidate position, the position parameters (cx, cy) ∼ Cat(Pposition), and Pposition is computed as:

Pposition = (1/n) Σ_{i=1}^{n} softmax(Mi)    (7)
For the weight actions, the weight ratio of the loss on each surrogate model fi to the ensemble loss is a continuous value, and we set the weight policy π²θ to follow a Gaussian distribution (Murphy, 2012). So the i-th weight parameter ρi ∼ N(µfi, σ²) (i = 1, . . . , n), and µfi is calculated as:

µfi = softmax(M̄1, M̄2, . . . , M̄n)i    (8)

where M̄i refers to the mean value of the i-th channel of the feature map, and σ is a hyperparameter. In actual sampling, we use a clipping operation to make the weights sum to 1.
For the attack step action, we set 20 candidate values in the range 0.01 to 0.2 at intervals of 0.01, and adopt a Categorical distribution (Murphy, 2012) as the step size policy π³θ due to the discreteness of the values. So the step size parameter ε ∼ Cat(pstep), and the probability pstep of each candidate value is:

pstep = softmax(FC(Pposition))    (9)

By sampling from the corresponding distributions, we obtain (cx, cy), ρi (i = 1, . . . , n) and ε.
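Putting Eqs. (7)-(9) together, action sampling from the agent's feature map could look as follows; the fully connected head `fc_head` (mapping h·w inputs to 20 logits) and the tensor shapes are assumptions, while the 20-value step grid and the clip-and-renormalise step follow the text.

```python
import torch
import torch.nn.functional as F_nn
from torch.distributions import Categorical, Normal

def sample_actions(M, fc_head, sigma=0.01):
    """M: (n, h, w) agent feature map. Returns position, weights, step size."""
    n, h, w = M.shape
    # Eq. (7): average the per-channel softmaxes over all positions.
    P_pos = F_nn.softmax(M.view(n, -1), dim=1).mean(dim=0)
    pos = Categorical(probs=P_pos).sample()
    cy, cx = divmod(int(pos), w)
    # Eq. (8): Gaussian weights centred on softmaxed channel means.
    mu = F_nn.softmax(M.view(n, -1).mean(dim=1), dim=0)
    rho = Normal(mu, sigma).sample().clamp(min=0)
    rho = rho / rho.sum()                       # clip/renormalise to sum to 1
    # Eq. (9): categorical step size over 20 candidate values.
    steps = torch.arange(0.01, 0.21, 0.01)
    p_step = F_nn.softmax(fc_head(P_pos), dim=0)
    eps = steps[Categorical(probs=p_step).sample()]
    return (cy, cx), rho, eps
```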
3.2.3 POLICY UPDATE
In training the agent, the goal is for the agent hθ to learn a good policy πθ with parameters θ that maximizes the expectation of the reward R. Assume that there are T attack parameters to be solved, and let τ = (s, a1, a2, . . . , aT) be a decision result; then the optimal policy parameters θ∗ can be formulated as:

θ∗ = argmax_θ J(θ) = argmax_θ E_{τ∼πθ(a|s)}[R(τ)]    (10)

We use the policy gradient method (Sutton et al., 1999) to solve for θ∗ by gradient ascent, and follow the REINFORCE algorithm (Williams, 1992), using the average over N samples from the policy distribution to approximate the policy gradient ∇θJ(θ):

∇θJ(θ) = E_{τ∼πθ(a|s)} [ Σ_{t=1}^{T} ∇θ log πθ(at|s) R(τ) ] ≈ (1/N) Σ_{n=1}^{N} Σ_{t=1}^{T} ∇θ log πθ(at|s) Rn    (11)

where Rn is the reward of the n-th sample. When updating the policy parameters θ, the reward R can be regarded as the step size: the larger the reward, the larger the update step, and if the reward is negative, the update moves in the opposite direction. In this way, the agent learns good policy functions as θ is updated in the direction of increasing reward.

Algorithm 1 Simultaneous optimization of the position and perturbations of the adversarial patch
Input: face image x, target ŷ, agent hθ, threat model F(·), n surrogate models fi (i = 1, . . . , n), MI-FGSM iterations Z, sample times N, patch size sh, sw, learning rate α, training epochs K.
Output: xadv
1: for k = 1 to K do
2:   feature map M ← hθ; policies π¹θ, π²θ, π³θ ← according to Eq. (7), Eq. (8), Eq. (9)
3:   for i = 1 to N do
4:     actions (cx, cy, ρ1, ρ2, . . . , ρn, ε) ← sampling from π¹θ, π²θ, π³θ; A ← (cx, cy) and sh, sw
5:     for t = 1 to Z do
6:       gt ← according to Eq. (5) and Eq. (4); x̃t ← according to Eq. (3)
7:     end for
8:     update Ri ← according to Eq. (6)
9:   end for
10:  ∇θJ(θ) ← according to Eq. (11); θ ← θ + α · ∇θJ(θ)
11:  if F(xadv) = ŷ then break
12: end for
13: return xadv
For actions that follow a Categorical policy (i.e., the position parameter and the attack step size), given the probability vector p (i.e., Pposition and pstep in Eq. (7) and Eq. (9)), let p(a) denote the probability of action a in p; then for π¹θ and π³θ, ∇θ log πθ(a|s) in Eq. (11) can be calculated as:

∇θ log πθ(a|s) = d log p(a) / dθ    (12)
For actions that follow the Gaussian policy (i.e., the weights of the surrogate models), the mean µf of the Gaussian distribution is calculated from the output of the agent, so µf can be expressed as hθ(s) = µf. Therefore, for π²θ following the Gaussian policy, ∇θ log πθ(a|s) in Eq. (11) can be calculated as follows:

∇θ log πθ(a|s) = ∇θ [ −(a − hθ(s))² / (2σ²) − log(σ) − log(√(2π)) ] = ((a − hθ(s)) / σ²) · dhθ(s)/dθ    (13)
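In an autodiff framework, Eqs. (12)-(13) need not be derived by hand: the log-probabilities of the sampled actions can be scored directly and backpropagated, as in the hedged sketch below, which is a generic REINFORCE step under the assumption that actions are stored as tensors, not the authors' exact training loop.

```python
import torch
from torch.distributions import Categorical, Normal

def reinforce_loss(P_pos, mu, sigma, p_step, actions, reward):
    """Negative of R * sum_t log pi(a_t | s); minimising it ascends Eq. (11)."""
    pos, rho, step_idx = actions
    logp = (Categorical(probs=P_pos).log_prob(pos)        # Eq. (12), position
            + Normal(mu, sigma).log_prob(rho).sum()       # Eq. (13), weights
            + Categorical(probs=p_step).log_prob(step_idx))
    return -reward * logp   # averaged over the N samples in practice

# loss.backward(); optimizer.step() then realises theta <- theta + alpha * grad J(theta).
```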
3.3 OVERALL FRAMEWORK
The complete process of our method, which simultaneously optimizes the position and perturbation, is given in Algorithm 1. In each of the K training iterations of the agent, the policy functions are first calculated from the output of the agent, and N samples are then drawn from the policy distributions to generate N sets of parameters. For each set of parameters, the attack is conducted on the surrogate models, the generated transferable adversarial example is used to query the target model to obtain the reward, and the policy is then updated. If a successful attack is achieved during this process, the iteration stops early.
Furthermore, our simultaneous optimization method can also be combined with other existing black-box attack methods, such as gradient estimation, to further enhance the attack performance. The detailed combination process and experimental results are given in Appendix B.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETTINGS
Target models: We choose five face recognition models as threat models, including four representative open-source face recognition models (i.e., FaceNet (Schroff et al., 2015), CosFace50 (Wang et al., 2018), ArcFace34 and ArcFace50 (Deng et al., 2019)) and one commercial face recognition API¹. During the attack, the four open-source models are candidates for surrogate models, and each model and their ensemble version that excludes the target model will be used as surrogate models.
¹ https://intl.cloud.tencent.com/product/facerecognition
Face database: We randomly select 5,752 different people from Labeled Faces in the Wild (LFW)² and CelebFaces Attributes (CelebA)³ to construct the face database. We use the above models to extract face features, and then calculate the cosine similarity with all identities in the face database to perform 1-to-N identification.
Metrics: Two metrics, the attack success rate (ASR) and the number of queries (NQ), are used to evaluate the performance of attacks. ASR refers to the proportion of test face images that are successfully attacked, where we ensure that the clean test images selected in the experiment are correctly identified. NQ refers to the number of queries to the target model required for the adversarial patch to achieve a successful attack.
Implementation: The size of the adversarial patch is set as sh = 25, sw = 30, the number of samples N in the policy gradient method is set to 5, and the variance σ of the Gaussian policy is equal to 0.01. Other parameters are set as Z = 150, α = 0.001, and K = 50.
4.2 EXPERIMENTAL RESULTS
4.2.1 PERFORMANCE OF SIMULTANEOUS OPTIMIZATION
We first evaluate our simultaneous optimization method qualitatively and quantitatively according to the setting in Section 4.1. Table 1 shows the quantitative results of dodging and impersonation attacks under the black-box setting against the five target models. For surrogate models, we explore the relationships between individual models and the model ensemble using single models and their ensemble version excluding target models, respectively. Figure 3 shows some visual examples of the position and pattern at different stages of the attack. More visual results are shown in Appendix C.
From the results in Table 1, we can see that: (1) The proposed method achieves good attack success rates and query efficiency under both kinds of attacks: the dodging attack achieves the highest success rate of 96.65% within 11 queries, while the impersonation attack achieves the highest success rate of 72.83% within 27 queries. (2) Attacking with an ensemble of models performs better than using a single surrogate model, which shows that the joint optimization can adaptively adjust the weights of the different surrogate models to achieve the best performance. (3) Regarding the relationship between the surrogate model and the target model, when the two are structurally similar, the surrogate is more likely to help the attack. For example, CosFace50 has a greater influence on ArcFace50, because the backbones of ArcFace50 and CosFace50 are both ResNet50-IR (Deng et al., 2019), whereas the backbones of FaceNet and ArcFace34 are Inception-ResNet-v1 (Szegedy et al., 2017) and ResNet34-IR (Deng et al., 2019), respectively. Regarding the influence of the training loss, FaceNet uses a Euclidean metric (Schroff et al., 2015) rather than a cosine-space metric, so as a stand-alone surrogate it is less helpful against ArcFace and CosFace, which use angular loss metrics: its dodging ASR is only 16.89% and 11.47%, respectively. Therefore, in ensemble attacks we can dynamically adjust the importance of the different models by optimizing their weights. (4) On the commercial API, performance drops slightly compared to the open-source models but remains at an acceptable level, because commercial FR services introduce defense measures such as image compression.
² http://vis-www.cs.umass.edu/lfw/  ³ http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
Figure 4: The weights of each surrogate model in the ensemble attack using simultaneous optimization.
Figure 5: ASR and NQ (shown in brackets) when using Position (P), Position and weights (P+W), and Position, weights and step size (P+W+S), respectively.
4.2.2 COMPARISONS WITH SOTA METHODS
To demonstrate its superiority, we compare our method with three baselines: relying solely on transferability, fixing the perturbation and changing only the position, and randomly initializing the position and optimizing only the perturbation. Specifically, for the method relying solely on transferability, we use TI-MI-FGSM (Dong et al., 2019b) to generate adversarial examples on the surrogate model and then directly transfer them to the target model to test their performance. For the position-only method, we adopt the recent RHDE (Guo et al., 2021) method, which fixes the content of the patch and searches for a good position on the face. For the perturbation-only method, we first randomly initialize the position and then use ZO-AdaMM (Chen et al., 2019) to obtain the adversarial perturbation. The results on three target models are shown in Table 2.
From the results, we can see that: (1) Relying solely on transferability is relatively simple, but its average attack success rate of 12.09% is not satisfactory. (2) The average success rates of RHDE and ZO-AdaMM are 38.97% and 46.85%, respectively, while our method achieves 65.61%, which proves that, compared with methods that optimize only the position or only the perturbation, our joint optimization combines these two attack modes more effectively. (3) Our method achieves the best query efficiency among the compared methods, requiring only a few dozen queries.
4.2.3 ABLATION STUDY
In order to verify the effectiveness of each attack parameter used in our method, we also report the results when each component is added separately. First, we fix the weights to equal values and the step size to 0.1 to test the performance when only the position parameter is changed. Then, we add the weight parameters to learn the importance of each surrogate model. Finally, we add the step size parameter to carry out the overall learning. The results in Figure 5 show that learning only the position parameter already achieves an average success rate of 69.80%. The performance is greatly improved after adding the weights, and further improved after adding the step size. All parameters contribute to the overall attack effect, and the weight parameters have a greater impact on the results than the step size. As components are added, the number of queries (shown in brackets) remains at roughly the same level.
Table 3: Success rate of dodging (D) and impersonation (I) attacks in the physical environment.
Table 4: Success rate in the physical world when changing distance, lighting and face postures, including frontal (0◦), yaw rotation of ±25◦ and ±45◦, and pitch rotation of ±15◦.
We also collect statistics on the weights of each model under simultaneous optimization by taking all four models as surrogate models; Figure 4 shows the results on a face image when attacking each of the four threat models. It can be seen that the target's own model receives the highest weight, which indicates that our method can identify surrogate models similar to the target model during learning, and the weights of the other three models are positively correlated with the ASR shown in Table 1.
4.2.4 ATTACKS IN THE PHYSICAL WORLD
In this section, we show the results of our adversarial patch in the physical environment. We first perform simulated successful attacks on different subjects in the digital environment, and then conduct experiments in the physical world; the technical details of the implementation are given in Appendix D. We record a video while the face moves within 5◦ of the current posture, and we take as the attack success rate the proportion of successful frames when the recognition result is checked every 10 frames, at a frame rate of 30 fps, over one minute. Table 3 shows the results for the frontal face. To test the performance under various physical conditions, we further vary the face posture, the distance from the camera and the illumination. For face postures, we consider the frontal face, yaw rotations of ±25◦ (reported as the mean over +25◦ and −25◦) and ±45◦, and pitch rotations of ±15◦; these results are shown in Table 4. Table 3 shows that our method maintains a high physical ASR (100.0% and 83.33%) for both dodging and impersonation attacks on FaceNet, and the results are also good on ArcFace34 and CosFace50. Although there is a slight decline on the commercial API, success ratios of 33.16% and 25.23% of the frames are enough to pose potential risks to commercial applications. In Table 4, when the pose changes within a small range (yaw ±25◦, pitch ±15◦), the ASR remains high; even at the larger deflection of yaw ±45◦, it still maintains an average of 31.13%. The effect of distance is small, and under changed lighting the results remain at an acceptable level. When performing impersonation attacks under different conditions, the face is sometimes recognized as a false identity different from the true identity but not the target identity, which slightly lowers the result. A set of visual results is shown in Figure 6.
5 CONCLUSION
In this paper, we proposed a method that simultaneously optimizes the position and content of the patch to create more transferable adversarial patches. The content is generated by an ensemble-model attack, and the position, model weights and attack step size are treated as parameters. These parameters are dynamically adjusted through a small number of queries to the target model in a reinforcement learning framework, making the patch better suited to the target model. Extensive experiments demonstrated that our method effectively improves the attack success rate and the query efficiency, and experiments on a commercial face recognition API and in physical environments confirmed the practical application value of our method. Besides, the proposed method can also be adapted to other applications, such as autonomous driving.
A OPTIONAL AREA OF THE PASTING POSITIONS
In this section, we give more details about changing the pasting position of the adversarial patch.
It is worth noting that, in order not to interfere with the liveness detection module and to maintain the concealment of the attack, the area that does not cover the facial features (e.g., cheeks and forehead) is regarded as the optional pasting area. In practical applications, a liveness detection module is often used in combination with face recognition to confirm the real physiological characteristics of the subject and to exclude attempts to replace real faces with photos, masks, etc. (Ming et al., 2020). It is mainly based on the depth or texture characteristics of the facial skin or on movements of the subject (such as blinking and opening the mouth), so the patch cannot be pasted on areas that cover the facial features (such as the eyes and mouth).
Specifically, we first use the dlib library to extract 81 facial landmarks and determine the effective pasting region. Figure 7 shows some examples of the effective pasting area of a face. After calculating the probability Pposition of each position through the output of the agent (see Eq. (7) in the paper), we set the probabilities of the invalid positions to 0 and then sample the pasting position.
B INTEGRATION WITH GRADIENT ESTIMATION
Our simultaneous optimization method can also be combined with the gradient estimation (e.g., Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017), natural evolution strategy (Ilyas et al., 2018), random gradient estimation (Tu et al., 2019)). It can provide a good initial position and pattern for gradient estimation to further enhance the attack performance. Here we take Zeroth-Order (ZO) optimization (Chen et al., 2019; 2017) as an example to describe the calculation method and the experiment.
B.1 TECHNICAL DETAILS
In the Zeroth-Order (ZO) optimization, to expand the optimization range, x is often replaced with (1 + tanh φ)/2 (Chen et al., 2017). Since the symmetric difference quotient (Lax & Terrell, 2014) is used to add a small offset at φ in the gradient estimation process, the gradient of x at φ (i.e., ∇φx) cannot be too small. Therefore, ∇φx is calculated as follows:

∇φx = (1/2) (1 − tanh²(φ))    (14)
Therefore, when combining the two methods, in addition to considering the attack target, it is also necessary to ensure that the gradient of the generated perturbation in φ-space meets the above requirement. So the loss function L(·, ·) in Eq. (5) is modified as:

L(x̃t, y) = Σ_{i=1}^{n} ρi · ℓ(x̃t, y, fi) + (β / (sh · sw)) Σ_{i=1}^{sh} Σ_{j=1}^{sw} ∇φij x    (15)

where sh and sw denote the height and width of the perturbation patch pasted onto the face, and β is a scale factor. Given the pasting coordinates (cx, cy), φij = arctanh(2 · x̃t(cy+i, cx+j) − 1).
Through the simultaneous optimization, we obtain an adversarial patch with good position and perturbation. Next, we regard it as the initialization of the gradient estimation. Specifically, we fix the position and use the gradient estimation method to only refine the perturbation by querying the threat models. After several iterations, the transferability of the adversarial patch is further improved.
B.2 EXPERIMENTAL RESULTS
The results in Table 5 show the performance of the integration with gradient estimation. It can be seen that after the combination of gradient estimation, the results have been improved to some extent compared with Table 1, but the consumption of queries has increased accordingly. For example, the success rate of impersonation attacks on CosFace50 increases from 40.08% to 71.28%, but the number of queries increases from 77 to 1575. Therefore, in practical applications, it is necessary to weigh the importance of success rate and query efficiency to determine whether to combine the gradient estimation method.
In order to confirm that the improvement in ASR is due to the good initial value provided by our simultaneous optimization method for the gradient estimation, rather than to the gradient estimation method itself, a comparison is made in Table 6. The results show that simultaneous optimization (SO) achieves a good ASR with only a few queries (fewer than 40) for both dodging and impersonation. When combined with Zeroth-Order (ZO) optimization, the ASR is further improved (from 72.83% to 87.37% for impersonation) but introduces more query consumption. In contrast, Random+ZO uses many more queries (1972 and 2874, respectively) yet achieves much lower ASR (65.79% and 50.52%, respectively). This contrast proves that our SO method indeed provides a good initialization for the gradient estimation.
C MORE VISUAL RESULTS
In this section, we show more visual results of the simultaneous optimization (SO) attacks. Figure 8 shows four groups of visual results. For each group of three images, the first one represents the clean image, the second one represents the image after the attack, and the third one represents the image corresponding to the wrong identity in the face database.
The above results are obtained while ensuring that the patch does not cover the facial features. We also conduct experiments in which patches are allowed to cover facial features within a small range (no more than 20 pixels); the results are shown in Figure 9. Interestingly, we find that the generated pattern resembles the facial feature at the corresponding position, but its shape has been changed. For example, in the last group of pictures in Figure 9, the adversarial patch is pasted on the left eyebrow, and the generated pattern resembles an eyebrow whose shape has been altered, which may mislead the network's extraction of eyebrow features. This inspires us to use information from facial features (e.g., eyes, eyebrows, nose and mouth) to mislead the feature extraction process of face recognition and thereby achieve better attack performance.
D IMPLEMENTATION DETAILS OF THE PHYSICAL ATTACK
To make the digital simulation results better adapt to the physical environment, we improve the smoothness of the pattern. Specifically, during each iteration of pattern generation, we first obtain a pattern half the size of the original patch by downscaling or average pooling, and then enlarge the image back to the original size by bilinear interpolation. The reduced pattern retains the key information of the pattern, and after this rescaling the smoothness of the image is improved, avoiding the problem that a per-pixel pattern is sensitive to the exact position when transferred to the physical environment: even if the pasting position is off by a few pixels, the attack effect is preserved. We also tried using a Total Variation (TV) loss (Sharif et al., 2019) to enhance smoothness, but in practice it worked less well than the rescaling process. When printing, we use photo paper rather than ordinary paper as the patch material to reproduce the colors of the digital simulation as realistically as possible. | 1. What is the main contribution of the paper in terms of attacking face recognition models?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the novelty and limitations of the paper's contributions?
4. Are there any concerns regarding the paper's claims and comparisons with other works?
5. What are the suggestions for improving the paper's content and experimental results? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes an RL-based optimization of patch position and perturbation for attacking face recognition models. With the RL strategy, the attack achieves an improved success rate as well as better transferability to black-box models.
Review
Pros:
The idea of optimizing location is somewhat new.
As seen from the results, the attack success rates of the proposed algorithm are better, as is the transferability to black-box models.
Cons:
The writing is not clear. As for the mask generation part, the description of learning the location is blurred. As stated, "In each channel Mi (i = 1, ..., n) of the feature map M, the relative value of each pixel point represents the importance of each position", literally this is not learning to "locate"; it is rather generating a mask region. Besides, the patch region should be a square-shaped region, so how does this algorithm constrain its shape? I'd suggest taking a look at STN [1], a real learning-to-locate paper.
Some patch-based attacks on face recognition algorithms are missing [2], which makes it hard for me to evaluate the contribution.
The contributions are limited. What is the difference between directly generating the adversarial patch and optimizing a mask together with its adversarial perturbation? Putting the RL module aside, I do not see any novel module or loss constraint that boosts the performance.
Patch adversarial attacks mostly target real-world systems. I'd suggest that more experiments be executed, including comparisons with SOTA methods on real-world, real-time face detection models.
It is not correct to claim that "They belong to white-box attack against image classification task, while our method is a black-box attack versus face recognition task." The algorithm is not a black-box attack, as it requires computing gradients using MI-FGSM.
[1] Spatial Transformer Networks. https://arxiv.org/pdf/1506.02025.pdf
[2] Improving Transferability of Adversarial Patches on Face Recognition with Generative Models. https://openaccess.thecvf.com/content/CVPR2021/papers/Xiao_Improving_Transferability_of_Adversarial_Patches_on_Face_Recognition_With_Generative_CVPR_2021_paper.pdf |
ICLR | Title
Mapping Language Models to Grounded Conceptual Spaces
Abstract
A fundamental criticism of text-only language models (LMs) is their lack of grounding—that is, the ability to tie a word for which they have learned a representation to its referent in the non-linguistic world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual structure of language, as demonstrated by their ability to answer questions, generate fluent text, or make inferences about entities, objects, and properties that they have never physically observed. In this work we investigate the extent to which the rich conceptual structure that LMs learn indeed reflects the conceptual structure of the non-linguistic world—which is something that LMs have never observed. We do this by testing whether the LMs can learn to map an entire conceptual domain (e.g., direction or colour) onto a grounded world representation given only a small number of examples. For example, we show a model what the word “left” means using a textual depiction of a grid world, and assess how well it can generalise to related concepts, for example, the word “right”, in a similar grid world. We investigate a range of generative language models of varying sizes (including GPT-2 and GPT-3), and see that although the smaller models struggle to perform this mapping, the largest model can not only learn to ground the concepts that it is explicitly taught, but appears to generalise to several instances of unseen concepts as well. Our results suggest an alternative means of building grounded language models: rather than learning grounded representations “from scratch”, it is possible that large text-only models learn a sufficiently rich conceptual structure that could allow them to be grounded in a data-efficient way.
1 INTRODUCTION
Large pre-trained language models (LMs) trained on text corpora have shown remarkable progress on a range of natural language understanding tasks (Radford et al., 2019; Brown et al., 2020). Such models have demonstrated their ability to generate fluent dialogue (Brown et al., 2020), make commonsense inferences (Zellers et al., 2019), and reconstruct taxonomies and word relations (Chen et al., 2020). However, it has been argued that true meaning cannot be learned from the form of language alone (i.e., from text) because it is a word’s use in the non-linguistic world that imparts it meaning (Bender & Koller, 2020; Bisk et al., 2020). For example, although LMs might learn from textual co-occurrences that the words north and south are opposites, the grounded meaning of these words, i.e., the direction that you should travel if you are told to go north, is something to which these models, by definition, do not have access during training.
While it is indisputable that text-only models do not learn representations of concepts that are grounded in the non-text world, it is possible for the structure of relations between concepts in text form to be identical to what a grounded model would learn. In principle, it is therefore possible for a text-only model’s conceptual space to be isomorphic to the “true” (i.e., grounded) conceptual space (Merrill et al., 2021). In this work, we investigate whether this is the case by asking whether models can learn to ground an entire domain (e.g., direction) after grounding only a subset of the points in that domain (e.g., left). Specifically, for generative LMs that have been trained only on large text corpora, we “orient” the models by showing them how some word forms (that they have learned during training) are used in simple text worlds—for example, what the direction north maps
to in a textual representation of a grid world (see Figure 1). We then evaluate two types of generalisation. First (§3.1), we evaluate generalisation to unseen worlds. For example, if the model has seen several realisations of the word north in different grid worlds, can it correctly identify north in an unseen world (e.g., one of a different size or shape)? Second (§3.2), we evaluate generalisation to unseen but related concepts. For example, if the model has been shown grounded representations of north and east, can it correctly identify south and west, even though it was never shown them? We find that although the small language models (GPT-2 models that contain on the order of 100M parameters) cannot perform either generalisation well, the largest model (a GPT-3 model containing 175B parameters) can indeed learn groundings in the conceptual worlds we build. We analyse the predictions and errors made by models (§3.3) and find that the errors made by the small models are often due to a failure to recognize the domain, and thus generating random words by default. In contrast, the errors made by the largest model are often intuitive, e.g., predicting in-domain concepts that are reasonable substitutes for the target (e.g., maroon versus dark red).
2 EXPERIMENTAL DESIGN
2.1 MODELS
We test five autoregressive Transformer language models (Vaswani et al., 2017) of varying size, specifically the GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) models. Our smallest model contains 124M parameters, and the others follow increasing model sizes (355M, 774M, 1.5B and 175B parameters). All models are pre-trained on differently filtered versions of the OpenAI WebText dataset (Radford et al., 2019), composed of 40GB of English web text available on the internet. We generate up to 5 tokens per prompt and, to improve the robustness of our analyses, generate 3 samples per prompt. We use a temperature of 1 during generation and sample from the softmax probabilities produced at each time step using nucleus sampling (Holtzman et al., 2019) with p=0.85. We include more detail on the models and their training data in Appendix A.
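As an illustration, the decoding settings above can be reproduced for the GPT-2 models with the Hugging Face Transformers library; this is a sketch of the setup under those stated settings, not the authors' code.

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")    # the 124M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2")

def sample_answers(prompt, n_samples=3, n_tokens=5):
    # Temperature 1, nucleus sampling with p=0.85, up to 5 new tokens,
    # and 3 samples per prompt, matching the settings described above.
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, do_sample=True, top_p=0.85, temperature=1.0,
                         max_new_tokens=n_tokens, num_return_sequences=n_samples,
                         pad_token_id=tok.eos_token_id)
    new_tokens = out[:, inputs["input_ids"].shape[1]:]
    return [tok.decode(t).strip() for t in new_tokens]
```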
2.2 IN-CONTEXT LEARNING
Several studies (Brown et al., 2020; Reynolds & McDonell, 2021) have shown that instead of fine-tuning generative LMs–i.e., a process that updates parameters learned by the model during pretraining–it is possible to achieve competitive performance by giving the model a small number of training examples within the prompt. This is often referred to as “in-context learning” or “few-shot prompting”. Specifically, a prompt includes n task examples, each consisting of a question prefix (e.g., “World:”) followed by the question, and an answer prefix (e.g., “Answer:”) followed by the answer to the question. After giving the model n examples in this manner, the prompt ends with a new question and only an answer prefix, after which the model is expected to generate an answer to the last question, following the prompt format it has seen. By enumerating over all questions in the test set, we can obtain a model-generated answer for every test set question that we wish to evaluate. There are no gradient updates to any model parameters using this approach.
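A minimal sketch of this prompt construction is below; the exact prefixes per domain are an assumption based on Figure 1.

```python
def build_prompt(examples, query, q_prefix="World:", a_prefix="Answer:"):
    # Each in-context example is a (question, answer) pair; the prompt
    # ends with the test question and a bare answer prefix to be completed.
    lines = []
    for question, answer in examples:
        lines += [f"{q_prefix} {question}", f"{a_prefix} {answer}"]
    lines += [f"{q_prefix} {query}", a_prefix]
    return "\n".join(lines)
```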
2.3 GROUNDED CONCEPT DOMAINS
The models that we test, by construction, can receive only text inputs. Thus, we focus on a set of grounded domains for which it is possible to faithfully represent the grounded meaning in text form. We briefly describe these domains below, summarise them in Figure 1, and describe in detail the prompt creation process for each generalisation task in Sections §3.1 and §3.2. We discuss the possibility of expanding this set of concepts in future work in Section §4.
Spatial Terms We consider 6 spatial concepts: left, right, up, down, top, bottom. Each of the above concepts can be represented in a grid world using the position of a special character (here, a ‘1’) in the world. To do this, we create grid world environments of varying sizes (where the number of rows and columns ranges from 1 to 8), where each world consists of ‘0’s and a single ‘1’.
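For concreteness, such a world can be generated and rendered as text as follows; the bracketed-row rendering follows the World: [1. 0. 0. 0.] format used in Appendix B.2, and the exact string format is otherwise an assumption.

```python
import numpy as np

def make_world(rows, cols, r, c):
    # All zeros except a single 1 whose position encodes the concept.
    w = np.zeros((rows, cols))
    w[r, c] = 1.0
    return "\n".join("[" + " ".join(f"{v:.0f}." for v in row) + "]" for row in w)
```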
Cardinal Directions We consider eight cardinal directions: north, south, east, west, northeast, northwest, southeast, southwest. These are similar to spatial terms, except that they include compositional terms (e.g., northeast). We use grid-worlds of the same format as spatial terms to represent these concepts.
Colour Terms We consider colour terms in a three-dimensional space, using a dataset of 367 RGB colours (Abdou et al., 2021) that contains colour names (e.g., red, cyan, forest green) each associated with an RGB code (e.g., (255, 0, 0)). Therefore, the world representation in this case is not a grid-world, but an RGB code associated with every colour name. Figure 1 shows example RGB codes and colours that serve as part of a prompt given to the models we test.
2.4 ROTATED WORLDS TO CONTROL FOR MEMORISATION
Motivation The GPT-x models that we use have been trained on the CommonCrawl corpus (Radford et al. (2019)), a collection of documents that contains web text freely available on the internet. Since we provide instantiations of grounded concepts in text form, it is very plausible that the domains described above have been encountered verbatim during training. For example, for spatial terms such as left, a model might have seen instances of matrices and linear algebra terms with the word left in close proximity; for colour terms, tables that map RGB colour codes to colour names are pervasive in web-text. We therefore include a control task in our experimental setup such that the model cannot succeed using simple memorisation. Rather, success on a task requires the model to truly perform a conceptual mapping between the ungrounded and grounded representations.
Isomorphism as a Control We use the concept of isomorphism to control for memorisation. Intuitively, imagine a situation where you are lost in the woods. Once pointed in the direction of north, you instantly know which way is south. However, this ability is not dependent on having been correctly pointed north–if someone were to incorrectly point east and tell you this was north, you would readily infer west to be south. This is because your reasoning depends on your knowledge of the relation between north and south, and between north and the world, rather than on having memorized the “true” grounding of each concept independently.
By the same logic, if a model is learning a grounding function, this should be dependent on the world in which it is being grounded. For example, in different worlds where the word red grounds to different points in space, the word blue, by analogy, shares a fixed conceptual relation to the word red. Therefore, it should ground in correspondingly equidistant ways in the two worlds. A model’s ability to learn two different grounding functions f vs. g, should not be dependent on what the actual points ground to, as long as the structural relations between concepts in the space are preserved. Further, this should hold for all such isomorphic transformations that preserve the structure of the space, and importantly, it should not hold for random perturbations that distort the structure of the space, since such distortions would break the assumption that the relation between red and blue in ungrounded space is analogous to that in grounded space.
Implementation In the colour domain, since colour concepts exist in a 3D world of RGB codes, we rotate each point around a fixed axis by a certain degree to create a new isomorphic world. We repeat this control three times (for 90◦, 180◦ and 270◦ rotations) and average over the rotations in our evaluations. For cardinal directions, we rotate each cardinal concept by 90◦ in two dimensions, and do this three times and average over rotations. Since the spatial terms exist as pairs, we simply
swap the groundings of alternate terms (e.g., left and right). For random worlds that do not preserve the structure between word forms, we randomly assign a concept name (e.g., red) to a point in the world, and we do this for all concept names and categories to obtain a random world for each. Figure 2 shows example transformations of colours on rotating by 90◦, as well as randomly rotating points.
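A sketch of both controls for the colour domain is given below; the rotation axis and the use of the cube centre as the pivot are our assumptions, since the text specifies only rotation "around a fixed axis".

```python
import numpy as np

def rotate_world(rgb_codes, theta_deg, centre=(127.5, 127.5, 127.5)):
    # Structure-preserving control: rotate every RGB point by theta_deg
    # about a fixed axis (here, the B axis) through the cube centre.
    t = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(t), -np.sin(t), 0.0],
                    [np.sin(t),  np.cos(t), 0.0],
                    [0.0,        0.0,       1.0]])
    pts = np.asarray(rgb_codes, dtype=float) - centre
    return np.clip(np.round(pts @ rot.T + centre), 0, 255).astype(int)

def random_world(names, rgb_codes, seed=0):
    # Structure-breaking control: randomly permute the name-to-point mapping.
    perm = np.random.default_rng(seed).permutation(len(names))
    return {n: tuple(rgb_codes[i]) for n, i in zip(names, perm)}
```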
2.5 EVALUATIONS
Experimental Logic We report model performance in three settings: the original (“true”) world (e.g., that in which red maps to the actual RGB code for red), an average over three rotated worlds (e.g., worlds in which red maps to some other RGB code, but relations between colors are preserved), and a random world (in which relations between mappings are not preserved). If a model is performing the conceptual mapping in the desired way, we expect that performance should be high in the true world and that there should be no significant degradation in performance when moving to rotated worlds. We also expect that performance should be low in the random world.
Metrics When given a prompt, a generative LM is free to generate any number of tokens until having generated the EOS token that halts further generation. Since classification tasks usually correspond to labels that contain only a few words, the standard approach is to let the model generate an entire sequence of text and to then cut off the generation to the first n tokens (where typically n < 10). From the prompting mechanism, the model should learn to follow the prompt format to generate one label (e.g., instead of every label in succession), and since it does not receive any gradient updates during this learning, there is no incentive for it to predict all related labels within a n-token generation. We see that this is true in practice and show example generations in Appendix E. We set n = 5 and then report the following metrics.
TOP-1 ACCURACY If the ground-truth answer or any substring thereof lies in the generated answer (i.e., the first n tokens of the full generation), the model gets perfect accuracy. If the ground-truth answer does not exist in the generated answer, or exists in the generation outside of the cutoff limit, the model gets a 0 accuracy. For example, for the ground truth answer deep tuscan red, if the model generated answer is tuscan red or red, the model gets a perfect accuracy, but if the model generated answer is deep red or wine or vermilion, the model gets an accuracy of 0. We also compute the same metric using exact match (i.e., the model is only correct if it generates exactly deep tuscan red). We find the values are lower but the trends are the same; see Appendix ??.
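A sketch of this metric is below; it treats whitespace-split words as tokens for simplicity (the models actually produce BPE tokens) and credits any contiguous sub-span of the label's words, per the deep tuscan red example.

```python
def top1_substring_accuracy(generation, ground_truth, n=5):
    # The answer is the first n tokens of the generation; credit is given
    # if the label or any contiguous span of its words appears in it.
    answer = " ".join(generation.split()[:n]).lower()
    words = ground_truth.lower().split()
    spans = {" ".join(words[i:j]) for i in range(len(words))
             for j in range(i + 1, len(words) + 1)}
    return float(any(s in answer for s in spans))
```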
TOP-3 ACCURACY This metric is analogous to Top-1 except that instead of only considering the most probable generation from the model, we collect the second and third most probable answer sequences as well. The model gets a perfect accuracy if the correct answer exists in any of the three generated answers. If the correct answer does not exist in any of the 3 generated answers, or exists in any of the generations outside of the cutoff limit, the model gets an accuracy of 0. Again, see Appendix ?? for results using exact match.
GROUNDING DISTANCE For analysis purposes (§3.3), we wish to assess how far off models are in cases where they are wrong. For this, we need to quantify the distance between the model’s predicted answer and the ground truth answer. For example, in the colour domain, the distance between two points can be computed as the Euclidean distance between two RGB codes. For every answer generated by the model (e.g., the word pink in the colour domain), if the answer is an acceptable grounded term that exists in the world, we can compute its distance to the true grounding (e.g., the colour red). However, if the generated answer was an unrelated word (e.g., the word cat) that does not exist in the same domain, no distance can be computed in the world. Therefore, we calculate a distance metric as the Euclidean distance between two points in space when the generated answer falls in the domain. When the generated answer does not fall in the domain, we set the distance to a number significantly higher than the largest distance between two in-domain concepts. We provide equations and details on calculation of this metric in Appendix C.
Baselines Given that the language models are free to generate any word that exists in their vocabulary as a potential answer, we choose two random baselines over vocabulary words against which to compare model performance. We use R-IV (i.e., random in-vocabulary) to denote a baseline that randomly selects from among all words in the model’s vocabulary. We use R-ID (i.e., random in-domain) to denote a baseline that randomly selects from amongst only the in-domain words (e.g., from colour terms); this is 6, 8 and 367 words respectively for the spatial, cardinal and colour categories. Note that the generative LMs do not have such a domain restriction over words, as they are free to choose any token from the full vocabulary (like R-IV).
3 RESULTS
3.1 GENERALISATION TO UNSEEN WORLDS
Our first investigation looks into how well models generalise known concepts to unseen worlds. For example, if a model has seen a few examples of a concept (such as left) depicted in some grid worlds, can it correctly identify an instance of left in a different grid world (e.g., one with a different size or orientation)? Note that we can only conduct this type of evaluation in the spatial and cardinal domains, since, for the colour domain, there is only ever one world, with one grounding for each concept (e.g., the colour red has exactly one grounding to a 3-digit RGB code in a 3-D RGB space).
Data We create prompts that include 20 examples of grounded concepts in a set of grid worlds. For each domain (e.g., cardinal directions that contain 8 concepts, and spatial terms that contain 3 pairs of 2 concepts), we include a (roughly) equal sample of concepts among these 20 examples. Then, we append a held-out grid world to the end of the prompt and evaluate whether or not the models generate the correct concept label for these unseen worlds. Figure 3 shows example prompts
given to the model and example generations from three different models. We report results averaged over all generations in Table 1. In general, we see the desired trend in which model performance on the original and rotated worlds is well above that on the random world. We do not see consistent or significant performance degradation when moving from the original to the rotated world, suggesting performance is not due to simple memorisation. Comparing across models, we see that the smaller models struggle to even learn concepts that were taught to them in a few-shot manner. For example, for the spatial category, the performance of the smallest model is below that of a baseline which guesses randomly among in-domain words, suggesting the model even fails to learn the general domain of the task (discussed more in Section 3.3). In contrast, the largest model (GPT-3) has a 45% Top-1 accuracy and a 76% Top-3 accuracy for the spatial category.
3.2 GENERALISATION TO UNSEEN CONCEPTS
Our primary interest here is in the model’s ability to map its ungrounded conceptual structure to a grounded world. Specifically, we want to see that the model is able to ground an entire conceptual domain when only taught how to ground a small subset of the points in that domain. To test this, we show models example concepts within a sub-space of the domain (e.g., north, east), while holding out other concepts (e.g., south, west). We then test them on instances of held-out concepts. Figure 6 depicts this setup in the colour domain: we train the model using primarily shades of red, but test on shades of blue.
Data To create sub-spaces for the colour domain, we create prompts that contain 3 primary, 3 secondary colours, and 57 other colours within a sub-space, determined in the following way. For a certain colour (e.g., red) we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour. We create 6 such sub-spaces, centered around each of the primary and secondary colours, and report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. For the cardinal directions, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in an entirely different sub-space (e.g., south, west, southwest). The training split therefore contains worlds annotated with one sub-space, and we test on the remaining held-out concepts. We do this for all sub-spaces and average over them.
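A sketch of this sub-space construction, assuming a dict mapping colour names to RGB tuples:

```python
import numpy as np

def colour_subspace(centre_rgb, colours, radius=150.0):
    # Keep every colour within Euclidean distance `radius` of the centre
    # (a primary or secondary colour), as described above.
    centre = np.asarray(centre_rgb, dtype=float)
    return {name: rgb for name, rgb in colours.items()
            if np.linalg.norm(np.asarray(rgb, dtype=float) - centre) <= radius}
```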
We report results on all concept categories in Table 2. As before, we see that models do not appear to be exploiting simple memorisation (evidenced by similar performance in original vs. rotated worlds) and that only the largest models appear capable of getting reasonable performance on the task. That said, the largest GPT-3 model achieves impressive results given the difficulty of the task. For example, it achieves over 40% Top-1 accuracy in the color domain, which requires generating a label like violet despite having never seen such a label during training (Figure 4).
3.3 ERROR ANALYSIS
Given the results above, we investigate “how wrong” models are when they fail to produce the expected ground-truth label. One clear trend of the large model is the ability to generate “on topic” (i.e., in-domain) responses, regardless of whether or not those responses are correct. For example, if the expected response to an input is “left”, a model which produces right is wrong in a very different way than a model that produces nonsensical outputs such as “[0[0” or function words such as “the” (see Figure 3). We see that the largest 175B parameter model almost always produces in-domain answers. We evaluate this by checking whether the generation lies in the set of related words for that category (e.g., all colours, spatial, or cardinal words, respectively). We see that the smaller models fail to do this. That is, their generations tend to be unrelated words that might have had high prominence (e.g., function words). On evaluating accuracy of generations being “in-domain”,
we see that the smallest model has only a 53% accuracy while the largest has a 98% accuracy of generating in-domain answers.
Second, we evaluate grounded distance (§2.5) to measure the degree of correctness of responses. In the colour domain, the colours dark red and wine are close enough in space that they might be intuitive alternate answers for one another. However, our top-1 and top-3 metrics only assess string matches and do not account for this. Thus, we look at the distance between the colour denoted by the predicted label (when it exists, i.e., when the model generated a legitimate colour name) and the colour the model was asked to label. The lower this distance is, the “less wrong” the model’s prediction is.
Table 3 reports evaluations measured by grounding distance for the colour domain. For every test instance, we compute the distance between the model predicted grounding and the true grounding as defined in Equation C.3. We then average over all computed distances to report one number that tells us how close, on average, model predictions are to the true grounding in the world. We show example visualisations of colours in RGB space that lie within a certain distance threshold of each other. We see, for the largest model, the distances between model-predicted groundings from the true grounding are significantly lower than random. Such a result suggests that the model’s errors are often justifiable and the scores given by our string-matching metrics might be an underestimate of the model’s true performance.
4 DISCUSSION
Our empirical results suggest that very large LMs (specifically, GPT-3), even when trained only on text, learn a conceptual space that can, at least in some settings, be “grounded” using only a small number of training examples. The fact that these models succeed even in isomorphic rotated worlds suggests that these models are not succeeding via naive memorisation. Rather, this suggests that they may be exploiting something about the conceptual structure of the space learned from text in order to map onto a new space that was not explicitly encountered during training.
A major limitation of our approach is that there are some grounded concepts (e.g., visual and sensory inputs) that cannot be easily encoded in text form. By construction, the LMs that we use are restricted to text-only inputs, thus our focus is on domains (e.g., colours and directions) that have a well-defined textual representation. This is only a small set of all the potential grounded concepts we would wish to teach LMs. Although many forms of data can be coerced into text format (e.g., we represent color using discrete digits to represent RGB space), complex concepts may lose fundamental aspects of their meaning when represented in this way. For example, for color, a coarse-grained notion of numeric proximity, derivable from text (Wallace et al., 2019; Naik et al., 2019), may be sufficient to differentiate the concepts we explore, but for more complex visual inputs (e.g., the output of a CNN image encoder), a text-based numeric representation is unlikely to capture the necessary degree of nuance. Future work would need to consider ways of adapting GPT-3-like models to accept non-textual inputs while still exploiting the text-based conceptual structure.
If such limitations were addressed, our results are suggestive of a potentially promising way in which text-only training could support general purpose, grounded models of meaning. Specifically, our results imply that the conceptual space a model learns from text might be nearly isomorphic to
what it would learn from interacting in a grounded world, and that models can be taught to map between those conceptual spaces without requiring explicit grounding for every concept. This is exciting, as domain-general text corpora are readily available, while domain general multimodal corpora–e.g., containing sufficient information on abstract concepts such as emotions (happy) or time (used to)–might be difficult or impossible to collect. If models like GPT-3 could be adapted to receive non-text prompts (as discussed above), our results suggest that the rich conceptual structure such models learn from text could be bootstrapped into powerful grounded models of language.
5 RELATED WORK
There is a significant amount of work that focuses on understanding how LMs represent and reason about concepts, as well as work that directly attempts to build models that take text inputs and ground them to elements in the world. We situate our work within these two bodies of literature: one that investigates how LMs understand linguistic phenomena and word meaning, and another, that attempts to situate language in models of the world. We describe each body of work below.
Meaning and Understanding in LMs With the advent of large LMs of increasing orders of magnitude, there has been speculation on the capabilities of such models, and whether they truly understand the meaning of the words they are learning representations for. Several works that attempt to probe linguistic phenomena in LMs show that the representations learned by such models encode syntactic dependencies and coreference information (Tenney et al., 2019) and word-sense information (Chen et al., 2020). Work that investigates the ability of models to form word associations (Hwang et al., 2020) finds that large LMs can indeed perform such a task, suggesting that pretrained LMs not only recognize that entities are related, but can differentiate how they are related. Especially relevant to our work is recent work that investigates alignment of language models to colours (Abdou et al., 2021) or state changes (Li et al., 2021). Our work is complementary to this prior work, and we specifically ask whether the text space can be reliably mapped onto the grounded space.
Natural Language Grounding There is an increasing amount of work, usually at the intersection of NLP and fields like vision and reinforcement learning, that aims to use natural language to instruct agents about aspects of the world. In the case of vision, this could be to learn correspondences between language descriptions and pixels in an image (Eichenberg et al., 2021; Tsimpoukelli et al., 2021), or in the case of RL, to build agents that understand natural language instructions and take actions that follow the instruction in a world—for example to navigate to a goal (Artzi & Zettlemoyer, 2013; Patel et al.), or to solve tasks in different languages (Ku et al., 2020). Most of these tasks focus on training LMs from scratch, though usually with inputs that contain both textual information and grounded world information. Our work is different in that we take an LM that was previously trained only on text and attempt to teach it a concept in the world without re-training it. With only a few samples of what the concept grounds to, we investigate how well large LMs can use the structure of language and associations between word forms in order to generalise to grounded concepts.
6 CONCLUSION
This work investigates the extent to which large language models, trained only on text can be taught to map previously learned word forms onto conceptual worlds. We investigate several generative language models in colour and direction domains, represented as discrete grid worlds given to the model as text input. With only a few examples of such grounded instances, although smaller models struggle to generalise from text to grounded concepts, we see that the largest model does indeed learn the space of concepts that we test it on. We analyse where models fail and see that the smallest models often produce random, unrelated outputs, but the errors made by the largest model are quite intuitive, for example, predicting colours very close in space to the true grounding. We discuss the limitations of focusing on text-only inputs to teach models grounded concepts, as well as the implications of such work for allowing large language models to be mapped, in a data-efficient way, to the grounded world.
APPENDIX
We provide, as supplementary material, additional information about the data generation, models used, examples of model generations, as well as additional results across all models.
A MODELLING DETAILS
We use a GPT-3 model (Brown et al., 2020) and four GPT-2 models (Radford et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2019). Each of these is a pretrained autoregressive transformer model, trained on the OpenAI WebText corpus, containing around 8 million documents. The top 15 domains by volume in WebText are: Google, Archive, Blogspot, GitHub, NYTimes, Wordpress, Washington Post, Wikia, BBC, The Guardian, eBay, Pastebin, CNN, Yahoo!, and the Huffington Post. Individual model parameters and layers are shown in Table 5. The pretrained models use byte-pair encoding (BPE) tokens (Sennrich et al., 2015) to represent frequent symbol sequences in the text, and this tokenisation is performed on all new input prompts to generate text from the model. We report the hyperparameters used by the pretrained model in Table 6.
[Table 5: parameters and layers of each model]
[Table 6: generation hyperparameters]
B DATA GENERATION
In this section we describe how we create prompt data for the concept categories we wish to evaluate. We describe what the “worlds” look like for each category, as well as detail on the generation of such worlds and division into train and test splits, for colours in Section B.1 and for both spatial and cardinal concepts in Section B.2.
B.1 COLOUR CONCEPTS
We draw from an existing dataset containing RGB codes associated with colour names, for 367 colours. In this concept category, a grounded realisation of a colour in the world is simply its RGB code, i.e., the point that it grounds to in 3-dimensional colour space. Therefore, all grounded examples in prompts follow the format of RGB: (x, y, z) followed by Colour: concept name, where the items in red are replaced with instances of RGB codes and names from the dataset of colours we use (Abdou et al., 2021). This gives us a total of 367 samples of RGB codes paired with colour names. We create training and testing splits for different generalisation evaluations in the following way.
B.1.1 GENERALISATION TO UNSEEN CONCEPTS
Random Split We create prompts to perform this experiment in two ways. The first, as reported in §3.2, creates a “train split”, i.e., the samples within the prompt, by first selecting the 3 primary and 3 secondary colours to always be in the prompt. We then sample 64 other colours from the set of colours to be part of the train split. The prompt therefore contains 70 samples of the question and answer prefixes followed by the RGB codes and colour names. For every sample in the test set, we create a new prompt that appends a question prefix and the RGB code of that sample to the end of the prompt, followed by the answer prefix (with no answer). For each prompt, the model is then required to generate an answer. Figure 6 shows the random sample of colours in the prompt and we report results on these in Appendix Table F.1.
Sub-space Split We then perform another experiment to evaluate a model's ability to learn concepts within a sub-space, and generalise to a whole new sub-space. Here, the prompt contains 3 primary and 3 secondary colours, as well as 57 other colours that we select in the following way. For each of the 6 primary and secondary colours, we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour.1 Since the model has seen the 6 primary and secondary colours, as well as a certain space of the colour world, we wish to evaluate how well the model can generalise to a new sub-space of the world, by using the colour word-form associations. We then report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. We report results on these in the main paper in Table 2.

1We show visualisations of such colour sub-spaces in the main paper in Figure 4.
B.2 SPATIAL AND CARDINAL CONCEPTS
We consider a “world” to be a 2D matrix filled with 0s and containing exactly one 1, where the position of the 1 (i.e., the determining object) refers to the concept of interest (e.g., a spatial location like “left” or a cardinal direction like “north”). We create grid worlds filled with 0s and one 1, with row and column counts ranging from 1 to 8, giving us 672 total grid worlds.2 For each grid world, all grounded examples follow the format of World: [1. 0. 0. 0.] followed by Direction: concept name, where the items in red are replaced with instances of different grid worlds and corresponding concepts, based on the location of the 1. We create training and testing splits for different generalisation evaluations in the following way.
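The enumeration below sketches this world generation; filtering out the ambiguous positions described in the footnote, per concept category, yields the 672 worlds used.

```python
import numpy as np

def all_worlds(max_dim=8):
    # One world per (rows, cols, position) combination, before the
    # per-category ambiguity filter is applied.
    worlds = []
    for rows in range(1, max_dim + 1):
        for cols in range(1, max_dim + 1):
            for r in range(rows):
                for c in range(cols):
                    w = np.zeros((rows, cols))
                    w[r, c] = 1.0
                    worlds.append(w)
    return worlds
```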
B.2.1 GENERALISATION TO UNSEEN WORLDS
Our primary investigation here is to assess whether the models can generalise to new worlds that differ in size, or location of the determining object. Each of the 672 worlds we create differs from one another in either aspect i.e., either the size of the world is different, or, when worlds are of the same size, the location of the 1 is in a different position. Therefore, we randomly sample 20 worlds that contain instances of concepts to serve as part of the prompt. The prompt therefore contains a question prefix followed by the grid world on a new line, followed by the answer prefix and the concept name. We ensure that there is a roughly equal distribution of concepts in the train split (e.g., if there are 8 concepts split over 20 samples, we ensure that 2 of each concept exist in the prompt, and then fill the remainder of the prompt by randomly sampling from the concepts). We then create a new prompt for every sample in the test set by appending the world representation and answer prefix to the end. We report results on this in the main paper in Table 1.
B.2.2 GENERALISATION TO UNSEEN CONCEPTS
Since we wish to assess a model’s ability to generalise to new concepts, here, we hold out concepts in the test set. Specifically, for every domain containing n concepts (e.g., 8 for cardinal directions), the train split contains an equal distribution of n − 1 concepts. We then test on samples that contain the last held-out concept, as well as the seen concepts, to assess how well the model has learned seen concepts and how well it generalises to unseen concepts. We report results on this in Appendix Table 14, and Figure 7 shows an example split of data that is held-out during test time. We note that this is a random split, unlike the sub-space split that specifically holds out a set of colours based on Euclidean distance.
B.2.3 GENERALISATION TO SUB-SPACES
Similar to the colours, for the cardinal directions we also report, in the main paper, results on a sub-space generalisation task. Specifically, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in a different sub-space (e.g., south, west, southwest). We do this for all sub-spaces and average over them. As seen in Table 10, the largest model can perform this task to some degree, achieving a 60% accuracy.
B.2.4 GENERALISATION TO COMPOSITIONS
Here, we specifically test model performance when holding out any instance of compositions of directions. Therefore, the training split contains examples of worlds annotated with the concepts north, south, east, west, and the test split contains examples of compositions of concepts, i.e., northeast, southeast, northwest, southwest. We report results in Table 9, and we see that here, even the largest model fails to perform the task. This means that when models have never seen any instance of a composition, they do not perform these compositions themselves; however, as seen in Table 10 and earlier results, when the training data contains some examples of compositions, models can indeed generalise to completely new compositions.

2We exclude grid worlds where the position of the 1 is in an ambiguous position for that concept category, e.g., directly in the center for a cardinal task, or in the middle row for a spatial task.
C EVALUATION METRICS
In this section we describe our evaluation metrics in detail, with examples of how certain generations might be scored by each metric.
C.1 SUBSTRING MATCH
For every test instance, there is exactly one string answer that is correct (e.g., the words “left” or “northeast” or “electric blue”). However, language models are free to generate any number of tokens given a prompt—i.e., they might generate exactly one token, or up to 100 tokens for any given prompt. We consider the first 5 tokens generated by the model to be the generated answer, and consider a substring match metric to be one that looks at whether or not the model generated answer lies in the ground truth answer. To make this clearer, we provide examples below, denoting a model-generated answer in blue and the ground-truth answer in green. By the substring metric, electric green for electric blue would give an accuracy of 0; however, electric or green would give an accuracy of 1. In practice, we do not see (the large) models generate half-answers (e.g., simply saying electric). Similarly, green for dark green has an accuracy of 1, but dark pink for dark green would have an accuracy of 0.
C.2 EXACT MATCH
For this metric, the model only gets a perfect accuracy if the model generated answer exactly matches the ground-truth answer (after stripping both of any trailing whitespace). Therefore saying green for dark green, or electric green for light electric green, would have an accuracy of 0.
C.3 GROUNDING DISTANCE
Equation C.3 shows our calculation of distances in the world. For a certain concept category C (for example, all colour names existing in the dataset), let c1 be the model predicted concept name and c2 be the true grounding. The grounding distance is the Euclidean distance between the two points when the predicted concept does exist in the space of concepts in the world, and is set to an arbitrarily high number (here, 500, which is higher than the maximum distance between any two points in space) when the predicted concept is some other word that does not fall in the concept category.
$$d(c_1, c_2) = \begin{cases} \sqrt{(c_{1x} - c_{2x})^2 + (c_{1y} - c_{2y})^2 + (c_{1z} - c_{2z})^2} & \text{for } c_1 \in C \\ 500 & \text{for } c_1 \notin C \end{cases}$$
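A sketch implementing this distance, with the fixed out-of-domain penalty of 500:

```python
import numpy as np

def grounding_distance(pred, true, concept_points, penalty=500.0):
    # Euclidean distance in the grounded space when the predicted name
    # is an in-domain concept; otherwise a fixed penalty larger than any
    # in-domain distance.
    if pred not in concept_points:
        return penalty
    p = np.asarray(concept_points[pred], dtype=float)
    t = np.asarray(concept_points[true], dtype=float)
    return float(np.linalg.norm(p - t))
```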
D PROMPT SIZE
The size of the input prompt to a model depends on its maximum context length, which is a hyperparameter fixed before training the model to completion. The question-answer pairs that we use to teach models grounded concepts differ in their lengths based on the concept category. For example, since the spatial and cardinal concepts require grid worlds that could be up to 10 rows and columns wide, a prompt for these categories might contain a smaller number of samples. Since the colour groundings are 3-digit RGB codes, a larger number of samples can be given in a prompt. We report accuracy curves of models when given an increasing number of samples in the prompt below. We see that past 20 samples for spatial terms, and 60 samples for colours, models have no significant increase in performance. When we report results in Tables ?? and 1, we report numbers for a fixed number of samples in a prompt (20 and 60 respectively) for all models.
Interestingly, once we get past a certain number of prompt samples, there is no significant increase in model performance on the task. This result hints at two things. First, the larger models seem to learn a task, and generalise to new samples, with only a small number of samples when taught in a few-shot prompting regime. This seems to hold for the smaller models as well, i.e., although they do not learn the task well, increasing the number of samples within a prompt does not significantly increase model performance. Second, the significant difference in performance based on model size hints at the fact that the limiting factor for good performance is the number of parameters of a model. More concretely, the generalisation extent of each model seems to max out at some number of input prompts, with a trend of increasing performance on increasing model size. Therefore, a larger model (than the ones we have here) might be required in order to achieve better grounded generalisation.
E MODEL GENERATIONS
We show example generations from the model in Table 8.
F PRE-TRAINING DATA
F.1 WHAT HAVE MODELS SEEN IN THEIR TRAINING DATA THAT MIGHT BE RELATED TO GROUNDING?
Pre-trained language models have been trained on large text corpora available on the internet—a diverse source of information that covers many domains. There is speculation that the remarkable generalisation performance of these models stems from having seen instances of the task or related information at some point in their training data. However, this data has not been made public, which makes this a hypothesis that is hard to confirm. In our domain of grounded concepts as well, it is unclear what aspects of similar data the model might have had access to during training, or what inductive biases from certain types of data it might have learnt, that allow it good performance on grounded tasks. We attempt to assess this in the following way. We consider all the concept words in every concept category, and all the world representations, and consider these the prompts for the models. For each word, we wish to assess the distribution of text generated, which should, in some sense, reflect the distribution of text seen during training. For example, for the world representations (i.e., a 2D array), we assess how often the model might use a spatial or cardinal term when simply prompted with an array, thus giving us insights into how often the training data might have contained such an array in close proximity to such a word. Similarly, for the colour domain, we evaluate how often the model predicts a correct colour name when prompted with an RGB code. We see that when models are simply prompted with a world representation, they tend to generate similar-looking world representations, instead of conceptual words (such as “left”) that might be associated with it. When prompted with RGB codes, the GPT-3 model only has a 7.3% accuracy of correctly generating the corresponding colour name in a 5-token sequence after the prompt.
Cardinal (Original / Rotated / Random)

Top-1 Accuracy
  R-IV    0.00  0.00  0.00
  R-ID    0.13  0.13  0.13
  124 M   0.03  0.02  0.02
  355 M   0.03  0.04  0.03
  774 M   0.04  0.03  0.02
  1.5 B   0.07  0.06  0.05
  175 B   0.12  0.11  0.10

Top-3 Accuracy
  R-IV    0.00  0.00  0.00
  R-ID    0.13  0.13  0.13
  124 M   0.03  0.04  0.04
  355 M   0.06  0.05  0.04
  774 M   0.05  0.07  0.05
  1.5 B   0.11  0.10  0.07
  175 B   0.23  0.21  0.17
Table 9: Evaluations in the cardinal directions domain, where we show models examples of single directions (north, south, east, west) and test them on compositions of directions (northeast, southeast, northwest, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds. We see that all models fail on this task, i.e., when they have never seen compositions of terms before, they never predict them at test time.
Cardinal (Original / Rotated / Random)

Top-1 Accuracy
  R-IV    0.00  0.00  0.00
  R-ID    0.13  0.13  0.13
  124 M   0.11  0.10  0.05
  355 M   0.10  0.11  0.06
  774 M   0.13  0.12  0.08
  1.5 B   0.13  0.14  0.10
  175 B   0.30  0.29  0.08

Top-3 Accuracy
  R-IV    0.00  0.00  0.00
  R-ID    0.13  0.13  0.13
  124 M   0.09  0.09  0.08
  355 M   0.19  0.17  0.10
  774 M   0.17  0.15  0.11
  1.5 B   0.21  0.20  0.14
  175 B   0.60  0.61  0.09
Table 10: In this table, we show models examples of concepts within a sub-space (north, east, northeast) and test them on concepts in a different sub-space (south, west, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds. We see that the largest model can indeed perform this task, i.e., even having seen only a sub-space of the world (along with compositions), it can generalise to a new sub-space and new compositions.
G EXPERIMENTS WITH MASKED LANGUAGE MODELS
The results in the main paper focus on GPT-2 and GPT-3 language models, i.e., autoregressive left-to-right language models that show impressive capabilities for in-context learning. Here, we provide comparisons to a masked language model, specifically a BERT model (Devlin et al., 2018). For our implementation we use a BERT-base model containing 110 million parameters. Similar to the GPT-x experiments for in-context learning with prompts, we “teach” the BERT model the space of concepts in the following way.
• BERT zero-shot (no-context) In this setting, for every sample in the test set (e.g., an RGB colour), we create an input prompt that masks the name of the colour, and ask the model to fill in the colour name. The prompt, therefore, would look like RGB: (255, 0, 0) Colour: <mask>. For the same test set, we evaluate the Top-1 and Top-3 accuracy of the model filling in the mask with the correct colour, using the same substring-match metric.
• BERT zero-shot (in-context) In this setting, we add several samples into the context of the model, and ask the model to fill in the masked token with the colour name, for all samples in the test set. Since the context-width of BERT is smaller than the GPT models, we can only include a smaller number of samples (around 7).
• BERT zero-shot (fine-tuned) In this setting, we fine-tune the BERT model on 67 samples (the same number of samples that the GPT models got as input in a prompt) and then report performance when tested on all the colours in the test set.
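The zero-shot (no-context) setting can be sketched with the Hugging Face fill-mask pipeline, as below; note that a single mask recovers only one-token colour names, which is a simplification of the setup described above.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def bert_zero_shot_colour(rgb, k=3):
    # BERT fills in the masked colour name given only the RGB code.
    prompt = f"RGB: {rgb} Colour: {fill.tokenizer.mask_token}"
    return [pred["token_str"] for pred in fill(prompt, top_k=k)]
```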
Table 13 shows results of the BERT models when tested on a held-out sub-space of colours, and Table 14 shows results when tested on a randomly held-out split of colours.
1. What is the focus of the paper, and what are the author's main contributions?
2. What are the strengths of the proposed approach, particularly regarding its ability to generalize to unseen concepts?
3. What are the weaknesses of the paper, especially concerning its use of metrics and conclusions drawn from the results?
4. Do you have any concerns about the methodology or experimental design?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This work aims to investigate whether large language models (LMs) pretrained only on text can implicitly learn grounded concepts of the world beyond text. Specifically, the authors test whether the LMs can map some conceptual domains (e.g., direction, color) to grounded world representations (e.g., a textualized grid world, RGB representations of colors). The authors give a rather small number of examples as prompts to the models in an in-context learning setup. They find that large LMs like GPT-3 can often output the correct concept for the grounded world representations, even though it is likely that the model has not seen the grounded world representations in its pretraining. They conclude that the text-only LMs may already learn the grounded representations implicitly, without explicit from-scratch training as in visual language models.
Review
I think this paper investigates a very interesting problem. The experiments are rather thorough, with different levels of controls (e.g., semantic-invariant transformations to the world representations, generalization to unseen worlds or unseen concepts). The writing is clear and structured as well.
However, I think the concerns that some readers might raise include:
(1) Metric. Is the top-3 accuracy meaningful for the task, especially for the spatial and cardinal problems where the concept space is very small, and for GPT-3 that knows to output in-domain words only? Is the substring metric suitable for the color problem, especially in the “unseen concept” setup? For example, if “light blue” is a seen concept and “dark blue” is the test-time unseen concept, then answering the seen concept “light blue” in the unseen concept setup would result in a perfect accuracy. Would that defeat the purpose of testing generalization to unseen concepts, like in Table 2?
(2) Conclusion drawn from the results. The authors argue that if the LM successfully generates the correct concept based on the grounded representation (likely “unseen” in the pretraining data), it means that the model knows to ground the concept to the non-text world. However, is it possible that the model doesn’t understand the relationship between the concepts and the grounded representations, but instead utilizes a similarity between the test grounded representation and the grounded representations in the in-context prompts? For example, upon seeing the test representation (e.g., [0,1,0,0] in the spatial domain, or RGB (140, 0, 255) in the color domain), the model can use a simple strategy: copying the concept of a bunch of most similar representations in the in-line prompt examples (e.g., [0, 1, 0, 0, 0], or RGB (145, 0, 255)). This strategy would not involve the concept of “left” or “pink”, and is robust to the rotation transformation (while not robust to the random transformation, if each point in the world was transformed independently). This would align with the results in Table 1. To check whether this hypothesis is (partially) true, we can look at experiments like:
A: Test on some real unseen concepts. This was done in the paper, like in spatial and cardinal columns in Table 2, Table 9, 10, 11 (in the appendix). But the performance is not very strong in these cases even for GPT-3 (top-1 accuracy).
B: Test with fewer prompts. This is to prevent the model from memorizing similarity with the prompting examples too much. This was also done in the paper (Figure 6 in the appendix). Again, the performance is not strong, if the number of prompts goes below 20 or 60.
C: Replace all of the concept names with concepts in an unrelated domain (e.g., substituting all “left” and “right” with “apple” and “orange”). If the performance is above baseline in this setup, should we conclude that the LM implicitly learns to map fruit concepts to the grounded spatial world? This was not done in the paper and may be a good control experiment.
(3) Though the paper investigates an interesting problem, the overall takeaway of this work is not very clear to me. How is the analysis useful for future work?
(4) Some details in the paper should be checked. For example, in Section 2.1, the authors say that all models are pretrained on the 40GB OPENAI-WT dataset, but this is not true for GPT-3? Also for the color experiments, it is not clear whether 60 or 6+57 or 70 (as mentioned in B.1.1 in the appendix) prompts were used. |
ICLR | Title
Mapping Language Models to Grounded Conceptual Spaces
Abstract
A fundamental criticism of text-only language models (LMs) is their lack of grounding—that is, the ability to tie a word for which they have learned a representation to its referent in the non-linguistic world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual structure of language, as demonstrated by their ability to answer questions, generate fluent text, or make inferences about entities, objects, and properties that they have never physically observed. In this work we investigate the extent to which the rich conceptual structure that LMs learn indeed reflects the conceptual structure of the non-linguistic world—which is something that LMs have never observed. We do this by testing whether the LMs can learn to map an entire conceptual domain (e.g., direction or colour) onto a grounded world representation given only a small number of examples. For example, we show a model what the word “left” means using a textual depiction of a grid world, and assess how well it can generalise to related concepts, for example, the word “right”, in a similar grid world. We investigate a range of generative language models of varying sizes (including GPT-2 and GPT-3), and see that although the smaller models struggle to perform this mapping, the largest model can not only learn to ground the concepts that it is explicitly taught, but appears to generalise to several instances of unseen concepts as well. Our results suggest an alternative means of building grounded language models: rather than learning grounded representations “from scratch”, it is possible that large text-only models learn a sufficiently rich conceptual structure that could allow them to be grounded in a data-efficient way.
1 INTRODUCTION
Large pre-trained language models (LMs) trained on text corpora have shown remarkable progress on a range of natural language understanding tasks (Radford et al., 2019; Brown et al., 2020). Such models have demonstrated their ability to generate fluent dialogue (Brown et al., 2020), make commonsense inferences Zellers et al. (2019), and reconstruct taxonomies and word relations (Chen et al., 2020). However, it has been argued that true meaning cannot be learned from the form of language alone (i.e., from text) because it is a word’s use in the non-linguistic world that imparts it meaning (Bender & Koller, 2020; Bisk et al., 2020). For example, although LMs might learn from textual co-occurrences that the words north and south are opposites, the grounded meaning of these words, i.e., the direction that you should travel if you are told to go north, is something to which these models, by definition, do not have access during training.
While it is indisputable that text-only models do not learn representations of concepts that are grounded in the non-text world, it is possible for the structure of relations between concepts in text form to be identical to what a grounded model would learn. In principle, it is therefore possible for a text-only model’s conceptual space to be isomorphic to the “true” (i.e., grounded) conceptual space (Merrill et al., 2021). In this work, we investigate whether this is the case by asking whether models can learn to ground an entire domain (e.g., direction) after grounding only a subset of the points in that domain (e.g., left). Specifically, for generative LMs that have been trained only on large text corpora, we “orient” the models by showing them how some word forms (that they have learned during training) are used in simple text worlds—for example, what the direction north maps
to in a textual representation of a grid world (see Figure 1). We then evaluate two types of generalisation. First (§3.1), we evaluate generalisation to unseen worlds. For example, if the model has seen several realisations of the word north in different grid worlds, can it correctly identify north in an unseen world (e.g., one of a different size or shape)? Second (§3.2), we evaluate generalisation to unseen but related concepts. For example, if the model has been shown grounded representations of north and east, can it correctly identify south and west, even though it was never shown them? We find that although the small language models (GPT-2 models that contain on the order of 100M parameters) cannot perform either generalisation well, the largest model (a GPT-3 model containing 175B parameters) can indeed learn groundings in the conceptual worlds we build. We analyse the predictions and errors made by models (§3.3) and find that the errors made by the small models are often due to a failure to recognize the domain, and thus generating random words by default. In contrast, the errors made by the largest model are often intuitive, e.g., predicting in-domain concepts that are reasonable substitutes for the target (e.g., maroon versus dark red).
2 EXPERIMENTAL DESIGN
2.1 MODELS
We test five autoregressive Transformer language models (Vaswani et al., 2017) of varying size, specifically the GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) models. Our smallest model contains 124M parameters, and the others follow increasing model sizes (355M, 774M, 1.5B and 175B parameters). All models are pre-trained on differently filtered versions of the OpenAI Web-Text dataset (Radford et al., 2019), composed of 40GB of English web text available on the internet. We generate up to 5 tokens per prompt and, to improve the robustness of our analyses, generate 3 samples per prompt. We use a temperature of 1 during generation and sample from the softmax probabilities produced at each time step using nucleus sampling (Holtzman et al., 2019) with p=0.85. We include more detail on the models and their training data in Appendix A.
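As a concrete illustration, the following is a minimal sketch of this decoding configuration, written against the Hugging Face transformers API for the GPT-2 models; it is our own illustration rather than the authors' released code, and the 175B GPT-3 model is only reachable through the OpenAI API.

```python
# A sketch of the generation settings above (temperature 1, nucleus sampling
# with p = 0.85, up to 5 new tokens, 3 samples per prompt).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")  # the 124M model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def complete(prompt, n_samples=3, n_tokens=5):
    """Draw `n_samples` continuations of up to `n_tokens` tokens each."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(
            **inputs,
            do_sample=True,
            temperature=1.0,          # temperature of 1
            top_p=0.85,               # nucleus sampling with p = 0.85
            max_new_tokens=n_tokens,  # up to 5 new tokens per prompt
            num_return_sequences=n_samples,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Strip the prompt tokens and decode only the generated continuations.
    prompt_len = inputs["input_ids"].shape[1]
    return [tokenizer.decode(seq[prompt_len:]).strip() for seq in out]
```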
2.2 IN-CONTEXT LEARNING
Several studies (Brown et al., 2020; Reynolds & McDonell, 2021) have shown that instead of finetuning generative LMs–i.e., a process that updates parameters learned by the model during pretraining–it is possible to achieve competitive performance by giving the model a small number of training examples within the prompt. This is often referred to as “in-context learning” or “fewshot prompting”. Specifically, a prompt includes n task examples that include a question prefix (e.g., “World:”) followed by the question, and an answer prefix (e.g., “Answer:”) followed by the answer to the question. After giving the model n examples in this manner, the prompt ends with a new question and only an answer prefix after which the model is expected to generate an answer to the last question, following the prompt format it has seen. By enumerating over all questions in the test set, we can obtain a model-generated answer for every test set question that we wish to evaluate. There are no gradient updates to any model parameters using this approach.
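A small helper like the following assembles such a prompt from (question, answer) pairs; this is a sketch, with the prefix strings mirroring the examples in the text and the helper name being ours.

```python
# A sketch of the few-shot prompt assembly used for in-context learning.
def build_prompt(examples, query, q_prefix="World:", a_prefix="Answer:"):
    """`examples` is a list of (question, answer) pairs; `query` is held out."""
    parts = [f"{q_prefix}\n{q}\n{a_prefix} {a}" for q, a in examples]
    # The prompt ends with a new question and a bare answer prefix, after
    # which the model is expected to generate the answer; no parameters
    # are updated at any point.
    parts.append(f"{q_prefix}\n{query}\n{a_prefix}")
    return "\n\n".join(parts)
```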
2.3 GROUNDED CONCEPT DOMAINS
The models that we test, by construction, can receive only text inputs. Thus, we focus on a set of grounded domains for which it is possible to faithfully represent the grounded meaning in text form. We briefly describe these domains below, summarise them in Figure 1, and describe in detail the prompt creation process for each generalisation task in Sections 3.2 and 3.1. We discuss the possibility of expanding this set of concepts in future work in Section 4.
Spatial Terms We consider 6 spatial concepts: left, right, up, down, top, bottom. Each of the above concepts can be represented in a grid world using the position of a special character (here, a ‘1’) in the world. To do this, we create grid world environments of varying sizes (where the number of rows and columns ranges from 1 to 8), where each world consists of ‘0’s and a single ‘1’.
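A minimal sketch of this construction follows; the exact labelling rules are not fully specified here, so the left/right rule below (based on which half of the grid contains the '1') is an assumption made for illustration, and analogous rules would cover the other pairs.

```python
# A sketch of the grid-world construction for spatial terms.
import numpy as np

def make_world(rows, cols, position):
    world = np.zeros((rows, cols))
    world[position] = 1.0
    return world

def label_left_right(world):
    _, col = map(int, np.argwhere(world == 1.0)[0])
    mid = (world.shape[1] - 1) / 2
    if col < mid:
        return "left"
    if col > mid:
        return "right"
    return None  # ambiguous middle position: excluded, per Appendix B.2

world = make_world(1, 4, (0, 0))
print(world)                    # [[1. 0. 0. 0.]]
print(label_left_right(world))  # left
```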
Cardinal Directions We consider eight cardinal directions: north, south, east, west, northeast, northwest, southeast, southwest. These are similar to spatial terms, except that they include compositional terms (e.g., northeast). We use grid-worlds of the same format as spatial terms to represent these concepts.
Colour Terms We consider colour terms in a three-dimensional space, using a dataset of 367 RGB colours (Abdou et al., 2021) that contains colour names (e.g., red, cyan, forest green) each associated with an RGB code (e.g., (255, 0, 0)). Therefore, the world representation in this case is not a grid-world, but an RGB code associated with every colour name. Figure 1 shows example RGB codes and colours that serve as part of a prompt given to the models we test.
2.4 ROTATED WORLDS TO CONTROL FOR MEMORISATION
Motivation The GPT-x models that we use have been trained on the CommonCrawl corpus (Radford et al., 2019), a collection of documents that contains web text freely available on the internet. Since we provide instantiations of grounded concepts in text form, it is very plausible that the domains described above have been encountered verbatim during training. For example, for spatial terms such as left, a model might have seen instances of matrices and linear algebra terms with the word left in close proximity; for colour terms, tables that map RGB colour codes to colour names are pervasive in web-text. We therefore include a control task in our experimental setup such that the model cannot succeed using simple memorisation. Rather, success on a task requires the model to truly perform a conceptual mapping between the ungrounded and grounded representations.
Isomorphism as a Control We use the concept of isomorphism to control for memorisation. Intuitively, imagine a situation where you are lost in the woods. Once pointed in the direction of north, you instantly know which way is south. However, this ability is not dependent on having been correctly pointed north–if someone were to incorrectly point east and tell you this was north, you would readily infer west to be south. This is because your reasoning depends on your knowledge of the relation between north and south, and between north and the world, rather than on having memorized the “true” grounding of each concept independently.
By the same logic, if a model is learning a grounding function, this should be dependent on the world in which it is being grounded. For example, in different worlds where the word red grounds to different points in space, the word blue, by analogy, shares a fixed conceptual relation to the word red. Therefore, it should ground in correspondingly equidistant ways in the two worlds. A model’s ability to learn two different grounding functions f vs. g should not be dependent on what the actual points ground to, as long as the structural relations between concepts in the space are preserved. Further, this should hold for all such isomorphic transformations that preserve the structure of the space, and importantly, it should not hold for random perturbations that distort the structure of the space, since such distortions would break the assumption that the relation between red and blue in ungrounded space is analogous to that in grounded space.
Implementation In the colour domain, since colour concepts exist in a 3D world of RGB codes, we rotate each point around a fixed axis by a certain degree to create a new isomorphic world. We repeat this control three times (for 90◦, 180◦ and 270◦ rotations) and average over the rotations in our evaluations. For cardinal directions, we rotate each cardinal concept by 90◦ in two dimensions, and do this three times and average over rotations. Since the spatial terms exist as pairs, we simply
swap the groundings of alternate terms (e.g., left and right). For random worlds that do not preserve the structure between word forms, we randomly assign a concept name (e.g., red) to a point in the world, and we do this for all concept names and categories to obtain a random world for each. Figure 2 (“Isomorphism and Rotated Worlds”) shows example transformations of colours on rotating by 90◦, as well as a random assignment of points.
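The sketch below illustrates the rotation control for the colour domain. The paper states each RGB point is rotated about a fixed axis; the specific choice here (rotation in the G-B plane, about an axis through the centre of the RGB cube) is our assumption, picked because 90-degree multiples then map the cube onto itself and keep every rotated point in the valid [0, 255] range.

```python
# A sketch of the rotated-world and random-world controls.
import random
import numpy as np

def rotate_rgb(rgb, degrees):
    theta = np.deg2rad(degrees)
    rot = np.array([[1.0, 0.0, 0.0],
                    [0.0, np.cos(theta), -np.sin(theta)],
                    [0.0, np.sin(theta), np.cos(theta)]])
    centre = np.array([127.5, 127.5, 127.5])
    return rot @ (np.asarray(rgb, dtype=float) - centre) + centre

def random_world(names, points):
    # Random control: shuffle the name-to-point assignment, which destroys
    # the structural relations between concepts instead of preserving them.
    return dict(zip(names, random.sample(list(points), k=len(points))))

print(rotate_rgb((255, 0, 0), 90))  # [255. 255.   0.]: red maps to yellow
```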
2.5 EVALUATIONS
Experimental Logic We report model performance in three settings: the original (“true”) world (e.g., that in which red maps to the actual RGB code for red), an average over three rotated worlds (e.g., worlds in which red maps to some other RGB code, but relations between colors are preserved), and a random world (in which relations between mappings are not preserved). If a model is performing the conceptual mapping in the desired way, we expect that performance should be high in the true world and that there should be no significant degradation in performance when moving to rotated worlds. We also expect that performance should be low in the random world.
Metrics When given a prompt, a generative LM is free to generate any number of tokens until having generated the EOS token that halts further generation. Since classification tasks usually correspond to labels that contain only a few words, the standard approach is to let the model generate an entire sequence of text and to then cut off the generation to the first n tokens (where typically n < 10). From the prompting mechanism, the model should learn to follow the prompt format to generate one label (e.g., instead of every label in succession), and since it does not receive any gradient updates during this learning, there is no incentive for it to predict all related labels within a n-token generation. We see that this is true in practice and show example generations in Appendix E. We set n = 5 and then report the following metrics.
TOP-1 ACCURACY If the ground-truth answer or any substring thereof lies in the generated answer (i.e., the first n tokens of the full generation), the model gets perfect accuracy. If the ground-truth answer does not exist in the generated answer, or exists in the generation outside of the cutoff limit, the model gets a 0 accuracy. For example, for the ground truth answer deep tuscan red, if the model generated answer is tuscan red or red the model gets a perfect accuracy, but if the model generated answer is deep red or wine or vermilion, the model gets an accuracy of 0. We also compute the same metric using exact match (i.e., the model is only correct if it generates exactly deep tuscan red). We find the values are lower but the trends are the same; see Appendix ??.
TOP-3 ACCURACY This metric is analogous to Top-1 except that instead of only considering the most probable generation from the model, we collect the second and third most probable answer sequences as well. The model gets a perfect accuracy if the correct answer exists in any of the three generated answers. If the correct answer does not exist in any of the 3 generated answers, or exists in any of the generations outside of the cutoff limit, the model gets an accuracy of 0. Again, see Appendix ?? for results using exact match.
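A minimal sketch of these two metrics is below; the symmetric substring check is one reading of the descriptions above and in Appendix C.1, and the helper names are ours.

```python
# A sketch of the substring-match Top-1 / Top-3 accuracy metrics.
def substring_correct(generated, truth):
    generated, truth = generated.strip().lower(), truth.strip().lower()
    return bool(generated) and (generated in truth or truth in generated)

def top_k_accuracy(generations, truth, k=3):
    # Top-1 uses k=1 (most probable generation only); Top-3 uses k=3.
    return float(any(substring_correct(g, truth) for g in generations[:k]))

assert substring_correct("tuscan red", "deep tuscan red")    # counts as correct
assert not substring_correct("deep red", "deep tuscan red")  # counts as wrong
```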
GROUNDING DISTANCE For analysis purposes (§3.3), we wish to assess how far off models are in cases where they are wrong. For this, we need to quantify the distance between the model’s predicted answer and the ground truth answer. For example, in the colour domain, the distance between two points can be computed as the Euclidean distance between two RGB codes. For every answer generated by the model (e.g., the word pink in the colour domain), if the answer is an acceptable grounded term that exists in the world, we can compute its distance to the true grounding (e.g., the colour red). However, if the generated answer was an unrelated word (e.g., the word cat) that does not exist in the same domain, no distance can be computed in the world. Therefore, we calculate a distance metric as the Euclidean distance between two points in space when the generated answer falls in the domain. When the generated answer does not fall in the domain, we set the distance to a number significantly higher than the largest distance between two in-domain concepts. We provide equations and details on calculation of this metric in Appendix C.
Baselines Given that the language models are free to generate any word that exists in their vocabulary as a potential answer, we choose two random baselines over vocabulary words against which to compare model performance. We use R-IV (i.e., random in-vocabulary) to denote a baseline that randomly selects from among all words in the model’s vocabulary. We use R-ID (i.e., random in-domain) to denote a baseline that randomly selects from amongst only the in-domain words (e.g., from colour terms); this is 6, 8 and 367 words respectively for the spatial, cardinal and colour categories. Note that the generative LMs do not have such a domain restriction over words, as they are free to choose any token from the full vocabulary (like R-IV).
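A small sketch of the two baselines follows; the expected Top-1 accuracy of R-ID is 1/|domain| (e.g., 1/8, or about 0.13, for the eight cardinal directions, which matches the R-ID rows reported in the appendix tables).

```python
# A sketch of the R-IV (random in-vocabulary) and R-ID (random in-domain)
# baselines.
import random

CARDINALS = ["north", "south", "east", "west",
             "northeast", "northwest", "southeast", "southwest"]

def r_iv_baseline(vocabulary):
    return random.choice(vocabulary)  # any token in the full vocabulary

def r_id_baseline(domain_words):
    return random.choice(domain_words)  # in-domain words only

print(r_id_baseline(CARDINALS))  # a uniformly random in-domain guess
```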
3 RESULTS
3.1 GENERALISATION TO UNSEEN WORLDS
Our first investigation looks into how well models generalise known concepts to unseen worlds. For example, if a model has seen a few examples of a concept (such as left) depicted in some grid worlds, can it correctly identify an instance of left in a different grid world (e.g., one with a different size or orientation)? Note that we can only conduct this type of evaluation in the spatial and cardinal domains, since, for the colour domain, there is only ever one world, with one grounding for each concept (e.g., the colour red has exactly one grounding to a 3-digit RGB code in a 3-D RGB space).
Data We create prompts that include 20 examples of grounded concepts in a set of grid worlds. For each domain (e.g., cardinal directions that contain 8 concepts, and spatial terms that contain 3 pairs of 2 concepts), we include a (roughly) equal sample of concepts among these 20 examples. Then, we append a held-out grid world to the end of the prompt and evaluate whether or not the models generate the correct concept label for these unseen worlds. Figure 3 shows example prompts
given to the model and example generations from three different models. We report results averaged over all generations in Table 1. In general, we see the desired trend in which model performance on the original and rotated worlds is well above that on the random world. We do not see consistent or significant performance degradation when moving from the original to the rotated world, suggesting performance is not due to simple memorisation. Comparing across models, we see that the smaller models struggle to even learn concepts that were taught to them in a few-shot manner. For example, for the spatial category, the performance of the smallest model is below that of a baseline which guesses randomly among in-domain words, suggesting the model even fails to learn the general domain of the task (discussed more in Section 3.3). In contrast, the largest model (GPT-3) has a 45% Top-1 accuracy and a 76% Top-3 accuracy for the spatial category.
3.2 GENERALIZATION TO UNSEEN CONCEPTS
Our primary interest here is in the model’s ability to map its ungrounded conceptual structure to a grounded world. Specifically, we want to see that the model is able to ground an entire conceptual domain when only taught how to ground a small subset of the points in that domain. To test this, we show models example concepts within a sub-space of the domain (e.g., north, east), while holding out other concepts (e.g., south, west). We then test them on instances of held-out concepts. Figure 6 depicts this setup in the colour domain: we train the model using primarily shades of red, but test on shades of blue.
Data To create sub-spaces for the colour domain, we create prompts that contain 3 primary, 3 secondary colours, and 57 other colours within a sub-space, determined in the following way. For a certain colour (e.g., red) we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour. We create 6 such sub-spaces, centered around each of the primary and secondary colours, and report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. For the cardinal directions, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in an entirely different sub-space (e.g., south, west, southwest). The training split therefore contains worlds annotated with one sub-space, and we test on the remaining held-out concepts. We do this for all sub-spaces and average over them.
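A sketch of the sub-space construction for the colour domain follows: for a centre colour (e.g., red), the sub-space contains every colour whose RGB code lies within Euclidean distance 150 of the centre. The variable names and the tiny colour dictionary are illustrative, not the actual 367-colour dataset.

```python
# A sketch of the distance-based sub-space split for colours.
import numpy as np

def subspace_split(colours, centre_name, radius=150.0):
    centre = np.array(colours[centre_name], dtype=float)
    inside, outside = {}, {}
    for name, rgb in colours.items():
        dist = np.linalg.norm(np.array(rgb, dtype=float) - centre)
        (inside if dist <= radius else outside)[name] = rgb
    return inside, outside  # prompt with `inside`, test on held-out `outside`

colours = {"red": (255, 0, 0), "dark red": (139, 0, 0), "blue": (0, 0, 255)}
train, test = subspace_split(colours, "red")
print(sorted(train), sorted(test))  # ['dark red', 'red'] ['blue']
```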
We report results on all concept categories in Table 2. As before, we see that models do not appear to be exploiting simple memorisation (evidenced by similar performance in original vs. rotated worlds) and that only the largest models appear capable of getting reasonable performance on the task. That said, the largest GPT-3 model achieves impressive results given the difficulty of the task. For example, it achieves over 40% Top-1 accuracy in the color domain, which requires generating a label like violet despite having never seen such a label during training (Figure 4).
3.3 ERROR ANALYSIS
Given the results above, we investigate “how wrong” models are when they fail to produce the expected ground-truth label. One clear trend of the large model is the ability to generate “on topic” (i.e., in-domain) responses, regardless of whether or not those responses are correct. For example, if the expected response to an input is “left”, a model which produces right is wrong in a very different way than a model that produces nonsensical outputs such as “[0[0” or function words such as “the” (see Figure 3). We see that the largest 175B parameter model almost always produces in-domain answers. We evaluate this by checking whether the generation lies in the set of related words for that category (e.g., all colours, spatial, or cardinal words, respectively). We see that the smaller models fail to do this. That is, their generations tend to be unrelated words that might have had high prominence (e.g., function words). On evaluating accuracy of generations being “in-domain”,
we see that the smallest model has only a 53% accuracy while the largest has a 98% accuracy of generating in-domain answers.
Second, we evaluate grounded distance (§2.5) to measure the degree of correctness of responses. In the colour domain, the colours dark red and wine are close enough in space that they might be intuitive alternate answers for one another. However, our top-1 and top-3 metrics only assess string matches and do not account for this. Thus, we look at the distance between the colour denoted by the predicted label (when it exists, i.e., when the model generated a legitimate colour name) and the colour the model was asked to label. The lower this distance is, the “less wrong” the model’s prediction is.
Table 3 reports evaluations measured by grounding distance for the colour domain. For every test instance, we compute the distance between the model predicted grounding and the true grounding as defined in Equation C.3. We then average over all computed distances to report one number that tells us how close, on average, model predictions are to the true grounding in the world. We show example visualisations of colours in RGB space that lie within a certain distance threshold of each other. We see, for the largest model, the distances between model-predicted groundings from the true grounding are significantly lower than random. Such a result suggests that the model’s errors are often justifiable and the scores given by our string-matching metrics might be an underestimate of the model’s true performance.
4 DISCUSSION
Our empirical results suggest that very large LMs (specifically, GPT-3), even when trained only on text, learn a conceptual space that can, at least in some settings, be “grounded” using only a small number of training examples. The fact that these models succeed even in isomorphic rotated worlds suggests that these models are not succeeding via naive memorisation. Rather, this suggests that they may be exploiting something about the conceptual structure of the space learned from text in order to map onto a new space that was not explicitly encountered during training.
A major limitation of our approach is that there are some grounded concepts (e.g., visual and sensory inputs) that cannot be easily encoded in text form. By construction, the LMs that we use are restricted to text-only inputs, thus our focus is on domains (e.g., colours and directions) that have a well-defined textual representation. This is only a small set of all the potential grounded concepts we would wish to teach LMs. Although many forms of data can be coerced into text format (e.g., we represent color using discrete digits to represent RGB space), complex concepts may lose fundamental aspects of their meaning when represented in this way. For example, for color, a coarse-grained notion of numeric proximity, derivable from text (Wallace et al., 2019; Naik et al., 2019), may be sufficient to differentiate the concepts we explore, but for more complex visual inputs (e.g., the output of a CNN image encoder), a text-based numeric representation is unlikely to capture the necessary degree of nuance. Future work would need to consider ways of adapting GPT-3-like models to accept non-textual inputs while still exploiting the text-based conceptual structure.
If such limitations were addressed, our results are suggestive of a potentially promising way in which text-only training could support general purpose, grounded models of meaning. Specifically, our results imply that the conceptual space a model learns from text might be nearly isomorphic to
what it would learn from interacting in a grounded world, and that models can be taught to map between those conceptual spaces without requiring explicit grounding for every concept. This is exciting, as domain-general text corpora are readily available, while domain general multimodal corpora–e.g., containing sufficient information on abstract concepts such as emotions (happy) or time (used to)–might be difficult or impossible to collect. If models like GPT-3 could be adapted to receive non-text prompts (as discussed above), our results suggest that the rich conceptual structure such models learn from text could be bootstrapped into powerful grounded models of language.
5 RELATED WORK
There is a significant amount of work that focuses on understanding how LMs represent and reason about concepts, as well as work that directly attempts to build models that take text inputs and ground them to elements in the world. We situate our work within these two bodies of literature: one that investigates how LMs understand linguistic phenomena and word meaning, and another, that attempts to situate language in models of the world. We describe each body of work below.
Meaning and Understanding in LMs With the advent of large LMs of increasing orders of magnitude, there has been speculation on the capabilities of such models, and whether they truly understand the meaning of the words they are learning representations for. Several works that attempt to probe linguistic phenomena in LMs show that the representations learned by such models encode syntactic dependencies and coreference information (Tenney et al., 2019) and word-sense information (Chen et al., 2020). Work that investigates the ability of models to form word associations (Hwang et al., 2020) finds that large LMs can indeed perform such a task, suggesting that pretrained LMs not only recognize that entities are related, but can differentiate how they are related. Especially relevant to our work is recent work that investigates alignment of language models to colours (Abdou et al., 2021) or state changes (Li et al., 2021). Our work is complementary to this prior work, and we specifically ask whether the text space can be reliably mapped onto the grounded space.
Natural Language Grounding There is an increasing amount of work, usually at the intersection of NLP and fields like vision and reinforcement learning, that aims to use natural language to instruct agents about aspects of the world. In the case of vision, this could be to learn correspondences between language descriptions and pixels in an image (Eichenberg et al., 2021; Tsimpoukelli et al., 2021), or in the case of RL, to build agents that understand natural language instructions in order to take actions in a world that follow the instruction—for example to navigate to a goal (Artzi & Zettlemoyer, 2013; Patel et al.), or to solve tasks in different languages (Ku et al., 2020). Most of these approaches, however, train LMs from scratch, usually with inputs that contain both textual information and grounded world information. Our work is different in that we take an LM that was previously trained only on text, and attempt to teach it a concept in the world without re-training it. With only a few samples of what the concept grounds to, we investigate how well large LMs can use the structure of language and associations between word forms in order to generalise to grounded concepts.
6 CONCLUSION
This work investigates the extent to which large language models, trained only on text, can be taught to map previously learned word forms onto conceptual worlds. We investigate several generative language models in colour and direction domains, represented as discrete grid worlds given to the model as text input. With only a few examples of such grounded instances, although smaller models struggle to generalise from text to grounded concepts, we see that the largest model does indeed learn the space of concepts that we test it on. We analyse where models fail and see that the smallest models often produce random, unrelated outputs, but the errors made by the largest model are quite intuitive, for example, predicting colours very close in space to the true grounding. We discuss the limitations of focusing on text-only inputs to teach models grounded concepts, as well as the implications of such work for allowing large language models to be mapped, in a data-efficient way, to the grounded world.
APPENDIX
We provide, as supplementary material, additional information about the data generation, models used, examples of model generations, as well as additional results across all models.
A MODELLING DETAILS
We use a GPT-3 model (Brown et al., 2020) and four GPT-2 models (Radford et al., 2019) from the Hugging Face Transformers library (Wolf et al., 2019). Each of these is a pretrained autoregressive transformer model, trained on the OpenAI WebText corpus, containing around 8 million documents. The top 15 domains by volume in WebText are: Google, Archive, Blogspot, GitHub, NYTimes, Wordpress, Washington Post, Wikia, BBC, The Guardian, eBay, Pastebin, CNN, Yahoo!, and the Huffington Post. Individual model parameters and layers are shown in Table 5. The pretrained models use byte-pair encoding (BPE) tokens (Sennrich et al., 2015) to represent frequent symbol sequences in the text, and this tokenisation is performed on all new input prompts to generate text from the model. We report the hyperparameters used by the pretrained model in Table 6.
Table 5: Parameters and layers of each model.
Table 6: Hyperparameter selection.
B DATA GENERATION
In this section we describe how we create prompt data for the concept categories we wish to evaluate. We describe what the “worlds” look like for each category, as well as detail on the generation of such worlds and division into train and test splits, for colours in Section B.1 and for both spatial and cardinal concepts in Section B.2.
B.1 COLOUR CONCEPTS
We draw from an existing dataset containing RGB codes associated with colour names, for 367 colours. In this concept category, a grounded realisation of a colour in the world is simply its RGB code, i.e., the point that it grounds to in 3-dimensional colour space. Therefore, all grounded examples in prompts follow the format of RGB: (x, y, z) followed by Colour: concept name, where the items in red are replaced with instances of RGB codes and names from the dataset of colours we use (Abdou et al., 2021). This gives us a total of 367 samples of RGB codes paired with colour names. We create training and testing splits for different generalisation evaluations in the following way.
B.1.1 GENERALISATION TO UNSEEN CONCEPTS
Random Split We create prompts to perform this experiment in two ways. The first, as reported in §3.2, creates a “train split”, i.e., samples within the prompt, by first selecting the 3 primary and 3 secondary colours to always be in the prompt. We then sample 64 other colours from the set of colours to be part of the train split. The prompt therefore contains 70 samples of the question and answer prefixes followed by the RGB codes and colour names. For every sample in the test set, we create a new prompt that appends a question prefix and the RGB code of that sample to the end of the prompt, followed by the answer prefix (with no answer). For each prompt, the model is then required to generate an answer. Figure 6 shows the random sample of colours in the prompt and we report results on these in Appendix Table F.1.
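A minimal sketch of this prompt assembly follows: each training sample is an "RGB: (r, g, b)" question line followed by a "Colour: name" answer line, and the prompt ends with a query RGB code and an empty answer prefix. Helper names are ours.

```python
# A sketch of the colour prompt format described above.
def colour_prompt(train_colours, query_rgb):
    lines = []
    for name, rgb in train_colours.items():
        lines.append(f"RGB: {rgb}")
        lines.append(f"Colour: {name}")
    lines.append(f"RGB: {query_rgb}")
    lines.append("Colour:")  # the model must generate the answer here
    return "\n".join(lines)

print(colour_prompt({"red": (255, 0, 0)}, (0, 255, 255)))
# RGB: (255, 0, 0)
# Colour: red
# RGB: (0, 255, 255)
# Colour:
```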
Sub-space Split We then perform another experiment to evaluate a model’s ability to learn concepts within a sub-space, and generalise to a whole new sub-space. Here, the prompt contains 3 primary and 3 secondary colours, as well as 57 other colours that we select in the following way. For each of the 6 primary and secondary colours, we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour.1 Since the model has seen the 6 primary and secondary colours, as well as a certain space of the colour world, we wish to evaluate how well the model can generalise to a new sub-space of the world, by using the colour word-form associations. We then report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. We report results on these in the main paper in Table 2.

1We show visualisations of such colour sub-spaces in the main paper in Figure 4.
B.2 SPATIAL AND CARDINAL CONCEPTS
We consider a “world” to be a 2D matrix filled with 0s and containing exactly one 1, where the position of the 1 (i.e., the determining object) refers to the concept of interest (e.g., a spatial location like “left” or a cardinal direction like “north”). We create grid worlds filled with 0s and one 1, with row and column counts ranging from 1 to 8, giving us 672 total grid worlds.2 For each grid world, all grounded examples follow the format of World: [1. 0. 0. 0.] followed by Direction: concept name, where the items in red are replaced with instances of different grid worlds and corresponding concepts, based on the location of the 1. We create training and testing splits for different generalisation evaluations in the following way.
B.2.1 GENERALISATION TO UNSEEN WORLDS
Our primary investigation here is to assess whether the models can generalise to new worlds that differ in size, or location of the determining object. Each of the 672 worlds we create differs from one another in either aspect i.e., either the size of the world is different, or, when worlds are of the same size, the location of the 1 is in a different position. Therefore, we randomly sample 20 worlds that contain instances of concepts to serve as part of the prompt. The prompt therefore contains a question prefix followed by the grid world on a new line, followed by the answer prefix and the concept name. We ensure that there is a roughly equal distribution of concepts in the train split (e.g., if there are 8 concepts split over 20 samples, we ensure that 2 of each concept exist in the prompt, and then fill the remainder of the prompt by randomly sampling from the concepts). We then create a new prompt for every sample in the test set by appending the world representation and answer prefix to the end. We report results on this in the main paper in Table 1.
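To make the world format above concrete, here is a minimal sketch of the construction and serialisation. A numpy array printed with default formatting reproduces the "[1. 0. 0. 0.]" style shown above; the enumeration yields 1296 candidate worlds before the ambiguous positions mentioned in the footnote are excluded (the exclusion rules themselves are not reproduced here).

```python
# A sketch of the grid-world enumeration and prompt serialisation.
import numpy as np

def all_worlds(max_size=8):
    worlds = []
    for rows in range(1, max_size + 1):
        for cols in range(1, max_size + 1):
            for i in range(rows):
                for j in range(cols):
                    w = np.zeros((rows, cols))
                    w[i, j] = 1.0
                    worlds.append(w)
    return worlds

def world_example(world, concept):
    return f"World: {world}\nDirection: {concept}"

print(len(all_worlds()))  # 1296 candidate worlds before exclusions
print(world_example(np.array([1.0, 0.0, 0.0, 0.0]), "left"))
# World: [1. 0. 0. 0.]
# Direction: left
```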
B.2.2 GENERALISATION TO UNSEEN CONCEPTS
Since we wish to assess a model’s ability to generalise to new concepts, here, we hold out concepts in the test set. Specifically, for every domain containing n concepts (e.g., 8 for cardinal directions), the train split contains an equal distribution of n − 1 concepts. We then test on samples that contain the last held-out concept, as well as the seen concepts, to assess how well the model has learned seen concepts, and generalises to unseen concepts. We report results on this in Appendix Table 14, and Figure 7 shows an example split of data that is held-out during test time. We note that this is a random split, unlike the sub-space split that specifically holds out a set of colours based on Euclidean distance.
B.2.3 GENERALISATION TO SUB-SPACES
Similar to the colours, for the cardinal directions we also report, in the main paper, results on a sub-space generalisation task. Specifically, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in a different sub-space (e.g., south, west, southwest). We do this for all sub-spaces and average over them. As seen in Table 10, the largest model can perform this task to some degree, achieving a 60% accuracy.
B.2.4 GENERALISATION TO COMPOSITIONS
Here, we specifically test model performance when holding out any instance of compositions of directions. Therefore, the training split contains examples of worlds annotated with the concepts north, south, east, west, and the test split contains examples of compositions of concepts, i.e., northeast, southeast, northwest, southwest. We report results in Table 9, and we see that here, even the largest model fails to perform the task. This means that when models have never seen any instance of a composition, they do not perform these compositions themselves; however, as seen in Table 10 and earlier results, when the training data contains some examples of compositions, models can indeed generalise to completely new compositions.

2We exclude grid worlds where the position of the 1 is in an ambiguous position for that concept category, e.g., directly in the center for a cardinal task, or in the middle row for a spatial task.
C EVALUATION METRICS
In this section we describe our evaluation metrics in detail, with examples of how certain generations might be scored by each metric.
C.1 SUBSTRING MATCH
For every test instance, there is exactly one string answer that is correct (for e.g., the words “left” or “northeast” or “electric blue”). However, language models are free to generate any number of tokens given a prompt—i.e., they might generate exactly one token, or up to a 100 tokens for any given prompt. We consider the first 5 tokens generated by the model to be the generated answer— and consider a substring match metric to be one that looks at whether or not the model generated answer lies in the ground truth answer. To make this clearer, we provide examples below, denoting a model-generated answer in blue and the ground-truth answer in green. By the substring metric, electric green for electric blue would give an accuracy of 0, however electric or green would
give an accuracy of 1. In practice, we do not see (the large) models generate half-answers (e.g,. simply saying electric). Similarly green for dark green has an accuracy of 1, but dark pink for
dark green would have an accuracy of 0.
C.2 EXACT MATCH
For this metric, the model only gets a perfect accuracy if the model generated answer exactly matches the ground-truth answer (after stripping both for any trailing whitespaces). Therefore saying green
for dark green, or electric green for light electric green would have an accuracy of 0.
C.3 GROUNDING DISTANCE
Equation C.3 shows our calculation of distances in the world. For a certain concept category C (for example, all colour names existing in the dataset), let c1 be the model predicted concept name and c2 be the true grounding. The grounding distance is the Euclidean distance between the two points when the predicted concept does exist in the space of concepts in the world, and is set to an arbitrarily high number (here, 500, which is higher than the maximum distance between any two points in space) when the predicted concept is some other word that does not fall in the concept category.
d(c_1, c_2) = \begin{cases} \sqrt{(c_{1x} - c_{2x})^2 + (c_{1y} - c_{2y})^2 + (c_{1z} - c_{2z})^2} & \text{for } c_1 \in C \\ 500 & \text{for } c_1 \notin C \end{cases}
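The sketch below directly implements this metric; the `concepts` dictionary, which maps every in-domain concept name to its point in the world (here, an RGB triple), is an illustrative stand-in for the actual dataset.

```python
# A sketch of the grounding-distance metric defined above.
import numpy as np

PENALTY = 500.0  # exceeds the RGB cube's diameter of about 442

def grounding_distance(predicted, truth, concepts):
    if predicted not in concepts:
        return PENALTY  # out-of-domain prediction (e.g., "cat")
    p = np.array(concepts[predicted], dtype=float)
    t = np.array(concepts[truth], dtype=float)
    return float(np.linalg.norm(p - t))

concepts = {"red": (255, 0, 0), "pink": (255, 192, 203)}
print(grounding_distance("pink", "red", concepts))  # about 279.4
print(grounding_distance("cat", "red", concepts))   # 500.0, out of domain
```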
D PROMPT SIZE
The size of the input prompt to a model depends on its maximum context length, which is a hyperparameter fixed before training the model to completion. The question-answer pairs that we use to teach models grounded concepts differ in their lengths based on the concept category. For example, since the spatial and cardinal concepts require grid worlds that could be up to 10 rows and columns wide, a prompt for these categories might contain a fewer number of samples. Since the colour groundings are 3-digit RGB codes, a larger number of samples can be given in a prompt. We report accuracy curves of models when given an increasing number of samples in the prompt in Appendix D. We see that past 20 samples for spatial terms, and 60 samples for colours, models have no significant increase in performance. When we report results in Tables ?? and 1, we report numbers for a fixed number of samples in a prompt (e.g., 20 vs. 60 respectively) for all models.
Interestingly, once we get past a certain number of samples in the prompt, there is no significant increase in model performance on the task. This result hints at two things. First, the larger models seem to learn a task, and generalise to new samples, with only a small number of samples when taught in a few-shot prompting regime. This seems to hold for the smaller models as well, i.e., although they do not learn the task well, increasing the number of samples within a prompt does not significantly increase model performance. Second, the significant difference in performance based on model size hints at the fact that the limiting factor for good performance is the number of parameters of a model. More concretely, the generalisation extent of each model seems to max out at some number of in-prompt samples, with a trend of increasing performance on increasing model size. Therefore, a larger model (than the ones we have here) might be required in order to achieve better grounded generalisation.
E MODEL GENERATIONS
We show example generations from the model in Table 8.
F PRE-TRAINING DATA
F.1 WHAT HAVE MODELS SEEN IN THEIR TRAINING DATA THAT MIGHT BE RELATED TO GROUNDING?
Pre-trained language models have been trained on large text corpora available on the internet—a diverse source of information that covers many domains. There is speculation that the remarkable generalisation performance of these models stems from having seen instances of the task or related information at some point in their training data. However, this data has not been made public, which makes this a hypothesis that is hard to confirm. In our domain of grounded concepts as well, it is unclear what aspects of similar data the model might have had access to during training, or what inductive biases from certain types of data it might have learnt, that allow it good performance on grounded tasks. We attempt to assess this in the following way. We consider all the concept words in every concept category, and all the world representations, and consider these the prompts for the models. For each word, we wish to assess the distribution of text generated, which should, in some sense, reflect the distribution of text seen during training. For example, for the world representations (i.e., a 2D array), we assess how often the model might use a spatial or cardinal term when simply prompted with an array, thus giving us insights into how often the training data might have contained such an array in close proximity to such a word. Similarly, for the colour domain, we evaluate how often the model predicts a correct colour name when prompted with an RGB code. We see that when models are simply prompted with a world representation, they tend to generate similar-looking world representations, instead of conceptual words (such as “left”) that might be associated with it. When prompted with RGB codes, the GPT-3 model only has a 7.3% accuracy of correctly generating the corresponding colour name in a 5-token sequence after the prompt.
Cardinal
                 Original  Rotated  Random
Top-1 Accuracy
  R-IV             0.00     0.00     0.00
  R-ID             0.13     0.13     0.13
  124 M            0.03     0.02     0.02
  355 M            0.03     0.04     0.03
  774 M            0.04     0.03     0.02
  1.5 B            0.07     0.06     0.05
  175 B            0.12     0.11     0.10
Top-3 Accuracy
  R-IV             0.00     0.00     0.00
  R-ID             0.13     0.13     0.13
  124 M            0.03     0.04     0.04
  355 M            0.06     0.05     0.04
  774 M            0.05     0.07     0.05
  1.5 B            0.11     0.10     0.07
  175 B            0.23     0.21     0.17
Table 9: Table shows evaluations in the cardinal directions domain, where we show models examples of single directions (north, south, east, west) and test them on compositions of directions (northeast, southeast, northwest, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds for each of the concept categories. We see that all models fail on this task i.e., when they have never seen compositions of terms before, they never predict them at test-time.
Cardinal
                 Original  Rotated  Random
Top-1 Accuracy
  R-IV             0.00     0.00     0.00
  R-ID             0.13     0.13     0.13
  124 M            0.11     0.10     0.05
  355 M            0.10     0.11     0.06
  774 M            0.13     0.12     0.08
  1.5 B            0.13     0.14     0.10
  175 B            0.30     0.29     0.08
Top-3 Accuracy
  R-IV             0.00     0.00     0.00
  R-ID             0.13     0.13     0.13
  124 M            0.09     0.09     0.08
  355 M            0.19     0.17     0.10
  774 M            0.17     0.15     0.11
  1.5 B            0.21     0.20     0.14
  175 B            0.60     0.61     0.09
Table 10: In this table, we show models examples of concepts within a sub-space (north, east, northeast) and test them on concepts in a different sub-space (south, west, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds for each of the concept categories. We see that the largest model can indeed perform on this task i.e., even on only having seen a sub-space of the world (along with compositions) it can generalise to a new sub-space and compositions.
G EXPERIMENTS WITH MASKED LANGUAGE MODELS
The results in the main paper focus on GPT-2 and GPT-3 language models, i.e., autoregressive left-to-right language models that show impressive capabilities for in-context learning. Here, we provide comparisons to a masked language model, specifically a BERT model (Devlin et al., 2018). For our implementation we use a BERT-base model containing 110 million parameters. Similar to the GPT-x experiments for in-context learning with prompts, we “teach” the BERT model the space of concepts in the following three settings; a code sketch of the first setting follows the list below.
• BERT zero-shot (no-context) In this setting, for every sample in the test set (e.g., an RGB colour), we create an input prompt that masks the name of the colour, and ask the model to fill in the colour name. The prompt, therefore, would look like RGB: (255, 0, 0) Colour: <mask>. For the same test set, we evaluate the Top-1 and Top-3 accuracy of the model filling in the mask with the correct colour, using the same substring-match metric.
• BERT zero-shot (in-context) In this setting, we add several samples into the context of the model, and ask the model to fill in the masked token with the colour name, for all samples in the test set. Since the context-width of BERT is smaller than the GPT models, we can only include a smaller number of samples (around 7).
• BERT zero-shot (fine-tuned) In this setting, we fine-tune the BERT model on 67 samples (the same number of samples that the GPT models got as input in a prompt) and then report performance when tested on all the colours in the test set.
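A minimal sketch of the zero-shot (no-context) setting, using the Hugging Face fill-mask pipeline, is below. Note that BERT's mask token is "[MASK]" (taken from the tokenizer here rather than hard-coded), and that a single mask slot can only recover single-token colour names, so multi-word names such as "deep tuscan red" cannot be produced exactly in this setting.

```python
# A sketch of the BERT zero-shot (no-context) colour-naming setting.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = f"RGB: (255, 0, 0) Colour: {fill.tokenizer.mask_token}"
for candidate in fill(prompt, top_k=3):  # three most probable fillers
    print(candidate["token_str"], candidate["score"])
```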
Table 13 shows results of the BERT models when tested on a held-out sub-space of colours and Table 14 shows results when tested on a randomly held-out split of colours.

Reviewer Questions
1. What is the focus of the paper regarding text-only input and grounded models?
2. What are the strengths of the proposed approach, particularly in its explanation and coherence?
3. What are the weaknesses of the paper, especially regarding its statements and assumptions?
4. How does the reviewer assess the significance of the work and its contributions to the field?
5. Are there any suggestions for improvements or further research in the area?

Summary Of The Paper
The authors present their work looking at how text-only input can create what they call a "grounded" model. The results support the idea that a small input of text-only data can give rise to the models behaving in such ways as to generalise over things like left and right or colours. This indicates the perhaps unsurprising conclusion that language is able to, on a meta level, encode some aspects of groundedness, as defined by the authors, and the perhaps more impressive conclusion that the models can indeed tap into the latent structure found in language using a few examples and thereby generate useful isomorphisms.
Review
I think the modelling work presented is useful and interesting — and importantly: explained well and coherently. However, I find many statements not thought out as much as they deserve. For one, colours are as relative as left/right. Something red looks dramatically different under white light than it does under other colours of light, and so on. Also see visual illusions where the same shade of grey looks like different colours as a function of where it is in the image. This is why categorical cognition plays a role in colours in humans. The authors could at minimum note this, if they wish. Another example is on the equivocation of "text" and "language" throughout the manuscript. I also find it a bit strange to say "north" in humans is not grounded. I assume it is always grounded to where the Sun is and what hemisphere we are in and so if somebody were to lie to us about where north is, we would eventually figure it out when we saw the movement of shadows, etc. There is a lot of research on what humans indeed do here, so some literature review could be done if claims need to be made about actual human behaviour.
Overall, however, I find the modelling here very interesting and compelling. I like that the authors explored a pre-trained model. Perhaps providing a coherent definition of "grounded" versus "ungrounded" might be useful to help formalise the distinction as used by the authors. |
ICLR | Title
Mapping Language Models to Grounded Conceptual Spaces
Abstract
A fundamental criticism of text-only language models (LMs) is their lack of grounding—that is, the ability to tie a word for which they have learned a representation to its referent in the non-linguistic world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual structure of language, as demonstrated by their ability to answer questions, generate fluent text, or make inferences about entities, objects, and properties that they have never physically observed. In this work we investigate the extent to which the rich conceptual structure that LMs learn indeed reflects the conceptual structure of the non-linguistic world—which is something that LMs have never observed. We do this by testing whether the LMs can learn to map an entire conceptual domain (e.g., direction or colour) onto a grounded world representation given only a small number of examples. For example, we show a model what the word “left” means using a textual depiction of a grid world, and assess how well it can generalise to related concepts, for example, the word “right”, in a similar grid world. We investigate a range of generative language models of varying sizes (including GPT-2 and GPT-3), and see that although the smaller models struggle to perform this mapping, the largest model can not only learn to ground the concepts that it is explicitly taught, but appears to generalise to several instances of unseen concepts as well. Our results suggest an alternative means of building grounded language models: rather than learning grounded representations “from scratch”, it is possible that large text-only models learn a sufficiently rich conceptual structure that could allow them to be grounded in a data-efficient way.
1 INTRODUCTION
Large pre-trained language models (LMs) trained on text corpora have shown remarkable progress on a range of natural language understanding tasks (Radford et al., 2019; Brown et al., 2020). Such models have demonstrated their ability to generate fluent dialogue (Brown et al., 2020), make commonsense inferences Zellers et al. (2019), and reconstruct taxonomies and word relations (Chen et al., 2020). However, it has been argued that true meaning cannot be learned from the form of language alone (i.e., from text) because it is a word’s use in the non-linguistic world that imparts it meaning (Bender & Koller, 2020; Bisk et al., 2020). For example, although LMs might learn from textual co-occurrences that the words north and south are opposites, the grounded meaning of these words, i.e., the direction that you should travel if you are told to go north, is something to which these models, by definition, do not have access during training.
While it is indisputable that text-only models do not learn representations of concepts that are grounded in the non-text world, it is possible for the structure of relations between concepts in text form to be identical to what a grounded model would learn. In principle, it is therefore possible for a text-only model’s conceptual space to be isomorphic to the “true” (i.e., grounded) conceptual space (Merrill et al., 2021). In this work, we investigate whether this is the case by asking whether models can learn to ground an entire domain (e.g., direction) after grounding only a subset of the points in that domain (e.g., left). Specifically, for generative LMs that have been trained only on large text corpora, we “orient” the models by showing them how some word forms (that they have learned during training) are used in simple text worlds—for example, what the direction north maps
to in a textual representation of a grid world (see Figure 1). We then evaluate two types of generalisation. First (§3.1), we evaluate generalisation to unseen worlds. For example, if the model has seen several realisations of the word north in different grid worlds, can it correctly identify north in an unseen world (e.g., one of a different size or shape)? Second (§3.2), we evaluate generalisation to unseen but related concepts. For example, if the model has been shown grounded representations of north and east, can it correctly identify south and west, even though it was never shown them? We find that although the small language models (GPT-2 models that contain on the order of 100M parameters) cannot perform either generalisation well, the largest model (a GPT-3 model containing 175B parameters) can indeed learn groundings in the conceptual worlds we build. We analyse the predictions and errors made by models (§3.3) and find that the errors made by the small models are often due to a failure to recognize the domain, and thus generating random words by default. In contrast, the errors made by the largest model are often intuitive, e.g., predicting in-domain concepts that are reasonable substitutes for the target (e.g., maroon versus dark red).
2 EXPERIMENTAL DESIGN
2.1 MODELS
We test five autoregressive Transformer language models (Vaswani et al., 2017) of varying size, specifically the GPT-2 (Radford et al., 2019) and GPT-3 (Brown et al., 2020) models. Our smallest model contains 124M parameters, and the others follow increasing model sizes (355M, 774M, 1.5B and 175B parameters). All models are pre-trained on differently filtered versions of the OpenAI Web-Text dataset Radford et al. (2019), composed of 40GB of English web text available on the internet. We generate up to 5 tokens per prompt and, to improve the robustness of our analyses, generate 3 samples per prompt. We use a temperature of 1 during generation and sample from the softmax probabilities produced at each time step using nucleus sampling (Holtzman et al., 2019) with p=0.85. We include more detail on the models and their training data in Appendix A.
2.2 IN-CONTEXT LEARNING
Several studies (Brown et al., 2020; Reynolds & McDonell, 2021) have shown that instead of finetuning generative LMs–i.e., a process that updates parameters learned by the model during pretraining–it is possible to achieve competitive performance by giving the model a small number of training examples within the prompt. This is often referred to as “in-context learning” or “fewshot prompting”. Specifically, a prompt includes n task examples that include a question prefix (e.g., “World:”) followed by the question, and an answer prefix (e.g., “Answer:”) followed by the answer to the question. After giving the model n examples in this manner, the prompt ends with a new question and only an answer prefix after which the model is expected to generate an answer to the last question, following the prompt format it has seen. By enumerating over all questions in the test set, we can obtain a model-generated answer for every test set question that we wish to evaluate. There are no gradient updates to any model parameters using this approach.
2.3 GROUNDED CONCEPT DOMAINS
The models that we test, by construction, can receive only text inputs. Thus, we focus on a set of grounded domains for which it is possible to faithfully represent the grounded meaning in text form. We briefly describe these domains below, summarise them in Figure 1, and describe in detail, the prompt creation process for each generalisation task in Sections §3.2 and §3.1. We discuss the possibility of expanding this set of concepts in future work in Section §4.
Spatial Terms We consider 6 spatial concepts: left, right, up, down, top, bottom. Each of the above concepts can be represented in a grid world using the position of a special character (here, a ‘1’) in the world. To do this, we create grid world environments of varying sizes (where the number of rows and columns ranges from 1 to 8), where each world consists of ‘0’s and a single ‘1’.
Cardinal Directions We consider eight cardinal directions: north, south, east, west, northeast, northwest, southeast, southwest. These are similar to spatial terms, except that they include compo-
Published as a conference paper at ICLR 2022
FINAL
sitional terms (e.g., northeast). We use grid-worlds of the same format as spatial terms to represent these concepts.
Colour Terms We consider colour terms in a three-dimensional space, using a dataset of 367 RGB colours (Abdou et al., 2021) that contains colour names (e.g., red, cyan, forest green) each associated with an RGB code (e.g., (255, 0, 0)). Therefore, the world representation in this case is not a grid-world, but an RGB code associated with every colour name. Figure 1 shows example RGB codes and colours that serve as part of a prompt given to the models we test.
2.4 ROTATED WORLDS TO CONTROL FOR MEMORISATION
Motivation The GPT-x models that we use have been trained on the CommonCrawl corpus (Radford et al. (2019)), a collection of documents that contains web text freely available on the internet. Since we provide instantiations of grounded concepts in text form, it is very plausible that the domains described above have been encountered verbatim during training. For example, for spatial terms such as left, a model might have seen instances of matrices and linear algebra terms with the word left in close proximity; for colour terms, tables that map RGB colour codes to colour names are pervasive in web-text. We therefore include a control task in our experimental setup such that the model cannot succeed using simple memorisation. Rather, success on a task requires the model to truly perform a conceptual mapping between the ungrounded and grounded representations.
Isomorphism as a Control We use the concept of isomorphism to control for memorisation. Intuitively, imagine a situation where you are lost in the woods. Once pointed in the direction of north, you instantly know which way is south. However, this ability is not dependent on having been correctly pointed north–if someone were to incorrectly point east and tell you this was north, you would readily infer west to be south. This is because your reasoning depends on your knowledge of the relation between north and south, and between north and the world, rather than on having memorized the “true” grounding of each concept independently.
By the same logic, if a model is learning a grounding function, this should be dependent on the world in which it is being grounded. For example, in different worlds where the word red grounds to different points in space, the word blue, by analogy, shares a fixed conceptual relation to the word red. Therefore, it should ground in correspondingly equidistant ways in the two worlds. A model’s ability to learn two different grounding functions, f vs. g, should not depend on what the actual points ground to, as long as the structural relations between concepts in the space are preserved. Further, this should hold for all such isomorphic transformations that preserve the structure of the space, and importantly, it should not hold for random perturbations that distort the structure of the space, since such distortions would break the assumption that the relation between red and blue in ungrounded space is analogous to that in grounded space.
Implementation In the colour domain, since colour concepts exist in a 3D world of RGB codes, we rotate each point around a fixed axis by a certain degree to create a new isomorphic world. We repeat this control three times (for 90◦, 180◦ and 270◦ rotations) and average over the rotations in our evaluations. For cardinal directions, we rotate each cardinal concept by 90◦ in two dimensions, and do this three times and average over rotations. Since the spatial terms exist as pairs, we simply
swap the groundings of alternate terms (e.g., left and right). For random worlds that do not preserve the structure between word forms, we randomly assign a concept name (e.g., red) to a point in the world, and we do this for all concept names and categories to obtain a random world for each. Figure 2 shows example transformations of colours on rotating by 90◦, as well as on randomly assigning points.
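A sketch of the two controls in the colour domain follows. The paper does not specify the rotation axis or whether points are re-centred before rotating, so the axis choice and the cube-centre pivot below are assumptions; rotated codes may also leave the [0, 255] cube, and the paper does not say how this is handled.

```python
import numpy as np

def rotate_rgb(point, degrees, axis=0):
    """Rotate one RGB point about a fixed colour axis (here axis 0 = R,
    an assumption) around the centre of the RGB cube (also an assumption)."""
    t = np.deg2rad(degrees)
    c, s = np.cos(t), np.sin(t)
    mats = {0: [[1, 0, 0], [0, c, -s], [0, s, c]],
            1: [[c, 0, s], [0, 1, 0], [-s, 0, c]],
            2: [[c, -s, 0], [s, c, 0], [0, 0, 1]]}
    centre = np.array([127.5] * 3)
    return np.array(mats[axis]) @ (np.asarray(point, float) - centre) + centre

def random_world(names, points, seed=0):
    """Structure-breaking control: shuffle which name maps to which point."""
    rng = np.random.default_rng(seed)
    return dict(zip(names, [tuple(p) for p in rng.permutation(np.asarray(points))]))

print(rotate_rgb((255, 0, 0), 90))
```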
2.5 EVALUATIONS
Experimental Logic We report model performance in three settings: the original (“true”) world (e.g., that in which red maps to the actual RGB code for red), an average over three rotated worlds (e.g., worlds in which red maps to some other RGB code, but relations between colours are preserved), and a random world (in which relations between mappings are not preserved). If a model is performing the conceptual mapping in the desired way, we expect that performance should be high in the true world and that there should be no significant degradation in performance when moving to rotated worlds. We also expect that performance should be low in the random world.
Metrics When given a prompt, a generative LM is free to generate any number of tokens until having generated the EOS token that halts further generation. Since classification tasks usually correspond to labels that contain only a few words, the standard approach is to let the model generate an entire sequence of text and to then cut off the generation at the first n tokens (where typically n < 10). From the prompting mechanism, the model should learn to follow the prompt format to generate one label (e.g., instead of every label in succession), and since it does not receive any gradient updates during this learning, there is no incentive for it to predict all related labels within an n-token generation. We see that this is true in practice and show example generations in Appendix E. We set n = 5 and then report the following metrics.
TOP-1 ACCURACY If the generated answer (i.e., the first n tokens of the full generation) matches the ground-truth answer or a substring thereof, the model gets perfect accuracy. If the ground-truth answer does not exist in the generated answer, or exists in the generation only outside of the cutoff limit, the model gets a 0 accuracy. For example, for the ground truth answer deep tuscan red, if the model-generated answer is tuscan red or red the model gets a perfect accuracy, but if the model-generated answer is deep red or wine or vermilion, the model gets an accuracy of 0. We also compute the same metric using exact match (i.e., the model is only correct if it generates exactly deep tuscan red). We find the values are lower but the trends are the same; see Appendix ??.
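A sketch of this metric is below; for readability we approximate the model's first n tokens with whitespace tokens, whereas the paper's cutoff operates on BPE tokens.

```python
def top1_substring(generation, truth, n=5):
    """1 if the generated answer (first n tokens) is a substring of the
    ground-truth label, else 0. Whitespace tokens stand in for BPE tokens."""
    answer = " ".join(generation.split()[:n]).strip().lower()
    return int(answer in truth.lower())

assert top1_substring("tuscan red", "deep tuscan red") == 1
assert top1_substring("red", "deep tuscan red") == 1
assert top1_substring("deep red", "deep tuscan red") == 0
```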
TOP-3 ACCURACY This metric is analogous to Top-1 except that instead of only considering the most probable generation from the model, we collect the second and third most probable answer sequences as well. The model gets a perfect accuracy if the correct answer exists in any of the three generated answers. If the correct answer does not exist in any of the 3 generated answers, or exists in any of the generations outside of the cutoff limit, the model gets an accuracy of 0. Again, see Appendix ?? for results using exact match.
GROUNDING DISTANCE For analysis purposes (§3.3), we wish to assess how far off models are in cases where they are wrong. For this, we need to quantify the distance between the model’s predicted answer and the ground truth answer. For example, in the colour domain, the distance between two points can be computed as the Euclidean distance between two RGB codes. For every answer generated by the model (e.g., the word pink in the colour domain), if the answer is an acceptable grounded term that exists in the world, we can compute its distance to the true grounding (e.g., the colour red). However, if the generated answer was an unrelated word (e.g., the word cat) that does not exist in the same domain, no distance can be computed in the world. Therefore, we calculate a distance metric as the Euclidean distance between two points in space when the generated answer falls in the domain. When the generated answer does not fall in the domain, we set the distance to a number significantly higher than the largest distance between two in-domain concepts. We provide equations and details on calculation of this metric in Appendix C.
Baselines Given that the language models are free to generate any word that exists in their vocabulary as a potential answer, we choose two random baselines over vocabulary words against which to compare model performance. We use R-IV (i.e., random in-vocabulary) to denote a baseline that randomly selects from among all words in the model’s vocabulary. We use R-ID (i.e., random in-domain) to denote a baseline that randomly selects from amongst only the in-domain words (e.g., from colour terms); this is 6, 8 and 367 words respectively for the spatial, cardinal and colour categories. Note that the generative LMs do not have such a domain restriction over words, as they are free to choose any token from the full vocabulary (like R-IV).
3 RESULTS
3.1 GENERALISATION TO UNSEEN WORLDS
Our first investigation looks into how well models generalise known concepts to unseen worlds. For example, if a model has seen a few examples of a concept (such as left) depicted in some grid worlds, can it correctly identify an instance of left in a different grid world (e.g., one with a different size or orientation)? Note that we can only conduct this type of evaluation in the spatial and cardinal domains, since, for the colour domain, there is only ever one world, with one grounding for each concept (e.g., the colour red has exactly one grounding to a 3-digit RGB code in a 3-D RGB space).
Data We create prompts that include 20 examples of grounded concepts in a set of grid worlds. For each domain (e.g., cardinal directions that contain 8 concepts, and spatial terms that contain 3 pairs of 2 concepts), we include a (roughly) equal sample of concepts among these 20 examples. Then, we append a held-out grid world to the end of the prompt and evaluate whether or not the models generate the correct concept label for these unseen worlds. Figure 3 shows example prompts
given to the model and example generations from three different models. We report results averaged over all generations in Table 1. In general, we see the desired trend in which model performance on the original and rotated worlds is well above that on the random world. We do not see consistent or significant performance degradation when moving from the original to the rotated world, suggesting performance is not due to simple memorisation. Comparing across models, we see that the smaller models struggle to even learn concepts that were taught to them in a few-shot manner. For example, for the spatial category, the performance of the smallest model is below that of a baseline which guesses randomly among in-domain words, suggesting the model even fails to learn the general domain of the task (discussed more in Section 3.3). In contrast, the largest model (GPT-3) has a 45% Top-1 accuracy and a 76% Top-3 accuracy for the spatial category.
3.2 GENERALISATION TO UNSEEN CONCEPTS
Our primary interest here is in the model’s ability to map its ungrounded conceptual structure to a grounded world. Specifically, we want to see that the model is able to ground an entire conceptual domain when only taught how to ground a small subset of the points in that domain. To test this, we show models example concepts within a sub-space of the domain (e.g., north, east), while holding out other concepts (e.g., south, west). We then test them on instances of held-out concepts. Figure 6 depicts this setup in the colour domain: we train the model using primarily shades of red, but test on shades of blue.
Data To create sub-spaces for the colour domain, we create prompts that contain 3 primary, 3 secondary colours, and 57 other colours within a sub-space, determined in the following way. For a certain colour (e.g., red) we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour. We create 6 such sub-spaces, centered around each of the primary and secondary colours, and report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. For the cardinal directions, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in an entirely different sub-space (e.g., south, west, southwest). The training split therefore contains worlds annotated with one sub-space, and we test on the remaining held-out concepts. We do this for all sub-spaces and average over them.
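A sketch of the sub-space construction for colours is below; the dataset dictionary and the exact sampling of the in-prompt colours are placeholders, while the distance threshold of 150 follows the text.

```python
import numpy as np

def subspace_split(colour_rgb, centre, radius=150.0):
    """Partition colours into a sub-space within `radius` of a centre
    colour (candidates for the prompt) and the held-out remainder."""
    inside, outside = {}, {}
    for name, rgb in colour_rgb.items():
        d = np.linalg.norm(np.asarray(rgb, float) - np.asarray(centre, float))
        (inside if d <= radius else outside)[name] = rgb
    return inside, outside

colours = {"red": (255, 0, 0), "dark red": (139, 0, 0), "blue": (0, 0, 255)}
in_prompt, held_out = subspace_split(colours, centre=(255, 0, 0))
```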
We report results on all concept categories in Table 2. As before, we see that models do not appear to be exploiting simple memorisation (evidenced by similar performance in original vs. rotated worlds) and that only the largest models appear capable of getting reasonable performance on the task. That said, the largest GPT-3 model achieves impressive results given the difficulty of the task. For example, it achieves over 40% Top-1 accuracy in the colour domain, which requires generating a label like violet despite having never seen such a label during training (Figure 4).
3.3 ERROR ANALYSIS
Given the results above, we investigate “how wrong” models are when they fail to produce the expected ground-truth label. One clear trend of the large model is the ability to generate “on topic” (i.e., in-domain) responses, regardless of whether or not those responses are correct. For example, if the expected response to an input is “left”, a model which produces right is wrong in a very different way than a model that produces nonsensical outputs such as “[0[0” or function words such as “the” (see Figure 3). We see that the largest 175B parameter model almost always produces in-domain answers. We evaluate this by checking whether the generation lies in the set of related words for that category (e.g., all colours, spatial, or cardinal words, respectively). We see that the smaller models fail to do this. That is, their generations tend to be unrelated words that might have had high prominence (e.g., function words). On evaluating whether generations are “in-domain”, we see that the smallest model generates in-domain answers only 53% of the time, while the largest does so 98% of the time.
Second, we evaluate grounding distance (§2.5) to measure the degree of correctness of responses. In the colour domain, the colours dark red and wine are close enough in space that they might be intuitive alternate answers for one another. However, our top-1 and top-3 metrics only assess string matches and do not account for this. Thus, we look at the distance between the colour denoted by the predicted label (when it exists, i.e., when the model generated a legitimate colour name) and the colour the model was asked to label. The lower this distance is, the “less wrong” the model’s prediction is.
Table 3 reports evaluations measured by grounding distance for the colour domain. For every test instance, we compute the distance between the model predicted grounding and the true grounding as defined in Equation C.3. We then average over all computed distances to report one number that tells us how close, on average, model predictions are to the true grounding in the world. We show example visualisations of colours in RGB space that lie within a certain distance threshold of each other. We see, for the largest model, the distances between model-predicted groundings from the true grounding are significantly lower than random. Such a result suggests that the model’s errors are often justifiable and the scores given by our string-matching metrics might be an underestimate of the model’s true performance.
4 DISCUSSION
Our empirical results suggest that very large LMs (specifically, GPT-3), even when trained only on text, learn a conceptual space that can, at least in some settings, be “grounded” using only a small number of training examples. The fact that these models succeed even in isomorphic rotated worlds suggests that these models are not succeeding via naive memorisation. Rather, this suggests that they may be exploiting something about the conceptual structure of the space learned from text in order to map onto a new space that was not explicitly encountered during training.
A major limitation of our approach is that there are some grounded concepts (e.g., visual and sensory inputs) that cannot be easily encoded in text form. By construction, the LMs that we use are restricted to text-only inputs, thus our focus is on domains (e.g., colours and directions) that have a well-defined textual representation. This is only a small set of all the potential grounded concepts we would wish to teach LMs. Although many forms of data can be coerced into text format (e.g., we represent colour using discrete digits to represent RGB space), complex concepts may lose fundamental aspects of their meaning when represented in this way. For example, for colour, a coarse-grained notion of numeric proximity, derivable from text (Wallace et al., 2019; Naik et al., 2019), may be sufficient to differentiate the concepts we explore, but for more complex visual inputs (e.g., the output of a CNN image encoder), a text-based numeric representation is unlikely to capture the necessary degree of nuance. Future work would need to consider ways of adapting GPT-3-like models to accept non-textual inputs while still exploiting the text-based conceptual structure.
If such limitations were addressed, our results are suggestive of a potentially promising way in which text-only training could support general purpose, grounded models of meaning. Specifically, our results imply that the conceptual space a model learns from text might be nearly isomorphic to
what it would learn from interacting in a grounded world, and that models can be taught to map between those conceptual spaces without requiring explicit grounding for every concept. This is exciting, as domain-general text corpora are readily available, while domain-general multimodal corpora (e.g., containing sufficient information on abstract concepts such as emotions (happy) or time (used to)) might be difficult or impossible to collect. If models like GPT-3 could be adapted to receive non-text prompts (as discussed above), our results suggest that the rich conceptual structure such models learn from text could be bootstrapped into powerful grounded models of language.
5 RELATED WORK
There is a significant amount of work that focuses on understanding how LMs represent and reason about concepts, as well as work that directly attempts to build models that take text inputs and ground them to elements in the world. We situate our work within these two bodies of literature: one that investigates how LMs understand linguistic phenomena and word meaning, and another that attempts to situate language in models of the world. We describe each body of work below.
Meaning and Understanding in LMs With the advent of large LMs of increasing orders of magnitude, there has been speculation on the capabilities of such models, and whether they truly understand the meaning of the words they are learning representations for. Several works that attempt to probe linguistic phenomena in LMs show that the representations learned by such models encode syntactic dependencies and coreference information (Tenney et al., 2019) and word-sense information (Chen et al., 2020). Work that investigates the ability of models to form word associations (Hwang et al., 2020) finds that large LMs can indeed perform such a task, suggesting that pretrained LMs not only recognize that entities are related, but can differentiate how they are related. Especially relevant to our work is recent work that investigates alignment of language models to colours (Abdou et al., 2021) or state changes (Li et al., 2021). Our work is complementary to this prior work, and we specifically ask whether the text space can be reliably mapped onto the grounded space.
Natural Language Grounding There is an increasing amount of work, usually at the intersection of NLP and fields like vision and reinforcement learning, that aims to use natural language to instruct agents about aspects of the world. In the case of vision, this could be to learn correspondences between language descriptions and pixels in an image (Eichenberg et al., 2021; Tsimpoukelli et al., 2021), or in the case of RL, to build agents that understand natural language instructions in order to take actions in a world, for example to navigate to a goal (Artzi & Zettlemoyer, 2013; Patel et al.), or to solve tasks in different languages (Ku et al., 2020). Most of these approaches train models from scratch, usually with inputs that contain both textual information and grounded world information. Our work is different in that we take an LM that was previously trained only on text, and attempt to teach it a concept in the world without re-training it. With only a few samples of what the concept grounds to, we investigate how well large LMs can use the structure of language and associations between word forms in order to generalise to grounded concepts.
6 CONCLUSION
This work investigates the extent to which large language models, trained only on text, can be taught to map previously learned word forms onto conceptual worlds. We investigate several generative language models in colour and direction domains, represented in text form as RGB codes and discrete grid worlds given to the model as input. With only a few examples of such grounded instances, although smaller models struggle to generalise from text to grounded concepts, we see that the largest model does indeed learn the space of concepts that we test it on. We analyse where models fail and see that the smallest models often produce random, unrelated outputs, but the errors made by the largest model are quite intuitive, for example, predicting colours very close in space to the true grounding. We discuss the limitations of focusing on text-only inputs to teach models grounded concepts, as well as the implications of such work for allowing large language models to be mapped, in a data-efficient way, to the grounded world.
APPENDIX
We provide, as supplementary material, additional information about the data generation, models used, examples of model generations, as well as additional results across all models.
A MODELLING DETAILS
We use a GPT-3 model (Brown et al., 2020) and four GPT-2 (Radford et al., 2019) models from the Hugging Face Transformers (Wolf et al., 2019) library. Each of these is a pretrained autoregressive transformer model; the GPT-2 models are trained on the OpenAI WebText corpus, containing around 8 million documents. The top 15 domains by volume in WebText are: Google, Archive, Blogspot, GitHub, NYTimes, Wordpress, Washington Post, Wikia, BBC, The Guardian, eBay, Pastebin, CNN, Yahoo!, and the Huffington Post. Individual model parameters and layers are shown in Table 5. The pretrained models use byte-pair encoding (BPE) tokens (Sennrich et al., 2015) to represent frequent symbol sequences in the text, and this tokenisation is performed on all new input prompts to generate text from the model. We report the hyperparameters used by the pretrained model in Table 6.
Table 5: Parameters and layers of each model.
Table 6: Hyperparameter selection.
B DATA GENERATION
In this section we describe how we create prompt data for the concept categories we wish to evaluate. We describe what the “worlds” look like for each category, as well as details on the generation of such worlds and their division into train and test splits, for colours in Section B.1 and for both spatial and cardinal concepts in Section B.2.
B.1 COLOUR CONCEPTS
We draw from an existing dataset containing RGB codes associated with colour names, for 367 colours. In this concept category, a grounded realisation of a colour in the world is simply its RGB code, i.e., the point that it grounds to in 3-dimensional colour space. Therefore, all grounded examples in prompts follow the format of RGB: (x, y, z) followed by Colour: concept name, where the items in red are replaced with instances of RGB codes and names from the dataset of colours we use (Abdou et al., 2021). This gives us a total of 367 samples of RGB codes paired with colour names. We create training and testing splits for different generalisation evaluations in the following way.
B.1.1 GENERALISATION TO UNSEEN CONCEPTS
Random Split We create prompts to perform this experiment in two ways. The first, as reported in §3.2, creates a “train split”, i.e., samples within the prompt, by first selecting the 3 primary and 3 secondary colours to always be in the prompt. We then sample 64 other colours from the set of colours to be part of the train split. The prompt therefore contains 70 samples of the question and answer prefixes followed by the RGB codes and colour names. For every sample in the test set, we create a new prompt that appends a question prefix and the RGB code of that sample to the end of the prompt, followed by the answer prefix (with no answer). For each prompt, the model is then required to generate an answer. Figure 6 shows the random sample of colours in the prompt and we report results on these in Appendix Table F.1.
Sub-space Split We then perform another experiment to evaluate a model’s ability to learn concepts within a sub-space, and generalise to a whole new sub-space. Here, the prompt contains 3 primary and 3 secondary colours, as well as 57 other colours that we select in the following way. For each of the 6 primary and secondary colours, we consider a sub-space in the world to be a space of colours that lies within a Euclidean distance of 150 from that colour.1 Since the model has seen the 6 primary and secondary colours, as well as a certain space of the colour world, we wish to evaluate how well the model can generalise to a new sub-space of the world, by using the colour word-form associations. We then report generalisation results averaged over the 6 sub-spaces. Figure 4 shows the sample of colours within a sub-space centered around the colour red, that serve as training samples within the prompt. We report results on these in the main paper in Table 2.
1We show visualisations of such colour sub-spaces in the main paper in Figure 4.
B.2 SPATIAL AND CARDINAL CONCEPTS
We consider a “world” to be a 2D matrix filled with 0s and containing exactly one 1, where the position of the 1 (i.e., the determining object) refers to the concept of interest (e.g., a spatial location like “left” or a cardinal direction like “north”). We create grid worlds filled with 0s and one 1, with row and column counts ranging from 1 to 8, giving us 672 total grid worlds.2 For each grid world, all grounded examples follow the format of World: [1. 0. 0. 0.] followed by Direction: concept name, where the items in red are replaced with instances of different grid worlds and corresponding concepts, based on the location of the 1. We create training and testing splits for different generalisation evaluations in the following way.
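A sketch of the World/Direction example format described above is below; the exact whitespace and row layout of the authors' prompts are assumptions.

```python
import numpy as np

def world_example(world, concept):
    """Format one grounded example: the grid world, then its concept label."""
    rows = "\n".join(np.array2string(row) for row in world)
    return f"World:\n{rows}\nDirection: {concept}"

w = np.zeros((1, 4)); w[0, 0] = 1.0
print(world_example(w, "left"))
# World:
# [1. 0. 0. 0.]
# Direction: left
```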
B.2.1 GENERALISATION TO UNSEEN WORLDS
Our primary investigation here is to assess whether the models can generalise to new worlds that differ in size, or in the location of the determining object. Each of the 672 worlds we create differs from the others in at least one aspect: either the size of the world is different or, when worlds are of the same size, the 1 is in a different position. Therefore, we randomly sample 20 worlds that contain instances of concepts to serve as part of the prompt. The prompt therefore contains a question prefix followed by the grid world on a new line, followed by the answer prefix and the concept name. We ensure that there is a roughly equal distribution of concepts in the train split (e.g., if there are 8 concepts split over 20 samples, we ensure that 2 of each concept exist in the prompt, and then fill the remainder of the prompt by randomly sampling from the concepts). We then create a new prompt for every sample in the test set by appending the world representation and answer prefix to the end. We report results on this in the main paper in Table 1.
B.2.2 GENERALISATION TO UNSEEN CONCEPTS
Since we wish to assess a model’s ability to generalise to new concepts, here, we hold out concepts in the test set. Specifically, for every domain containing n concepts (e.g., 8 for cardinal directions), the train split contains an equal distribution of n − 1 concepts. We then test on samples that contain the last held-out concept, as well as the seen concepts, to assess how well the model has learned seen concepts and generalises to unseen concepts. We report results on this in Appendix Table 14, and Figure 7 shows an example split of data that is held out during test time. We note that this is a random split, unlike the sub-space split that specifically holds out a set of colours based on Euclidean distance.
B.2.3 GENERALISATION TO SUB-SPACES
Similar to the colours, for the cardinal directions we also report, in the main paper, results on a sub-space generalisation task. Specifically, we show models examples of concepts in one sub-space of the world (e.g., north, east, northeast) and then test them on concepts in a different sub-space (e.g., south, west, southwest). We do this for all sub-spaces and average over them. As seen in Table 11, the largest model can perform this task to some degree, achieving a 60% accuracy.
B.2.4 GENERALISATION TO COMPOSITIONS
Here, we specifically test model performance when holding out any instance of compositions of directions. Therefore, the training split contains examples of worlds annotated with the concepts north, south, east, west, and the test split contains examples of compositions of concepts, i.e., northeast, southeast, northwest, southwest. We report results in Table 10, and we see that here, even the largest model fails to perform the task. This means that when models have never seen any instance of a composition, they do not perform these compositions themselves. However, as seen in Table 10 and earlier results, when the training data contains some examples of compositions, models can indeed generalise to completely new compositions.
2We exclude grid worlds where the position of the 1 is in an ambiguous position for that concept category, e.g., directly in the center for a cardinal task, or in the middle row for a spatial task.
C EVALUATION METRICS
In this section we describe our evaluation metrics in detail, with examples of how certain generations might be scored by each metric.
C.1 SUBSTRING MATCH
For every test instance, there is exactly one string answer that is correct (e.g., the words “left” or “northeast” or “electric blue”). However, language models are free to generate any number of tokens given a prompt: they might generate exactly one token, or up to 100 tokens for any given prompt. We consider the first 5 tokens generated by the model to be the generated answer, and consider a substring-match metric to be one that checks whether or not the model-generated answer lies within the ground-truth answer. To make this clearer, we provide examples below, denoting a model-generated answer in blue and the ground-truth answer in green. By the substring metric, electric green for electric blue would give an accuracy of 0, whereas electric or blue would give an accuracy of 1. In practice, we do not see (the large) models generate half-answers (e.g., simply saying electric). Similarly, green for dark green has an accuracy of 1, but dark pink for dark green would have an accuracy of 0.
C.2 EXACT MATCH
For this metric, the model only gets a perfect accuracy if the model-generated answer exactly matches the ground-truth answer (after stripping both of any trailing whitespace). Therefore, saying green for dark green, or electric green for light electric green, would have an accuracy of 0.
C.3 GROUNDING DISTANCE
Equation C.3 shows our calculation of distances in the world. For a certain concept category C (for example, all colour names existing in the dataset), let c1 be the model-predicted concept name and c2 be the true grounding. The grounding distance is the Euclidean distance between the two points when the predicted concept exists in the space of concepts in the world, and is set to an arbitrarily high number (here, 500, which is higher than the maximum distance between any two points in space) when the predicted concept is some other word that does not fall in the concept category.
$$d(c_1, c_2) = \begin{cases} \sqrt{(c_{1x} - c_{2x})^2 + (c_{1y} - c_{2y})^2 + (c_{1z} - c_{2z})^2} & \text{for } c_1 \in C \\ 500 & \text{for } c_1 \notin C \end{cases}$$
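The metric above, sketched in code; the domain is given as a name-to-RGB mapping, and the 500 penalty follows the equation.

```python
import numpy as np

def grounding_distance(predicted, truth_rgb, domain_rgb, penalty=500.0):
    """Euclidean distance between the predicted and true groundings;
    a fixed penalty when the prediction is not an in-domain concept."""
    if predicted not in domain_rgb:
        return penalty                      # c1 not in C
    c1 = np.asarray(domain_rgb[predicted], float)
    return float(np.linalg.norm(c1 - np.asarray(truth_rgb, float)))
```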
D PROMPT SIZE
The size of the input prompt to a model depends on its maximum context length, which is a hyperparameter fixed before the model is trained. The question-answer pairs that we use to teach models grounded concepts differ in their lengths based on the concept category. For example, since the spatial and cardinal concepts require grid worlds that could be up to 10 rows and columns wide, a prompt for these categories might contain a smaller number of samples. Since the colour groundings are 3-digit RGB codes, a larger number of samples can be given in a prompt. We report accuracy curves of models when given an increasing number of samples in the prompt. We see that past 20 samples for spatial terms, and 60 samples for colours, models show no significant increase in performance. When we report results in Tables ?? and 1, we report numbers for a fixed number of samples in a prompt (20 and 60, respectively) for all models.
Interestingly, once we get past a certain number of samples in the prompt, there is no significant increase in model performance on the task. This result hints at two things. First, the larger models seem to learn a task, and generalise to new samples, with only a small number of samples when taught in a few-shot prompting regime. This seems to hold for the smaller models as well, i.e., although they do not learn the task well, increasing the number of samples within a prompt does not significantly increase model performance. Second, the significant difference in performance based on model size hints at the fact that the limiting factor for good performance is the number of parameters of a model. More concretely, the generalisation extent of each model seems to max out at some number of prompt samples, with a trend of increasing performance with increasing model size. Therefore, a larger model (than the ones we have here) might be required in order to achieve better grounded generalisation.
E MODEL GENERATIONS
We show example generations from the model in Table 8.
F PRE-TRAINING DATA
F.1 WHAT HAVE MODELS SEEN IN THEIR TRAINING DATA THAT MIGHT BE RELATED TO GROUNDING?
Pre-trained language models have been trained on large text corpora available on the internet, a diverse source of information that covers many domains. There is speculation that the remarkable generalisation performance of these models stems from having seen instances of the task or related information at some point in their training data. However, this data has not been made public, which makes this a hypothesis that is hard to confirm. In our domain of grounded concepts as well, it is unclear what aspects of similar data the model might have had access to during training, or what inductive biases from certain types of data it might have learnt, that allow it good performance on grounded tasks. We attempt to assess this in the following way. We consider all the concept words in every concept category, and all the world representations, and consider these the prompts for the models. For each word, we wish to assess the distribution of text generated, which should, in some sense, reflect the distribution of text seen during training. For example, for the world representations (i.e., a 2D array), we assess how often the model might use a spatial or cardinal term when simply prompted with an array, thus giving us insights into how often the training data might have contained such an array in close proximity to such a word. Similarly, for the colour domain, we evaluate how often the model predicts a correct colour name when prompted with an RGB code. We see that when models are simply prompted with a world representation, they tend to generate similar-looking world representations, instead of conceptual words (such as “left”) that might be associated with it. When prompted with RGB codes, the GPT-3 model only has a 7.3% accuracy of correctly generating the corresponding colour name in a 5-token sequence after the prompt.
Cardinal
                Original  Rotated  Random
Top-1 Accuracy
R-IV              0.00     0.00     0.00
R-ID              0.13     0.13     0.13
124 M             0.03     0.02     0.02
355 M             0.03     0.04     0.03
774 M             0.04     0.03     0.02
1.5 B             0.07     0.06     0.05
175 B             0.12     0.11     0.10
Top-3 Accuracy
R-IV              0.00     0.00     0.00
R-ID              0.13     0.13     0.13
124 M             0.03     0.04     0.04
355 M             0.06     0.05     0.04
774 M             0.05     0.07     0.05
1.5 B             0.11     0.10     0.07
175 B             0.23     0.21     0.17
Table 9: Table shows evaluations in the cardinal directions domain, where we show models examples of single directions (north, south, east, west) and test them on compositions of directions (northeast, southeast, northwest, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds for each of the concept categories. We see that all models fail on this task i.e., when they have never seen compositions of terms before, they never predict them at test-time.
Cardinal
                Original  Rotated  Random
Top-1 Accuracy
R-IV              0.00     0.00     0.00
R-ID              0.13     0.13     0.13
124 M             0.11     0.10     0.05
355 M             0.10     0.11     0.06
774 M             0.13     0.12     0.08
1.5 B             0.13     0.14     0.10
175 B             0.30     0.29     0.08
Top-3 Accuracy
R-IV              0.00     0.00     0.00
R-ID              0.13     0.13     0.13
124 M             0.09     0.09     0.08
355 M             0.19     0.17     0.10
774 M             0.17     0.15     0.11
1.5 B             0.21     0.20     0.14
175 B             0.60     0.61     0.09
Table 10: In this table, we show models examples of concepts within a sub-space (north, east, northeast) and test them on concepts in a different sub-space (south, west, southwest) that were never seen before. Rows show model sizes for GPT-2 models (124M to 1.5B) and the 175B GPT-3 model. Columns show metrics for the original, rotated and random worlds for each of the concept categories. We see that the largest model can indeed perform this task, i.e., even having only seen a sub-space of the world (along with compositions), it can generalise to a new sub-space and compositions.
G EXPERIMENTS WITH MASKED LANGUAGE MODELS
The results in the main paper focus on GPT-2 and GPT-3 language models, i.e., autoregressive left-to-right language models that show impressive capabilities for in-context learning. Here, we provide comparisons to a masked language model, specifically a BERT model (Devlin et al., 2018). For our implementation we use a BERT-base model containing 110 million parameters. Similar to the GPT-x experiments for in-context learning with prompts, we “teach” the BERT model the space of concepts in the following way.
• BERT zero-shot (no-context) In this setting, for every sample in the test set (e.g., an RGB colour), we create an input prompt that masks the name of the colour, and ask the model to fill in the colour name. The prompt, therefore, would look like RGB: (255, 0, 0) Colour: <mask>. For the same test set, we evaluate the Top-1 and Top-3 accuracy of the model filling in the mask with the correct colour, using the same substring-match metric (a minimal sketch of this setting follows this list).
• BERT zero-shot (in-context) In this setting, we add several samples into the context of the model, and ask the model to fill in the masked token with the colour name, for all samples in the test set. Since the context width of BERT is smaller than that of the GPT models, we can only include a smaller number of samples (around 7).
• BERT zero-shot (fine-tuned) In this setting, we fine-tune the BERT model on 67 samples (the same number of samples that the GPT models got as input in a prompt) and then report performance when tested on all the colours in the test set.
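The no-context setting sketched with the Hugging Face fill-mask pipeline; note that BERT's mask token is [MASK], and that multi-token colour names would need multiple masks, which this sketch ignores.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
prompt = "RGB: (255, 0, 0) Colour: [MASK]."
for pred in fill(prompt, top_k=3):  # top 3 fills, for Top-3 accuracy
    print(pred["token_str"], round(pred["score"], 3))
```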
Table 13 shows results of the BERT models when tested on a held-out sub-space of colours and Table 14 shows results when tested on a randomly held-out split of colours. | 1. What is the focus of the paper regarding large pre-trained language models?
2. What are the strengths of the proposed approach, particularly in terms of empirical results and experimental design?
3. Do you have any concerns or worries about potential confounds in the experiments?
4. How do the authors address the issue of confounding variables in their experiments?
5. What are your thoughts on the possibility of huge parameter language models inducing mechanisms for performing spatial rotations on subspaces?
6. Are there any limitations or areas for improvement in the color experiment?
7. Why did the authors choose to insert 67 other colors into the color prompts?
8. Can you provide more information on how grids are incorporated into the model?
9. Were there any typos or minor errors in the paper that caught your attention? | Summary Of The Paper
Review | Summary Of The Paper
This work seeks to probe large pre-trained language models for indications that they have induced a conceptual space that is structured similarly to one constructed from interactions with the real world, despite being trained solely from text. Most of this probing takes the form of perceptual tasks: can LMs quickly learn new concepts related to navigation, or colors, if given some related examples from the space? In both navigation and color tasks, this seems to be the case, and a consistent trend emerges where larger LMs perform better on this task. The experiments seem well-controlled, and the authors present an insightful discussion of model performance.
Review
Overall I really enjoyed this paper. The scope was fairly narrow and the experiments were not diverse enough to provide a thorough understanding of how LMs are forming this space, what possible confounds might be at play, or what other domains might show similar effects. But apart from these concerns, the paper is well-written and presented in an organized manner, and essentially all the discussion points contributed something to my understanding of the work. The topic is also very timely and important, and the implications of this work could help unify work in NLP, vision, and other grounded/situated learning.
An obvious main strength of the paper is that empirically the results are clear and consistent across all experiments. The similarities between induced and gold spaces, and between smaller models and larger models, are very significant. The experiments themselves are quite simple and straightforward, but my biggest worry, that confounds from the LM training data may be leaking in and influencing these conceptual spaces, was strongly considered by the authors. Their solution of using rotations to test against isomorphic spaces makes intuitive sense. One thing I am considering is whether it's possible that huge parameter language models may be inducing mechanisms for performing spatial rotations on subspaces, though at some point it might become difficult to separate a model's ability to memorize the data and perform on-the-fly rotations from something that might be called a non-trivial understanding of these concepts/spaces. That is mostly an aside as I can't present it as anything more than speculation.
One of the few questions about the color experiment results was to what extent the model was backing off to predict less specific color names, since it is something that is not well-captured in top-1 accuracy (though maybe some combination of this and distance evaluation starts to get at it).
And why were 67 other colors inserted in the color prompts? It seemed arbitrary.
How are grids incorporated into the model? In Appendix B2, it appears to be just as a string following "World", but is it a "linearized" string of one row after another?
Typos: Zellers reference "within the prompt ." |
ICLR | Title
TRAKR – A reservoir-based tool for fast and accurate classification of neural time-series patterns
Abstract
Neuroscience has seen a dramatic increase in the types of recording modalities and complexity of neural time-series data collected from them. The brain is a highly recurrent system producing rich, complex dynamics that result in different behaviors. Correctly distinguishing such nonlinear neural time series in real time, especially those with non-obvious links to behavior, could be useful for a wide variety of applications. These include detecting anomalous clinical events such as seizures in epilepsy, and identifying optimal control spaces for brain machine interfaces. It remains challenging to correctly distinguish nonlinear time-series patterns because of the high intrinsic dimensionality of such data, making accurate inference of state changes (for intervention or control) difficult. Simple distance metrics, which can be computed quickly, do not yield accurate classifications. On the other end of the spectrum of classification methods, ensembles of classifiers or deep supervised tools offer higher accuracy but are slow, data-intensive, and computationally expensive to train and deploy. We introduce a reservoir-based tool, state tracker (TRAKR), which offers the high accuracy of ensembles or deep supervised methods while preserving the computational benefits of simple distance metrics. After one-shot training, TRAKR can accurately, and in real time, detect deviations in test patterns. By forcing the weighted dynamics of the reservoir to fit a desired pattern directly, we avoid many rounds of expensive optimization. Then, keeping the output weights frozen, we use the error signal generated by the reservoir in response to a particular test pattern as a classification boundary. We show that using this approach, TRAKR accurately detects changes in synthetic time series. We then compare our tool to several others, showing that it achieves classification performance on par with supervised deep networks on a benchmark dataset (sequential MNIST), while outperforming all other approaches. When the samples are corrupted by noise, our approach maintains relatively high performance, while supervised deep networks show a sharp decline in performance. We also apply TRAKR to electrocorticography (ECoG) data from the macaque orbitofrontal cortex (OFC), a higher-order brain region involved in encoding the value of expected outcomes. We show that TRAKR can classify different behaviorally relevant epochs in the neural time series with high accuracy. Altogether, we show that TRAKR is a high-performing tool for distinguishing patterns in complex nonlinear time-series data, such as neural recordings. With its high performance, robustness to noise, low train- and inference-time, and ease-of-use, it offers a viable alternative to more complex state-of-the-art approaches, particularly for real-time applications.
1 INTRODUCTION
The size and complexity of neural data collected have increased greatly (Marblestone et al. (2013)). Neural data display rich dynamics in the firing patterns of neurons across time, resulting from the recurrently connected circuitry in the brain. As our insight into these dynamics increases through new recording modalities, so does the desire to understand how dynamical patterns change across time and, ultimately, give rise to different behaviors.
A lot of work in computational neuroscience over the past decade has focused on modeling the collective dynamics of a population of neurons in order to gain insight into how firing patterns are related to task variables (Márton et al. (2020); Richards et al. (2019); Yang et al. (2018); Remington et al. (2018); Kell et al. (2018); Zeng et al. (2018); Pandarinath et al. (2018); Durstewitz (2017); Chaisangmongkon et al. (2017); Rajan et al. (2016); Sussillo et al. (2015); Mante et al. (2013); Sussillo & Barak (2013); Barak et al. (2013); Sussillo & Abbott (2009)). These approaches, however, rely on fitting the whole dynamical system through many rounds of optimization, either indirectly by modeling the task inputs and outputs (Márton et al. (2020); Kell et al. (2018); Chaisangmongkon et al. (2017); Sussillo et al. (2015); Mante et al. (2013); Sussillo & Barak (2013)), or directly by fitting the weights of a neural network to recorded firing patterns (Pandarinath et al. (2018); Durstewitz (2017)). Thus, these approaches can be too time- and computation-intensive for certain applications, e.g. in clinical settings where decisions need to be taken based on recordings in real-time. In these settings, neural time-series patterns need to be accurately distinguished in order to, say, detect the onset of seizures, or distinguish different mental states.
Previous approaches to classifying time series lie on a spectrum from simple distance metrics (e.g., Euclidean) to more computationally intensive approaches such as dynamic time warping (Xing et al. (2010)), ensembles of classifiers (Bagnall et al.) or deep supervised learning (Jeong (2020); Fawaz et al. (2019)). Computing simple distance metrics is fast and straightforward, but does not always yield high accuracy results because the patterns may not be perfectly aligned in time. On the other end of the spectrum, ensembles of classifiers and deep learning-based approaches (Bagnall et al.; Jeong (2020); Fawaz et al. (2019)) have been developed that can offer high accuracy results, but at high computational cost. Dynamic time warping (DTW) has been consistently found to offer good results in practice relative to computational cost (Fawaz et al. (2019); Bagnall et al. (2016); Serrà & Arcos (2014)) and is currently routinely used to measure the similarity of time-series patterns.
Previous work in reservoir computing has shown that networks of neurons can be used as reservoirs of useful dynamics, so-called echo-state networks (ESNs), without the need to train recurrent weights through successive rounds of expensive optimization (Vlachas et al. (2020); Pathak et al. (2018); Vincent-Lamarre et al. (2016); Buonomano & Maass (2009); Jaeger & Haas (2004); Jaeger (a;b); Maass et al. (2002)). This suggests reservoir networks could offer a computationally cheaper alternative to deep supervised approaches in the classification of neural time-series data. However, the training of reservoir networks has been found to be more unstable compared to methods that also adjust the recurrent connections (e.g., via backpropagation through time, BPTT) in the case of reduced-order data (Vlachas et al. (2020)). Even though ESNs have been shown to yield good results when fine-tuned (Tanisaro & Heidemann (2016); Aswolinskiy et al. (2016)), convergence represents a significant problem when training ESNs end-to-end to perform classification on complex time-series datasets, and is a hurdle to their wider adoption.
Here, we propose fitting the reservoir output weights to a single time series - thus avoiding many rounds of training that increase training time and could potentially cause instabilities. We use the error generated through the output unit in response to a particular test pattern as input to a classifier. We show that using this approach, we obtain high accuracy results on a benchmark dataset - sequential MNIST - outperforming other approaches such as simple distance metrics (e.g., based on Euclidean distance or Mutual Information) and a previous approach based on echo-state networks, while performing on par with deep supervised networks. We also show that our approach is more robust to noise than other approaches, in particular deep supervised networks. At the same time, TRAKR achieves high performance while keeping training and inference time low.
We also apply our tool, TRAKR, to neural data from the macaque orbitofrontal cortex (OFC), a higher-order brain region involved in encoding expectations, and inducing changes in behavior during unexpected outcomes (Rich & Wallis (2016); Rudebeck & Rich (2018); Jones et al. (2012); Wallis (2012); Schoenbaum (2009); Burke et al. (2009); Wallis & Miller (2003); Schoenbaum et al. (1998)). This data consists of 128-channel electrocorticography (micro-ECOG) recordings obtained from the macaque OFC, including anterior and posterior areas 11 and 13, during a reward expectation task. The task was designed to understand if and how expectations encoded in OFC are updated by unexpected outcomes. TRAKR is able to distinguish three different behaviorally relevant epochs based on the neural time-series with high accuracy, revealing that OFC units differentiate between different task episodes. This shows it can be used as a reliable tool to gain further insight into the information encoded in neural circuits.
Taken together, we make the following contributions:
- We show that by fitting single time series and working with the error signal, reservoirs can perform on par with supervised deep networks in time-series classification
- We show that this approach, while easy to use, outperforms other commonly used approaches, such as Dynamic Time Warping (DTW), and other distance measures embraced for their simplicity
- We show that our approach is more robust to noise than other approaches
- Our approach is computationally less expensive than other approaches, achieving high performance at low train- and inference-time
- It is able to detect differences between neural patterns with high accuracy
Altogether, this shows that TRAKR is a viable alternative to state-of-the-art approaches for time series classification.
2 METHODS
2.1 MODEL DETAILS
TRAKR (Figure 1A) is a reservoir-based recurrent neural network (RNN) with N recurrently connected neurons. Recurrent weights, J , are initialized randomly and remain aplastic over time (Buonomano & Maass (2009); Jaeger (b); Maass et al. (2002)). The readout unit, zout, is connected to the reservoir through a set of output weights, wout, which are plastic and are adjusted during training. The reservoir also receives an input signal, I(t), through an aplastic set of weights win.
The network is governed by the following equations:
$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + g\sum_{j=1}^{N} J_{ij}\,\phi_j(t) + w_{i,\mathrm{in}}\, I(t) \qquad (1)$$

$$z_{\mathrm{out}}(t) = \sum_i w_{\mathrm{out},i}(t)\, x_i(t) \qquad (2)$$
Here, xi(t) is the activity of a single neuron in the reservoir, τ is the integration time constant, g is the gain setting the scale for the recurrent weights, and J is the recurrent weight matrix of the
reservoir. The term $g\sum_{j=1}^{N} J_{ij}\phi_j(t)$ denotes the strength of input to a particular neuron from other neurons in the reservoir, and I(t) is the input signal (Equation 1). zout(t) denotes the activity of the readout unit together with the output weights, wout (Equation 2). In our notation, wij denotes the weight from neuron j to neuron i, and so wout,i means the weight from the ith unit in the reservoir to the readout unit. ϕ is the activation function given by:
$$\phi_i(t) = \tanh(x_i(t)) \qquad (3)$$
We use recursive least squares (RLS) to adjust the output weights, wout, during training (Haykin, 1996). The algorithm and the update rules are given by:
$$\Delta w_{\mathrm{out},i}(t) = -\eta\,\big(z_{\mathrm{out}}(t) - f(t)\big) \sum_j P_{ij}(t)\,\phi_j(t) \qquad (4)$$

$$w_{\mathrm{out},i}(t) = w_{\mathrm{out},i}(t-1) + \Delta w_{\mathrm{out},i}(t) \qquad (5)$$
Here, η is the learning rate, f(t) is the target function, and the term $\sum_j P_{ij}(t)\phi_j(t)$ acts as a regularizer where P is the inverse cross-correlation matrix of the network firing rates. For details on setting hyperparameters, see Appendix A.
2.2 ADJUSTING RESERVOIR DYNAMICS
During training, the output weights, wout, are optimized using RLS based on the instantaneous difference between the output, zout(t), and the target function, f(t). This optimization is performed in one shot (without the need for multiple optimization rounds). Here, we use the reservoir to autoencode the input signal, thus f(t) = I(t). The instantaneous difference gives rise to an error term, E(t), calculated as:
$$E(t) = \sum_{i=1}^{N} \Delta w_{\mathrm{out},i}(t) \qquad (6)$$
2.3 OBTAINING THE ERROR SIGNAL
After training, the output weights, wout, are frozen. The test pattern is fed to the network via the input, I(t), and the network is iterated to obtain the error, E(t), over the duration of the test signal. The error, E(t), is computed as the difference between the test signal and the network output (Equation 6). The error may vary depending on the similarity of a given test signal to the learned time series. The error is used as input to a classifier.
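To make the train/test procedure concrete, the following is a minimal sketch of Equations 1-6 in Python. The Euler discretization step, the hyperparameter values, and the way E(t) is read out at test time (as the hypothetical, unapplied weight update) are our assumptions, not the authors' exact implementation.

```python
import numpy as np

def trakr_fit_and_score(train_sig, test_sig, N=300, g=1.5, tau=10.0,
                        dt=1.0, eta=1.0, alpha=1.0, seed=0):
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)  # fixed recurrent weights (gain g folded in)
    w_in = rng.standard_normal(N)                     # fixed input weights
    w_out = np.zeros(N)                               # plastic output weights
    P = np.eye(N) / alpha                             # running inverse correlation of rates

    def run(signal, learn):
        nonlocal w_out, P
        x = np.zeros(N)
        E = []
        for u in signal:                              # autoencoding: target f(t) = input I(t)
            phi = np.tanh(x)                          # Eq. 3
            x = x + (dt / tau) * (-x + J @ phi + w_in * u)  # Euler step of Eq. 1
            z = w_out @ x                             # Eq. 2
            e = z - u
            Pphi = P @ phi
            if learn:
                P = P - np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)  # RLS update of P
            dw = -eta * e * (P @ phi)                 # Eq. 4
            if learn:
                w_out = w_out + dw                    # Eq. 5
            E.append(dw.sum())                        # Eq. 6: E(t) = sum_i dw_i(t)
        return np.array(E)

    run(np.asarray(train_sig, float), learn=True)         # one-shot fit
    return run(np.asarray(test_sig, float), learn=False)  # w_out frozen; E(t) fed to a classifier
```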
2.4 CLASSIFICATION OF THE ERROR SIGNAL
The error, E(t), is used as input to a support vector machine (SVM) classifier with a Gaussian radial basis function (rbf) kernel. The classifier is trained using 10-fold stratified cross-validation. The same classifier and training procedure were used when comparing the different approaches. Naive Bayes as well as the neural network-based approaches (multilayer perceptron (MLP) and time warping invariant echo state network (twiESN)) are directly used as classifiers, again with 10-fold stratified cross-validation. Accuracy and area under the curve (AUC) are computed as measures of classification performance. A MacBook Pro CPU was used for all comparisons.
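A sketch of the classification stage, assuming the error traces have been stacked into a matrix with one row per test pattern (shapes and the random data below are placeholders):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

errors = np.random.randn(1000, 784)        # placeholder: one E(t) trace per sample
labels = np.repeat(np.arange(10), 100)     # ten digit classes, 100 samples each

clf = SVC(kernel="rbf", probability=True)  # Gaussian rbf kernel; probabilities enable AUC
cv = StratifiedKFold(n_splits=10)          # 10-fold stratified cross-validation
acc = cross_val_score(clf, errors, labels, cv=cv, scoring="accuracy")
auc = cross_val_score(clf, errors, labels, cv=cv, scoring="roc_auc_ovr")
print(acc.mean(), auc.mean())
```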
2.5 NEURAL RECORDINGS
2.5.1 TASK DESIGN
Neural recordings were obtained from the macaque OFC using a custom designed 128-channel micro-ECOG array (NeuroNexus), with coverage including anterior and posterior subregions (areas 11/13). During preliminary training, the monkey learned to associate unique stimuli (natural images) with rewards of different values. Rewards were small volumes of sucrose or quinine solutions, and values were manipulated by varying their respective concentrations.
The behavioral task design is shown in Figure 4A. During the task, the monkey initiated a trial by contacting a touch-sensitive bar and holding gaze on a central location. On each trial, either one or two images were presented, and the monkey selected one by shifting gaze to it and releasing the bar. At this point, a small amount of fluid was delivered, and then a neutral cue appeared (identical across all trials) indicating the start of a 5s response period during which the macaque could touch the bar to briefly activate the fluid pump. By generating repeated responses, it could collect as much of the available reward as desired. There were two types of these trials. Match (mismatch) trials were those where the initial image accurately (inaccurately) signaled the type of reward delivered on that trial. Behavioral performance and neural time series were recorded in 11 task sessions across 35 days. For further details on trials and data preprocessing, see Appendix B.
3 RESULTS
3.1 DETECTING PATTERN CHANGES IN SYNTHETIC TIME SERIES
First, we trained TRAKR on idealized, synthetic signals using sine functions of two different frequencies (Figure 2). Reservoir output weights were fitted to the signal using recursive least squares (RLS; see subsection 2.1). In Figure 2A, the network was trained on the first half of the signal (blue) while the output weights, wout, remained frozen during the second half. Then, with wout frozen, a test signal (orange) was fed to the reservoir. The network output, zout(t), in red and the error signal, E(t), in green are depicted in Figure 2. The network correctly detects the deviation of the test pattern (orange, 1st half of the signal) from the learned pattern (blue, 1st half of the signal), which results in an increase in the error signal (green, 1st half of the signal, Figure 2A). The second half of the test signal (orange) aligns with the trained signal (blue, 1st half) and thus yields no error (green, 2nd half). In Figure 2B, the order of the training procedure was reversed in that output weights remained frozen for the first half of the signal (blue) and were plastic during the second half. As expected,
the increase in the error signal (green) now occurs during the second half of the test signal (orange). Thus, TRAKR correctly detects, via the error signal E(t), when a new frequency pattern occurs in the test signal that deviates from the trained pattern.
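For illustration, the following sketch reconstructs the two-frequency protocol; the specific frequencies and signal lengths are invented for the example, and `fit_trakr`/`error_trace` are assumed helpers for the one-shot RLS fit (Eqs. 4-5) and the frozen-weight pass.

```python
import numpy as np

t = np.arange(0.0, 2.0, 1e-3)              # 2 s at 1 kHz (illustrative)
half = len(t) // 2
f_trained, f_new = 5.0, 10.0               # frequencies invented for the example
train_signal = np.sin(2 * np.pi * f_trained * t)
test_signal = np.concatenate([
    np.sin(2 * np.pi * f_new * t[:half]),      # deviating half -> E(t) rises
    np.sin(2 * np.pi * f_trained * t[:half]),  # matching half  -> E(t) ~ 0
])

# w_out = fit_trakr(train_signal)                       # one-shot RLS fit
# E = error_trace(test_signal, J, w_in, w_out)          # frozen-weight pass
```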
3.2 CLASSIFYING DIGITS - SEQUENTIAL MNIST
We then applied TRAKR to the problem of classifying the ten digits from sequential MNIST, a benchmark dataset for time-series problems (Le et al. (2015); Kerg et al. (2019)).
For training, we used the entire dataset of 1000 sequential MNIST digits including 100 samples for each digit (0-9). We fed each sequential digit (28 x 28 pixel image flattened into a vector of length 784) as a one-shot training signal to TRAKR. Reservoir output weights were again fitted to a single digit using recursive least squares (RLS; see subsection 2.1). After fitting TRAKR to one time series corresponding to one of the samples of a particular digit, we froze the output weights and fed all the other digits as test samples to TRAKR. We obtained an error signal, E(t), from every test sample, with the magnitude of the error varying depending on the similarity with the learned digit. The error signal was then fed into a classifier which was trained to differentiate the digits based on the error terms (see subsection 2.4 for more details). We repeated this procedure for all digits and samples in the dataset to obtain the averaged classification performance for TRAKR (Figure 3A).
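A sketch of this one-shot protocol, under our assumptions (pixel handling and helper names are ours), might look as follows:

```python
import numpy as np

def digit_to_sequence(img):
    """Flatten a 28 x 28 digit into a length-784 time series."""
    return img.reshape(784).astype(float)

# reference = digit_to_sequence(sample_digit)        # one-shot training signal
# w_out = fit_trakr(reference)                       # RLS fit, weights then frozen
# features = np.stack([error_trace(digit_to_sequence(d), J, w_in, w_out)
#                      for d in test_digits])        # error traces fed to the SVM
```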
We found that TRAKR achieves a performance of AUC = 99% on this dataset (Figure 3A). We compared our approach with other commonly used methods for the classification of time series: supervised deep neural networks (MLP; as in Fawaz et al. (2019)), a recent echo-state network based approach (twiESN; as in Fawaz et al. (2019)), distance measures such as Dynamic Time Warping (DTW; Rakthanmanon et al. (2012)), Euclidean distance (Euc), Mutual information (MI), and a naive Bayes classifier (NB). With the exception of MLPs, which showed performance on par with TRAKR, we found that all other approaches performed significantly worse than TRAKR (p < 0.001; see subsection 2.4 for further details).
We also tested the performance of TRAKR under different noise levels added to training digits (Figure 3B, Appendix C). We again compared TRAKR against all the other approaches and found TRAKR to perform the best, particularly at higher noise levels: performance decays gradually as the noise is increased and is at AUC = 75% even at higher noise levels (σ = 1). In particular, we found TRAKR performs better than MLPs at higher noise levels.
We also measured training and inference time, comparing TRAKR to the other approaches introduced above (Table 1). We found that the training time for TRAKR is at the lower end of the spectrum: slower than computing a distance between two traces using DTW, MI or Euc, but significantly faster than training the MLP or twiESN. While it does require upfront fitting, our approach has the advantage that it does not require multiple rounds of optimization (unlike the MLP or twiESN) because the signal is fit in one shot (see subsection 2.2 for details). This yields relatively fast training times.
After fitting, TRAKR can detect deviations from the learned signal in real time. The inference time of our approach again lies between the computationally more expensive approaches (MLP, twiESN) and the less intensive ones (DTW, MI, Euc). We also measured the training and inference time of the classifier we use (SVM) and found that it does not significantly impact either; simpler classifiers such as kNN (k-nearest neighbors) may also be used instead for further gains in speed. We compared all of these approaches under the same conditions (see subsection 2.4). See Appendix D for details on the calculation of training and inference time.
3.3 PERFORMANCE ON NEURAL TIME SERIES RECORDED FROM THE MACAQUE OFC
The OFC is involved in encoding and updating affective expectations. We used a behavioral task designed to study the neural mechanisms of how such expectations are encoded in the OFC of primates and how they may guide behavior under different conditions. The goal here was to determine whether TRAKR can be used to reliably distinguish complex neural time series patterns (Figure 4A; see also subsubsection 2.5.1 for more details).
A recording from a single electrode is shown together with three different behaviorally relevant epochs (rest, choice and reward periods; Figure 4B). We compared the performance of TRAKR with all other previously introduced methods on the task of trying to distinguish the neural time series patterns in these three epochs. It is important to note that we did not know a priori whether there is enough information encoded in the OFC recordings to allow for differentiating the three epochs. Thus, a high performing classifier can offer clues here as to what type of information is encoded in the brain.
We trained TRAKR on the neural time series corresponding to the rest period from a particular trial, and used the other complete trials as test signals to obtain the error, E(t). The error signal was used as input to a classifier. We repeated this procedure for all trials in the dataset to obtain the averaged classification performance. We also calculated the Fast Fourier transform (FFT) of the signals and obtained the magnitude (power) in the α (0−12Hz), β (13−35Hz), and γ (36−80Hz) bands within the 3 epochs. We compared TRAKR against this FFT-based classification as well.
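The FFT baseline can be sketched as below; this is our reconstruction, assuming a 1 kHz sampling rate (Appendix B) and mean power within each band.

```python
import numpy as np

def band_powers(signal, fs=1000.0):
    """Mean FFT power in the alpha, beta, and gamma bands of one epoch."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    bands = {"alpha": (0.0, 12.0), "beta": (13.0, 35.0), "gamma": (36.0, 80.0)}
    return {name: power[(freqs >= lo) & (freqs <= hi)].mean()
            for name, (lo, hi) in bands.items()}
```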
Again, with the exception of the MLP, which performed on par with TRAKR, we found that TRAKR outperformed all the other methods (AUC = 94%; p < 0.001; Figure 4C). TRAKR was able to distinguish the three epochs with high accuracy based on the neural signal, showing that there is enough information in the OFC to differentiate these patterns. It is important to note that, based on the performance of lower-performing approaches such as DTW or Euc, one might have (wrongly) concluded that the OFC lacks enough information to solve this task. Overall, with its high performance, TRAKR can be used as a reliable tool to differentiate time series patterns and generate hypotheses on the type of information represented in neural circuits.
We also investigated whether any of the methods could distinguish between match and mismatch trials based on the OFC signals (Figure 4D). For this purpose, we trained TRAKR on the neural time series corresponding to choice period from a particular trial, and used the other complete trials as test signals to obtain the error, E(t). We found that all methods performed at chance-level, indicating there is not enough information in the OFC recordings to differentiate the two. We also observed that the classification performance for the three epochs degrades over days (Figure 4E; blue & red solid lines), while that for match/mismatch trials consistently stays around chance-level (Figure 4E; blue & red dotted lines).
Lastly, we visualized different electrodes in the space spanned by the first three principal components of the reservoir's activations (Appendix E). We fitted the reservoir to the signal obtained from a particular electrode, froze the output weights, and projected other electrodes onto the first three principal components of reservoir activity. We found that electrodes trace out different paths in reservoir space. Thus, as a proof of concept, projections onto reservoir space can be used to visually inspect the similarity of recordings from different electrodes, and to identify differences that may be indicative of functionally meaningful sub-groupings representing functionally coherent modules in the brain.
4 DISCUSSION
We have shown that TRAKR can distinguish time series patterns with high accuracy. TRAKR outperforms other approaches in classifying time-series data on a benchmark dataset, sequential MNIST, and in differentiating neural time series signals obtained from recordings in the macaque OFC. It performs on par with supervised neural networks (MLPs), while outperforming other types of echo state networks (twiESN) and commonly used distance measures such as Dynamic Time Warping (DTW). Meanwhile, we found that our approach is more robust to noise than other approaches, in particular supervised neural networks, and offers good performance in terms of training and inference time.
We found that TRAKR offers high accuracy at relatively lower training and inference times than other approaches with comparable accuracy, such as supervised multi-layer neural networks. While TRAKR relies on a classifier on top of the error traces to perform the classification task, other classifiers such as kNN may be used instead of the SVM employed here to maximize training and inference speed. All approaches were compared in the same environment, but all neural network-based approaches may equally benefit from further gains in training and inference speed by optimizing the code for deployment on GPUs. For this purpose, TRAKR can readily be implemented in JAX, a high-performance computing framework (Bradbury et al. (2018)).
Meanwhile, other approaches based on echo-state networks, such as twiESN (Fawaz et al. (2019)), performed worse than TRAKR and showed higher training and inference times. This suggests that our contribution is key to making reservoir networks a viable alternative to state-of-the-art approaches in time-series classification.
While TRAKR and most other methods could distinguish three behaviorally relevant epochs based on the OFC signal, none of the methods were able to accurately distinguish match and mismatch trials. This indicates that there is enough information in the OFC to distinguish the three task periods, but not enough to differentiate match and mismatch trials. It is possible that receiving a better or worse reward than expected affected the neural signal in distinct/opposite ways, such that the effect was cancelled out on average. It is also possible that the difference in neural time-series patterns was only discernible if the reward was maximally different (much better or worse than expected). In the current task design, there were 4 different levels of reward (flavors) that the macaque associated with different pictures (subsubsection 2.5.1). The number of trials in which the obtained reward was maximally different from the expected reward was low and possibly not sufficient for accurate classification. Another possibility, supported by our results and corroborated by several studies (Stalnaker et al. (2018); McDannald et al. (2014); Takahashi et al. (2013); Kennerley et al. (2011)), is that OFC neural activity signals reward values but not reward prediction errors, which instead are mediated through the ventral tegmental area (VTA) in the midbrain.
We found that the classification performance decreased over recording sessions. This could mean that the difference between task epochs being classified decreased because of increased familiarity with the task. That is less likely, however, because the subject was well-trained prior to recordings. Instead, since the signal was recorded over a period of 35 days, the decrease in the classification performance could be a result of degrading signal quality, perhaps due to electrode impedance issues (Kozai et al. (2015a;b); Holson et al. (1998); Robinson & Camp (1991)).
5 CONCLUSION
There is a need for and strong interest in tools for the analysis of time-series data (Bhatnagar et al. (2021)). We show that TRAKR is a fast, accurate and robust tool for the classification of time-series patterns. Through its ease of use and low training and inference time, it is particularly suited for real-time applications where accurate decisions need to be made quickly and signal degradation or other artifacts necessitate frequent re-calibration, such as in clinical settings. TRAKR can also be used to distinguish neural time-series patterns in the brain, shedding light on the information encoded in neural circuits and thus generating hypotheses for new experiments.
A TRAKR HYPERPARAMETERS
The recurrent weights Jij are weights from unit j to i. The recurrent weights are initially drawn independently and randomly from a Gaussian distribution with a mean of 0 and a variance of $g^2/N$. The input weights win are also drawn independently and randomly from the standard normal distribution.
An integration time constant τ = 1ms is used. We use gain g = 1.2 for all the networks.
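A minimal sketch of this initialization (with N = 30 and g = 1.2 as stated in this appendix) might read:

```python
import numpy as np

def init_reservoir(N=30, g=1.2, seed=0):
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # Gaussian, variance g^2 / N
    w_in = rng.standard_normal(N)                     # standard normal input weights
    return J, w_in
```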
The matrix P is not calculated explicitly but is updated recursively:

$$P(t) = P(t-1) - \frac{P(t-1)\,\phi(t)\,\phi'(t)\,P(t-1)}{1 + \phi'(t)\,P(t-1)\,\phi(t)}$$
The learning rate η is given by

$$\eta = \frac{1}{1 + \phi'(t)\,P(t)\,\phi(t)}.$$
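Putting the two updates together, one RLS step could be sketched as follows; note that, with the recursion above, $\eta\,P(t-1)\,\phi(t) = P(t)\,\phi(t)$, so this matches the standard RLS/FORCE update. The function name and the use of ϕ as the regressor vector are our assumptions.

```python
import numpy as np

def rls_step(w_out, P, phi, z_out, f_t):
    """One recursive-least-squares update of the readout weights."""
    P_phi = P @ phi
    denom = 1.0 + phi @ P_phi            # 1 + phi'(t) P(t-1) phi(t)
    eta = 1.0 / denom                    # learning rate eta
    P -= np.outer(P_phi, P_phi) / denom  # recursion for P(t)
    dw = -eta * (z_out - f_t) * P_phi    # Eq. 4 (equals -(z - f) P(t) phi)
    w_out += dw
    return w_out, P, dw
```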
The number of units used in the reservoir is generally N = 30.
B MACAQUE TRIAL DETAILS & DATA PRE-PROCESSING
Each trial was approximately 6.5s long, including different behaviorally relevant epochs and cues. The macaque performed approximately 550 trials within each task session (mean ± sd: 562 ± 72). Of note, 80% of the trials were match trials within each task session.
ECoG data were acquired by a neural processing system (Ripple) at 30kHz and then resampled at 1kHz. The 128-channel data were first z-score normalized. Second-order Butterworth band-stop IIR filters were used to remove 60Hz line noise and its harmonics from the signal. We also used second-order Savitzky-Golay filters with a window length of 99 to smooth the data and remove very high frequency juice-pump artifacts (> 150Hz). For most of the analysis here, we used the average of the 128-channel time series as input to TRAKR.
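A sketch of this preprocessing chain with scipy.signal is given below; the ±2 Hz stop-band width around each harmonic is our assumption, as the text does not specify the band edges.

```python
import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

def preprocess(x, fs=1000.0):
    """z-score, notch out 60 Hz + harmonics, then Savitzky-Golay smooth."""
    x = (x - x.mean()) / x.std()                     # z-score normalization
    for f0 in (60.0, 120.0, 180.0):                  # line noise and harmonics
        b, a = butter(2, [f0 - 2.0, f0 + 2.0], btype="bandstop", fs=fs)
        x = filtfilt(b, a, x)
    return savgol_filter(x, window_length=99, polyorder=2)  # smooth pump artifacts
```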
C DETAILS ON SEQUENTIAL MNIST CLASSIFICATION
For measuring noise robustness, we added random independent Gaussian noise to the training digits (µ = 0 and varying standard deviation σ). The actual noise that was added (noise levels as depicted in Figure 3B) can be calculated as σ × 255, with σ ∈ [0, 1], where 255 is the maximal pixel value in the sequential digits.
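In code, this noise protocol amounts to a one-liner (the seed handling is our addition):

```python
import numpy as np

def add_noise(sequence, sigma, seed=0):
    """Additive Gaussian noise scaled by the maximal pixel value (255)."""
    rng = np.random.default_rng(seed)
    return sequence + rng.normal(0.0, sigma * 255.0, size=sequence.shape)
```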
D CALCULATION OF TRAINING AND INFERENCE TIME
The calculation of training time for TRAKR includes the time taken to train one sequence through TRAKR plus the time for 10-fold cross-validation using the SVM. For all the other methods, it is likewise the time taken to train using 10-fold cross-validation. For the MLP, training is done using Keras, which is already optimized, whereas TRAKR leaves further room for optimization, e.g. by implementing it in JAX.
The calculation of inference time for TRAKR includes the time taken to feed one test sequence into TRAKR and to make predictions on a test set using the SVM. For all the other methods, the time includes making predictions on the same test set.
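Such wall-clock timings can be taken as sketched below; the step names in the commented lines are placeholders for the procedures described above, not actual API calls.

```python
import time

def timed(fn, *args, **kwargs):
    """Return fn's output together with its wall-clock runtime in seconds."""
    t0 = time.perf_counter()
    out = fn(*args, **kwargs)
    return out, time.perf_counter() - t0

# _, t_train = timed(fit_and_cross_validate, E_train, y_train)  # fit + 10-fold CV
# _, t_infer = timed(predict_test_set, clf, E_test)             # test-set predictions
```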
E VISUALIZATION OF RESERVOIR ACTIVATIONS
Single electrode recordings were projected into the space spanned by the first three principal components of reservoir activations. The four electrodes trace out different trajectories in reservoir space, suggesting they capture potentially different neural dynamics, as shown in the figure below.
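A sketch of this visualization pipeline is shown below; the reservoir activations are assumed to be provided as (T, N) matrices of states x(t), one per electrode, with PCA fitted on the reference electrode.

```python
from sklearn.decomposition import PCA

def reservoir_trajectories(states_by_electrode, reference="Electrode 1"):
    """states_by_electrode: dict mapping electrode name -> (T, N) activations."""
    pca = PCA(n_components=3).fit(states_by_electrode[reference])
    return {name: pca.transform(states)              # 3-D trajectory per electrode
            for name, states in states_by_electrode.items()}
```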
[Appendix E figure: trajectories of Electrodes 1, 13, 76, and 127 projected onto the first three principal components (PC1-PC3) of reservoir activations.]

1. What is the focus of the paper regarding time segment classification?
2. What are the strengths of the proposed approach, particularly in terms of training time advantage?
3. What are the weaknesses of the paper, especially regarding the technical contribution and experimental setup?
4. Do you have any concerns regarding the choice of reference time segments or sequences?
5. How does the reviewer assess the performance of TRAKR compared to conventional methods and other deep learning models?
Summary Of The Paper
This paper proposes using reservoir computing for classifying time segments. The key advantage is reduced training time. The approach, TRAKR, is evaluated on simulated time series, sequential MNIST, and ECoG, showing increased accuracy over conventional methods.
Review
Strengths
The paper is clearly written and easy to follow.
The proposed approach is simple, easy to implement, and already provides a gain over conventional methods.
Weaknesses
The technical contribution is unclear, since reservoir computing is not new. It would be interesting to see whether learning an embedding space with simpler methods from a given set of reference time segments, and using that embedding to reconstruct other segments and generate error signals, could achieve the same gain as reservoir computing. This experiment would help highlight the need for reservoir computing.
It is unclear what would be a good choice of “reference” time segments/sequences. For ECoG, rest period would make sense, but is there a way to decide in the general setting? Is the contribution of this work actually the use of a reference to generate error signals for classification, which tends to increase accuracy?
It is unclear what the simulated time-series experiment is demonstrating. If we overfit the training time segments, then by construction we would get large error signals on test segments, which would enable us to distinguish the two.
For the sequential MNIST experiment, it is not clear which digit would be a good reference and how results change with this choice. Nevertheless, getting an AUC of 99% with such a simple approach is quite impressive. If we use 10-fold cross-validation, does the high AUC stay? Sometimes leave-one-out provides error estimates similar to training errors.
Though TRAKR is fitted in one shot, does performance improve with a few more rounds? In general, when deploying a model in real time, we care more about the amount of time it takes to generate a prediction from the model, and not so much about its training time (unless the training time is impractical). Hence, comparisons with other DL models are needed to show that TRAKR's performance is at least on par.
Overall, it seems that TRAKR works well for relatively simple classification tasks but breaks down on harder ones such as match vs. mismatch.
1. What is the focus and contribution of the paper regarding neural time-series classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental design and comparisons with other methods?
3. Do you have any concerns or suggestions regarding the paper's methodology, such as the use of a frozen section in the training set or the choice of dataset for time-series classification?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions regarding the computation time required for training and prediction, or the interpretation of the results?
Summary Of The Paper
The paper proposes a new method to classify neural time series. The method is based on a recurrent reservoir, which consists of a black-box function (here a fixed random RNN) and a linear readout. The black-box function produces a state vector at each time point of an input time series. This state vector is then transformed with a linear readout trained to reconstruct the input signal. In other words, the reservoir is an auto-encoder with a fixed random RNN encoder and a linear decoder trained to minimize the reconstruction error. The method then uses the reconstruction error of the reservoir as a representation of the signal, to be used in an RBF-kernel SVM classifier to perform a given task.
On a simple toy example, the authors show that the reservoir can learn to reconstruct a sinusoid signal, and that it fails to reconstruct a sinusoid with a frequency different from the one seen during training. This failure can be considered useful for identifying regimes different from those seen during training. The method is then used on a small MNIST dataset, training the reservoir on one sample and then training an SVM classifier on all samples based on the reconstruction error of the reservoir. The method is then compared with four baseline methods. Finally, the method is applied to neural recordings of a macaque OFC, training the reservoir on one epoch and then training a classifier to predict behavioral states.
Review
Strengths: The method is reasonably presented. The method seems novel and valuable because it requires very little training to create a representation of the input that can be used for further tasks.
Weaknesses: The experiments proposed in the paper do not seem sufficient to assess the strengths of the proposed method. The proposed method is not compared against other reservoir-based methods, against random-projection methods, or against deep neural networks.
Sinusoid toy dataset:
The frozen section of the train set is confusing. Why include this section in the train set if the learned weights are frozen? Wouldn't it be clearer to remove the frozen section of the training set?
MNIST experiment:
MNIST is composed of images, which are not naturally represented as sequential time series. Why use MNIST as a dataset for time-series classification?
Why is the comparison limited to the four baseline methods? Why not include other models known to be good at classifying MNIST (such as CNNs)? Why not include other methods based on reservoir computing? Why not include other random-projection methods?
How were these baseline methods applied? Were the distance metrics used as input to the RBF kernel? How is Naive Bayes used in conjunction with the SVM?
Is the classification task a one-vs-one, one-vs-all, or multinomial task? If multinomial, how is the AUC computed?
How are the error bars computed in Figure 3A? How is the shaded area computed in Figure 3B? Assuming this is some sort of standard deviation around a mean value, why does the mean AUC for TRAKR fluctuate more than the standard deviation (e.g. the trough at noise level = 0.7)?
How much time does the method require to (a) train the reservoir, (b) train the SVM, and (c) predict a sample? Which computational time is reported in Table 1?
The OFC experiment raises the same questions as the MNIST experiment in terms of the choice of baseline methods. Additionally, it is not clear how the task of predicting behavioral states is useful for understanding how the brain works. The TRAKR method does not seem to give particularly interpretable representations. The TRAKR+PCA embedding presented in Figure 5 also lacks a comparison with other embeddings, such as PCA directly on the electrode signals.
Taken together, we make the following contributions:
- We show that, by fitting single time series and working with the error signal, reservoirs can perform on par with supervised deep networks in time-series classification.
- We show that this approach, while easy to use, outperforms other commonly used approaches, such as Dynamic Time Warping (DTW) and other distance measures embraced for their simplicity.
- We show that our approach is more robust to noise than other approaches.
- Our approach is computationally less expensive than other approaches, achieving high performance at low train- and inference-time.
- It is able to detect differences between neural patterns with high accuracy.
Altogether, this shows that TRAKR is a viable alternative to state-of-the-art approaches for time series classification.
2 METHODS
2.1 MODEL DETAILS
TRAKR (Figure 1A) is a reservoir-based recurrent neural network (RNN) with N recurrently connected neurons. Recurrent weights, J , are initialized randomly and remain aplastic over time (Buonomano & Maass (2009); Jaeger (b); Maass et al. (2002)). The readout unit, zout, is connected to the reservoir through a set of output weights, wout, which are plastic and are adjusted during training. The reservoir also receives an input signal, I(t), through an aplastic set of weights win.
The network is governed by the following equations:
$$\tau \frac{dx_i(t)}{dt} = -x_i(t) + g \sum_{j=1}^{N} J_{ij}\,\phi_j(t) + w_{i,\mathrm{in}}\, I(t) \quad (1)$$
$$z_{\mathrm{out}}(t) = \sum_{i} w_{\mathrm{out},i}(t)\, x_i(t) \quad (2)$$
Here, $x_i(t)$ is the activity of a single neuron in the reservoir, $\tau$ is the integration time constant, $g$ is the gain setting the scale for the recurrent weights, and $J$ is the recurrent weight matrix of the reservoir. The term $g\sum_{j=1}^{N} J_{ij}\phi_j(t)$ denotes the strength of input to a particular neuron from other neurons in the reservoir, and $I(t)$ is the input signal (Equation 1). $z_{\mathrm{out}}(t)$ denotes the activity of the readout unit together with the output weights, $w_{\mathrm{out}}$ (Equation 2). In our notation, $w_{ij}$ denotes the weight from neuron $j$ to $i$, so $w_{\mathrm{out},i}$ means the weight from the $i$-th unit in the reservoir to the readout unit. $\phi$ is the activation function, given by:
$$\phi_i(t) = \tanh(x_i(t)) \quad (3)$$
We use recursive least squares (RLS) to adjust the output weights, wout during training (Haykin, Simon S. (1996)). The algorithm and the update rules are given by:
$$\Delta w_{\mathrm{out},i}(t) = -\eta\,\big(z_{\mathrm{out}}(t) - f(t)\big) \sum_{j} P_{ij}(t)\,\phi_j(t) \quad (4)$$
$$w_{\mathrm{out},i}(t) = w_{\mathrm{out},i}(t-1) + \Delta w_{\mathrm{out},i}(t) \quad (5)$$
Here, $\eta$ is the learning rate, $f(t)$ is the target function, and the term $\sum_j P_{ij}(t)\phi_j(t)$ acts as a regularizer, where $P$ is the inverse cross-correlation matrix of the network firing rates. For details on setting hyperparameters, see Appendix A.
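For concreteness, the following NumPy sketch illustrates the one-shot fit (Equations 1-5 with the Appendix A update for P). It is an illustrative re-implementation, not the authors' code: we fold the gain g into the sampling of J to avoid applying it twice, use simple Euler integration, and absorb the learning rate into the updated P(t)phi(t) term, as in standard FORCE/RLS learning.

import numpy as np

def train_trakr(signal, N=30, g=1.2, tau=1.0, dt=1.0, seed=0):
    """One-shot fit of w_out so the readout autoencodes `signal` (f(t) = I(t))."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, g / np.sqrt(N), size=(N, N))  # recurrent weights, variance g^2/N
    w_in = rng.standard_normal(N)                     # aplastic input weights
    w_out = np.zeros(N)                               # plastic output weights
    P = np.eye(N)                                     # inverse correlation matrix of rates
    x = 0.1 * rng.standard_normal(N)                  # reservoir state
    E = []
    for f_t in signal:
        x = x + (dt / tau) * (-x + J @ np.tanh(x) + w_in * f_t)  # Eq. 1
        phi = np.tanh(x)                                         # Eq. 3
        z = w_out @ phi                                          # Eq. 2
        Pphi = P @ phi
        P = P - np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)        # Appendix A update
        dw = -(z - f_t) * (P @ phi)                              # Eqs. 4-5
        w_out = w_out + dw
        E.append(dw.sum())                                       # E(t), Eq. 6
    return dict(J=J, w_in=w_in, w_out=w_out), np.array(E)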
2.2 ADJUSTING RESERVOIR DYNAMICS
During training, the output weights, wout, are optimized using RLS based on the instantaneous difference between the output, zout(t), and the target function, f(t). This optimization is performed in one shot (without the need for multiple optimization rounds). Here, we use the reservoir to autoencode the input signal, thus f(t) = I(t). The instantaneous difference gives rise to an error term, E(t), calculated as:
$$E(t) = \sum_{i=1}^{N} \Delta w_{\mathrm{out},i}(t) \quad (6)$$
2.3 OBTAINING THE ERROR SIGNAL
After training, the output weights, wout are frozen. The test pattern is fed to the network via the input, I(t), and the network is iterated to obtain the error, E(t) over the duration of the test signal. The error, E(t) is computed as the difference between the test signal and the network output (Equation 6). The error may vary depending on the similarity of a given test signal to the learned time series. The error is used as input to a classifier.
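A matching sketch of the test phase, under the same assumptions as the training sketch above: the output weights stay frozen, and the error at each step is taken as the difference between the test signal and the network output.

import numpy as np

def trakr_test_error(model, test_signal, tau=1.0, dt=1.0, seed=0):
    """Iterate the frozen network on a test pattern and return E(t)."""
    J, w_in, w_out = model['J'], model['w_in'], model['w_out']
    rng = np.random.default_rng(seed)
    x = 0.1 * rng.standard_normal(J.shape[0])
    E = np.empty(len(test_signal))
    for t, f_t in enumerate(test_signal):
        x = x + (dt / tau) * (-x + J @ np.tanh(x) + w_in * f_t)
        E[t] = f_t - w_out @ np.tanh(x)   # deviation from the learned pattern
    return E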
2.4 CLASSIFICATION OF THE ERROR SIGNAL
The error, E(t), is used as input to a support vector machine (SVM) classifier with a Gaussian radial basis function (rbf) kernel. The classifier is trained using 10-fold stratified cross-validation. The same classifier and training procedure were used when comparing the different approaches. Naive Bayes as well as the neural network-based approaches (multilayer perceptron (MLP) and time warping invariant echo state network (ESN)) are directly used as classifiers, again with 10-fold stratified cross-validation. Accuracy and area under the curve (AUC) are computed as measures of classification performance. A MacBook Pro CPU was used for all comparisons.
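A sketch of this step using scikit-learn (the specific library is our assumption; the text only specifies an rbf-kernel SVM with 10-fold stratified cross-validation):

from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def classify_error_traces(E, y):
    # E: (n_samples, T) array of error traces; y: (n_samples,) class labels
    clf = SVC(kernel='rbf')  # Gaussian rbf kernel, as in the text
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    return cross_val_score(clf, E, y, cv=cv, scoring='accuracy')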
2.5 NEURAL RECORDINGS
2.5.1 TASK DESIGN
Neural recordings were obtained from the macaque OFC using a custom designed 128-channel micro-ECOG array (NeuroNexus), with coverage including anterior and posterior subregions (areas 11/13). During preliminary training, the monkey learned to associate unique stimuli (natural images) with rewards of different values. Rewards were small volumes of sucrose or quinine solutions, and values were manipulated by varying their respective concentrations.
The behavioral task design is shown in (Figure 4A). During the task, the monkey initiated a trial by contacting a touch-sensitive bar and holding gaze on a central location. On each trial, either one or two images were presented, and the monkey selected one by shifting gaze to it and releasing the bar. At this point, a small amount of fluid was delivered, and then a neutral cue appeared (identical across all trials) indicating the start of a 5s response period where the macaque could touch the bar to briefly activate the fluid pump. By generating repeated responses, it could collect as much of the available reward as desired. Trials were of two types: on match trials, the initial image accurately signaled the type of reward delivered on that trial; on mismatch trials, it did not. Behavioral performance and neural time series were recorded in 11 task sessions across 35 days. For further details on trials and data preprocessing, see Appendix B.
3 RESULTS
3.1 DETECTING PATTERN CHANGES IN SYNTHETIC TIME SERIES
[Figure 2: TRAKR on synthetic signals. Panels A and B; annotations mark where wout is plastic (Train) vs. frozen.]
First, we trained TRAKR on idealized, synthetic signals using sin-functions of two different frequencies (Figure 2). Reservoir output weights were fitted to the signal using recursive least squares (RLS; see subsection 2.1). In Figure 2A, the network was trained on the first half of the signal (blue) while the output weights, wout, remained frozen during the second half. Then with wout frozen, a test signal (orange) was fed to the reservoir. The network output, zout(t), in red and the error signal, E(t), in green are depicted in Figure 2. The network correctly detects the deviation of the test pattern (orange, 1st half of the signal) from the learned pattern (blue, 1st half of the signal), which results in an increase in the error signal (green, 1st half of the signal, Figure 2A). The second half of the test signal (orange) aligns with the trained signal (blue, 1st half) and thus yields no error (green, 2nd half). In Figure 2B, the order of the training procedure was reversed in that output weights remained frozen for the first half of the signal (blue) and were plastic during the second half. As expected,
the increase in the error signal (green) now occurs during the second half of the test signal (orange). Thus, TRAKR correctly detects, via the error signal E(t), when a new frequency pattern occurs in the test signal that deviates from the trained pattern.
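The experiment in Figure 2A can be reproduced in miniature with the sketches from Section 2 (the frequencies below are illustrative, not the exact values used in the paper):

import numpy as np

t = np.arange(1000)
trained = np.sin(2 * np.pi * 0.01 * t)                   # pattern the reservoir is fit to
test = np.concatenate([np.sin(2 * np.pi * 0.03 * t),     # deviating frequency
                       np.sin(2 * np.pi * 0.01 * t)])    # matches the trained pattern
model, _ = train_trakr(trained)
E = trakr_test_error(model, test)
# E is large over the first half of the test signal and near zero over the second.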
3.2 CLASSIFYING DIGITS - SEQUENTIAL MNIST
We then applied TRAKR to the problem of classifying the ten digits from sequential MNIST, a benchmark dataset for time-series problems (Le et al. (2015); Kerg et al. (2019)).
For training, we used the entire dataset of 1000 sequential MNIST digits including 100 samples for each digit (0-9). We fed each sequential digit (28 x 28 pixel image flattened into a vector of length 784) as a one-shot training signal to TRAKR. Reservoir output weights were again fitted to a single digit using recursive least squares (RLS; see subsection 2.1). After fitting TRAKR to one time series corresponding to one of the samples of a particular digit, we froze the output weights and fed all the other digits as test samples to TRAKR. We obtained an error signal, E(t), from every test sample, with the magnitude of the error varying depending on the similarity with the learned digit. The error signal was then fed into a classifier which was trained to differentiate the digits based on the error terms (see subsection 2.4 for more details). We repeated this procedure for all digits and samples in the dataset to obtain the averaged classification performance for TRAKR (Figure 3A).
We found that TRAKR achieves a performance of AUC = 99% on this dataset (Figure 3A). We compared our approach with other commonly used methods for the classification of time series. We compared our results against supervised deep neural networks (MLP; as in Fawaz et al. (2019)), a recent echo-state network based approach (twiESN; as in Fawaz et al. (2019)), distance measures such as Dynamic Time Warping (DTW; Rakthanmanon et al. (2012)), Euclidean distance (Euc), Mutual information (MI), and a naive Bayes classifier (NB). With the exception of MLPs, which showed performance on par with TRAKR, we found that all other approaches performed significantly worse than TRAKR (p < 0.001; see subsection 2.4 for further details).
We also tested the performance of TRAKR under different noise levels added to training digits (Figure 3B, Appendix C). We again compared TRAKR against all the other approaches and found TRAKR to perform the best, particularly at higher noise levels: performance decays gradually as the noise is increased and is at AUC = 75% even at higher noise levels (σ = 1). In particular, we found TRAKR performs better than MLPs at higher noise levels.
We also measured training and inference time, comparing TRAKR to the other approaches introduced above (Table 1). We found that training time for TRAKR is situated at the lower end of the spectrum, slower than it takes to obtain a measure of the distance between two traces using DTW, MI or Euc, but significantly faster than it takes to train the MLP or twiESN. While it does require upfront fitting, our approach has the advantage that it does not require multiple rounds of optimization (like MLP or twiESN) because the signal is fit in one shot (see subsection 2.2 for details). This yields relatively fast training time.
After fitting, TRAKR can detect deviations from the learned signal in real-time. The inference time of our approach again lies in between computationally relatively more expensive approaches such as MLP and twiESN and less intensive approaches such as DTW, MI and Euc. We also measured the train and inference time of the classifier we use (SVM) and found that it does not significantly impact training or inference time; other, relatively more simple classifiers such as kNN (k-Nearest Neighbors) may also be used instead, for further gains in speed. We compared all of these approaches under the same conditions (see subsection 2.4). Also, see Appendix D for details on calculation of training and inference time.
3.3 PERFORMANCE ON NEURAL TIME SERIES RECORDED FROM THE MACAQUE OFC
The OFC is involved in encoding and updating affective expectations. We used a behavioral task designed to study the neural mechanisms of how such expectations are encoded in the OFC of primates and how they may guide behavior under different conditions. The goal here was to determine whether TRAKR can be used to reliably distinguish complex neural time series patterns (Figure 4A; see also subsubsection 2.5.1 for more details).
A recording from a single electrode is shown together with three different behaviorally relevant epochs (rest, choice and reward periods; Figure 4B). We compared the performance of TRAKR with all other previously introduced methods on the task of trying to distinguish the neural time series patterns in these three epochs. It is important to note that we did not know a priori whether there is enough information encoded in the OFC recordings to allow for differentiating the three epochs. Thus, a high performing classifier can offer clues here as to what type of information is encoded in the brain.
We trained TRAKR on the neural time series corresponding to rest period from a particular trial, and used the other complete trials as test signals to obtain the error, E(t). The error signal was used as input to a classifier. We repeated this procedure for all trials in the dataset to obtain the averaged classification performance. We also calculated the Fast Fourier transform (FFT) of the signals and obtained the magnitude (power) in the α (0 − 12Hz), β (13 − 35Hz), and γ (36 − 80Hz) bands within the 3 epochs. We compared TRAKR against this FFT based classification too.
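A sketch of the FFT-based band-power features (the signal is sampled at 1 kHz after preprocessing; see Appendix B):

import numpy as np

def band_power(x, fs=1000.0):
    """Power in the alpha/beta/gamma bands used above."""
    bands = {'alpha': (0, 12), 'beta': (13, 35), 'gamma': (36, 80)}
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    return {name: power[(freqs >= lo) & (freqs <= hi)].sum()
            for name, (lo, hi) in bands.items()}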
Again, with the exception of MLP which performed on par with TRAKR, we found that TRAKR outperformed all the other methods (AUC = 94%; p < 0.001; Figure 4C). TRAKR was able to distinguish the three epochs with high accuracy based on the neural signal, showing that there is enough information in the OFC to differentiate these patterns. It is important to note that based on the performance of lower performing approaches, such as DTW or Euc, one might have (wrongly) concluded that the OFC lacks enough information to solve this task. Overall, with its high performance, TRAKR can be used as a reliable tool to differentiate time series patterns and generate hypotheses on the type of information represented in neural circuits.
We also investigated whether any of the methods could distinguish between match and mismatch trials based on the OFC signals (Figure 4D). For this purpose, we trained TRAKR on the neural time series corresponding to choice period from a particular trial, and used the other complete trials as test signals to obtain the error, E(t). We found that all methods performed at chance-level, indicating there is not enough information in the OFC recordings to differentiate the two. We also observed that the classification performance for the three epochs degrades over days (Figure 4E; blue & red solid lines), while that for match/mismatch trials consistently stays around chance-level (Figure 4E; blue & red dotted lines).
Lastly, we visualized different electrodes in the space spanned by the first three principal components of the reservoir’s activations (Appendix E). We fitted the reservoir to signal obtained from a particular electrode, froze the output weights, and projected other electrodes onto the first three principal components of reservoir activity. We found that electrodes trace out different paths in reservoir space. Thus, as a proof-of-concept, projections onto reservoir space can be used to visually inspect
the similarity of recordings from different electrodes, and identify differences that may be indicative of functionally meaningful sub-groupings that represent functionally coherent modules in the brain.
4 DISCUSSION
We have shown that TRAKR can distinguish time series patterns with high accuracy. TRAKR outperforms other approaches in classifying time-series data on a benchmark dataset, sequential MNIST, and in differentiating neural time series signals obtained from recordings in the macaque OFC. It performs on par with supervised neural networks (MLPs), while outperforming other types of echo state networks (twiESN) and commonly used distance measures such as Dynamic Time Warping (DTW). Meanwhile, we found that our approach is more robust to noise than other approaches, in particular supervised neural networks, and offers good performance in terms of training and inference time.
We found that TRAKR offers high accuracy at relatively lower training and inference times than other approaches with comparable accuracy, such as supervised multi-layer neural networks. While TRAKR relies on a classifier on top of the error traces to perform the classification task, other classifiers such as kNN may be used to maximize training and inference speed, instead of the SVM classifier employed here. All approaches were compared in the same environment, but all neural network-based approaches may equally benefit from further gains in training and inference speed by optimising the code for deployment on GPUs. For this purpose, TRAKR can readily be implemented in JAX, a high-performance computing framework (Bradbury et al. (2018)).
Meanwhile, other approaches based on echo-state networks, such as twiESN (Fawaz et al. (2019)), performed worse than TRAKR and showed higher train and inference times. This suggests our contribution is key to making reservoir networks a viable alternative to state-of-the-art approaches in time series classification.
While TRAKR and most other methods could distinguish three behaviorally relevant epochs based on the OFC signal, none of the methods were able to accurately distinguish match and mismatch trials. This indicates that there is enough information in the OFC to distinguish the three task periods, but not enough to differentiate match and mismatch trials. It is possible that receiving a better or worse reward than expected affected the neural signal in distinct/opposite ways, such that the effect was cancelled out on average. It is also possible that the difference in neural time-series patterns was only discernible if the reward was maximally different (much better or worse than expected). In the current task design, there were 4 different levels of reward (flavors) that the macaque associated with different pictures (subsubsection 2.5.1). The number of trials in which the obtained reward was maximally different from the expected reward was low and possibly not sufficient for accurate classification. Another possibility, supported by our results and corroborated by several studies (Stalnaker et al. (2018); McDannald et al. (2014); Takahashi et al. (2013); Kennerley et al. (2011)), is that OFC neural activity signals reward values but not reward prediction errors, which instead are mediated through the ventral tegmental area (VTA) in the midbrain.
We found that the classification performance decreased over recording sessions. This could mean that the difference between task epochs being classified decreased because of increased familiarity with the task. That is less likely, however, because the subject was well-trained prior to recordings. Instead, since the signal was recorded over a period of 35 days, the decrease in the classification performance could be a result of degrading signal quality, perhaps due to electrode impedance issues (Kozai et al. (2015a;b); Holson et al. (1998); Robinson & Camp (1991)).
5 CONCLUSION
There is a need for and strong interest in tools for the analysis of time-series data (Bhatnagar et al. (2021)). We show that TRAKR is a fast, accurate and robust tool for the classification of time-series patterns. Through its ease of use and low training and inference time, it is particularly suited for real-time applications where accurate decisions need to be made quickly and signal degradation or other artifacts necessitate frequent re-calibration, such as in clinical settings. TRAKR can also be used to distinguish neural time series patterns in the brain, shedding light on the information encoded in neural circuits and thus generating hypotheses for new experiments.
A TRAKR HYPERPARAMETERS
The recurrent weights Jij are weights from unit j to i. The recurrent weights are initially chosen independently and randomly from a Gaussian distribution with mean of 0 and variance given by g2/N . The input weights win are also chosen independently and randomly from the standard normal distribution.
An integration time constant τ = 1ms is used. We use gain g = 1.2 for all the networks.
The matrix P is not explicitly calculated but updated as follows:
$$P(t) = P(t-1) - \frac{P(t-1)\,\phi(t)\,\phi'(t)\,P(t-1)}{1 + \phi'(t)\,P(t-1)\,\phi(t)}$$
The learning rate is given by $\eta = \dfrac{1}{1 + \phi'(t)\,P(t)\,\phi(t)}$.
The number of units used in the reservoir is generally N = 30.
B MACAQUE TRIAL DETAILS & DATA PRE-PROCESSING
Each trial was approximately 6.5s long, including different behaviorally relevant epochs and cues. The macaque performed approximately 550 trials within each task session (mean ± sd: 562 ± 72). Of note, 80% of the trials were match trials within each task session.
ECoG data were acquired by a neural processing system (Ripple) at 30kHz and then resampled at 1kHz. The 128-channel data were first z-score normalized. Second-order Butterworth band-stop IIR filters were used to remove 60Hz line noise and harmonics from the signal. We also used second-order Savitzky-Golay filters of window length 99 to smooth the data and remove very high frequency juice pump artifacts (> 150Hz). For most of the analysis here, we used the average of the 128-channel time series as an input to TRAKR.
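A sketch of this preprocessing pipeline with SciPy; the width of the band-stop notches is our assumption, since the text specifies only the filter order and the targeted frequencies:

import numpy as np
from scipy.signal import butter, filtfilt, savgol_filter

def preprocess(ecog, fs=1000.0):
    # ecog: (n_samples, 128) array, already resampled to 1 kHz
    x = (ecog - ecog.mean(axis=0)) / ecog.std(axis=0)            # z-score per channel
    for f0 in (60, 120, 180):                                    # line noise and harmonics
        b, a = butter(2, [f0 - 2, f0 + 2], btype='bandstop', fs=fs)
        x = filtfilt(b, a, x, axis=0)
    x = savgol_filter(x, window_length=99, polyorder=2, axis=0)  # smooth pump artifacts
    return x.mean(axis=1)                                        # channel average fed to TRAKR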
C DETAILS ON SEQUENTIAL MNIST CLASSIFICATION
For measuring noise robustness, we added random independent Gaussian noise to the training digits (µ = 0 and varying standard deviation (σ)). The actual noise that was added (noise levels as depicted in Figure 3B) can be calculated as σ ∗ 255, with σ ∈ [0, 1]. The number 255 represents the maximal pixel value in the sequential digits.
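As a trivial sketch of this scaling (function name is ours):

import numpy as np

def add_noise(digit, sigma, rng=None):
    # digit: length-784 vector with pixel values in [0, 255];
    # the added noise has standard deviation sigma * 255
    rng = np.random.default_rng(0) if rng is None else rng
    return digit + rng.normal(0.0, sigma * 255.0, size=digit.shape)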
D CALCULATION OF TRAINING AND INFERENCE TIME
The calculation of training time for TRAKR includes the time taken to train one sequence through TRAKR along with the 10-fold cross-validation time using the SVM. For all the other methods, it is also the time taken to train using 10-fold cross-validation. For the MLP, training is done using Keras, which is optimized, whereas TRAKR has further room for optimization, e.g. by implementing it in JAX.
The calculation of inference time for TRAKR includes the time taken to feed one test sequence into TRAKR and making predictions on a test set using SVM. For all the other methods, the time includes making predictions on the same test set.
E VISUALIZATION OF RESERVOIR ACTIVATIONS
Single electrode recordings were projected into the space spanned by the first three principal components of reservoir activations. The four electrodes trace out different trajectories in reservoir space, suggesting they capture potentially different neural dynamics, as shown in the figure below.
[Figure: trajectories of electrodes 1, 13, 76 and 127 in the space spanned by the first three principal components (PC1, PC2, PC3) of reservoir activations.] | 1. What is the focus and contribution of the paper regarding reservoir computing?
2. What are the strengths of the proposed approach, particularly its novelty and potential advantages?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any concerns or questions regarding the paper's methodology, results, or conclusions? | Summary Of The Paper
Review | Summary Of The Paper
Here, the authors propose a reservoir-computing based framework for time-series applications. The proposed methodology involves training the output units of the echo state network to recapitulate, or autoencode, a test signal. Next, time-series are fed to the network and the error signal between the time-series and the network's response is used as input to an SVM that is used for classification. Effectively the error signal is used as a distance metric.
The authors report that the network performs well on sequential MNIST and decoding of neural data. Additionally, they argue that a key advantage of their approach is that it is computationally lightweight – both training and inference are fast.
Review
Strengths:
The manuscript is straightforward and well-written
The notion of using the error signal as the input to a classifier is new and potentially interesting
The classification results are possibly compelling
Major issues:
A central claim of the paper is that echo state networks are both performant and computationally efficient. Firstly, to properly contextualize the accuracy of their technique, the authors should compare TRAKR against more state-of-the-art methods. For instance, the authors should consider implementing some of the methods from the Fawaz review they cite. Moreover, while deep-learning methods may be more expensive to train, they can be used as a more up-to-date benchmark. Additionally, this could present the authors with an opportunity to compare the performance of TRAKR with deep learning methods as a function of the size of the training dataset (I suspect TRAKR might look more favorable here).
Another major claim is that TRAKR is more "lightweight" than ensemble or deep-learning methods. Deep learning methods, especially for large network sizes, are particularly data-hungry in the training phase. However, inference with deep networks can be extremely fast, to the point of being sufficient for real-time applications. If one were to compare the inference time of a deep network on a GPU vs TRAKR, it is possible that TRAKR is not significantly faster than a better-performing deep network. This must be clarified since computational efficiency is touted as a major feature of TRAKR.
Minor issues:
Figure 4: what do the error bars represent?
ICLR | Title
Structural Language Models for Any-Code Generation
Abstract
We address the problem of Any-Code-to-Code Generation (AnyC2C) – generating code given its surrounding code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information. We introduce a new approach to AnyC2C that leverages the strict syntax of programming languages to model a code snippet as a tree – structural language modeling (SLM). SLM estimates the probability of the program’s abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated in this task, our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online.
1 INTRODUCTION
Generating source code requires reasoning over an unbounded number of syntactic structures and potential symbols. Previous approaches have avoided this issue by limiting the generation problem: program synthesis approaches (Manna and Waldinger, 1971) are tailored to domain-specific languages (Gulwani, 2011), semantic parsing approaches focus on highly templated datasets (Ling et al., 2016; Shin et al., 2019) or SQL (Yu et al., 2018; Dong and Lapata, 2018), while other recent approaches generate code in general languages like Java and C#, but severely restrict the syntax, vocabulary or nature of the generated expressions (Murali et al., 2017; Brockschmidt et al., 2019a).
We introduce the task of Any-Code-to-Code Generation (AnyC2C) – generating source code in a general-purpose programming language without any restriction on its vocabulary or structure. Specifically, we focus on generating code in context: given a program P and some part of the program p, predict p from the rest of the program $P^- = P \setminus p$. The only restriction we place is that the target p must have a valid subtree within the program’s abstract syntax tree (AST). AnyC2C thus generalizes the restricted expression generation task of Brockschmidt et al. (2019a), where target code contains only primitive types and excludes user-defined functions. Figure 1 shows two such AnyC2C examples.
While a sequence-to-sequence (seq2seq) model with a copy mechanism works better than existing code generation approaches on AnyC2C (see Section 5), it ignores the structural information available from the code’s AST. We present a new approach that explicitly leverages the strict syntax of programming languages to model code snippets as trees – structural language modeling (SLM). SLM estimates the probability of the program’s AST by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. While prior work uses AST paths to read programs (Alon et al., 2019a), we generate code by producing the target AST node-by-node.
We evaluate SLM on Java AnyC2C benchmarks, where our model achieves a new state-of-the-art exact-match accuracy of 18.04% (previous SOTA: 16.93%). SLMs also outperform existing models on the restricted expression generation task of Brockschmidt et al. (2019a) in C# by a wide margin, 37.61% compared to 26.42%. Our ablation study reveals the importance of using AST paths for
both reading and generating code. Finally, we discuss the theoretical advantages of SLMs, and show how they generalize many previous structural approaches for code generation.
2 CODE GENERATION AS STRUCTURAL LANGUAGE MODELING
We model the task of Any-Code-to-Code Generation (AnyC2C) by computing the probability of a program Pr (P), similar to how a language model computes the probability of a natural language sentence. While language models typically assume a sequence as their input, our input is an abstract syntax tree AP . We thus introduce a structural language modeling approach (SLM) for code generation.
We first show a chain-rule decomposition of the tree’s probability Pr (AP) into a product of conditional node probabilities, and then describe our path-based model for computing the individual conditional probabilities. We explain how to construct a tree from local node predictions, and finally discuss how our approach differs from previous work on production-based tree generation.
Representing Code as a Tree A program P is a sequence of tokens that is unambiguously equivalent to an abstract syntax tree $A_P$, where each node represents an element in the language (e.g. conditions, loops, variable declarations) from a set T. The AST’s leaves (terminals) have an additional user-defined value v ∈ V. Nonterminal nodes can have a varying number of children nodes. Decomposing the Probability of a Tree Given a tree $A_P$, we first traverse the tree, depth-first [1], to induce an ordering over its nodes $a_0, \ldots, a_{|A_P|} \in A_P$. We can now decompose the probability of a tree $\Pr(A_P)$ using the chain rule, akin to the standard approach in language modeling:
$$\Pr(A_P) = \prod_{t} \Pr(a_t \mid a_{<t}) \quad (1)$$
where a<t are all the nodes that were traversed before at.
In AnyC2C, part of the tree (AP− ) is already observed. Therefore, we order the nodes of AP− before the nodes of the target p, and compute only the conditional probabilities over the nodes in p, essentially conditioning on the observed tree AP− . Representing Partial Trees via Paths How can we represent the partial tree composed of a<t when computing Pr (at|a<t)? In regular language modeling, the structure is linear, and a<t is a sequence. One way to represent a partial tree is to linearize it according to the traversal order (Xiao et al., 2016); however, this could create artificially long distances between the current node at and ancestor nodes (e.g., the root a0). Another option is to use only the path from the root node to at (Rabinovich et al., 2017), but this ignores a lot of contextual information (e.g., sibling nodes).
We follow Alon et al. (2018) and use the set of paths from every leaf to the current node to expand, as well as the path Rt originating from the root. We denote the (candidate) node at time t as at, its (given) parent, which is the currently expanded node, by π (at), and the set of all paths as St:
$$S_t = \{\,\ell \rightsquigarrow \pi(a_t) \mid \ell \in \mathrm{leaves}(a_{<t})\,\} \cup \{\,a_0 \rightsquigarrow \pi(a_t)\,\} \quad (2)$$

[1] Depth-first ordering is common practice in tree generation (Maddison and Tarlow, 2014; Raychev et al., 2016; Brockschmidt et al., 2019a), but our framework also allows for other orderings, in theory.
[Figure 2 caption fragment:] The model generates the next node (denoted by a question mark: ?) of path1, path2 and path3 using the root path R. Dashed lines denote AST parent-child relations; solid lines denote AST paths.
where $\ell \rightsquigarrow \pi(a_t)$ is the (only) path in the tree between a leaf $\ell$ and the current node to expand $\pi(a_t)$, and $R_t = a_0 \rightsquigarrow \pi(a_t)$ is the path from the root of the program to $\pi(a_t)$, which represents the current, relative position of $\pi(a_t)$ in the program (marked as R in Figure 2). Whereas prior work used whole paths (between two leaf nodes) to encode an AST (Alon et al., 2019b;a), our model observes partial paths (between a leaf and an intermediate node) and learns to extend them.
Figure 2 illustrates the traversal order of a subtree that represents the expression x > 1, as well as some of the paths used to compute the probability at each step. At each step, the probability of the next node is computed given the paths St from the root and every given leaf up to the current node to expand. Figure 2(d) shows how after the terminal node Name and its value x are given, path3 originating from this leaf is also used to compute the probability of the next nodes.
Our path-based approach generalizes previous approaches, such as parent feeding (Yin and Neubig, 2017), previous action encoding (Yin and Neubig, 2017), context nodes (Bielik et al., 2016), and some of the graph-edges of Brockschmidt et al. (2019a). See Section 8 for further discussion.
Generating Trees In sequence generation, the length of the generated sequence is controlled by generating a single EOS token to stop. When generating trees, we require a more sophisticated mechanism to control arity and depth. We augment AP in two ways to allow node-by-node generation. First, we add a special EOSnode node to every nonterminal to control for arity. Generating this node indicates that the parent node has no more children. Second, we decompose each terminal node nv into a sequence of terminal nodes Tv by splitting up the node’s value v into subtokens based on camel notation (Allamanis et al., 2015). For example, if v = toLowerCase, then Tv = to → lower → case → EOStok. We end each subtoken sequence with a special EOStok node to control for depth during generation. Figure 3 shows an example of both EOSnode and EOStok in action.
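A minimal sketch of the camel-notation subtokenization with the closing EOStok symbol (the regular expression is one reasonable choice, not necessarily the authors'):

import re

def split_subtokens(value):
    # 'toLowerCase' -> ['to', 'lower', 'case', '<EOStok>']
    parts = re.findall(r'[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])', value)
    return [p.lower() for p in parts] + ['<EOStok>']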
Node Trees vs. Production Trees While we predict a single node at each step, previous work (Iyer et al., 2018; Brockschmidt et al., 2019a) predicts a grammar production rule. This more direct grammatical representation decomposes the code in a way that often forces the model to predict with partial information. For instance, consider the expression str.Substring(3). The
model of Brockschmidt et al. (2019a) would first predict the rule Expr→Expr.Substring(Expr), and only then expand Expr→str and Expr→3; i.e., the model needs to predict the method name (Substring) before the invoking object (str). Further, the Substring method can get either one or two arguments, forcing the model to choose whether to use the one- or two-argument production rule in advance. Node generation, however, allows us to predict the presence of a function call and only then to predict its object, method name, and arguments, rather than predicting these a priori.
We note that there exist other approaches that generate an arbitrary number of child nodes with production rule-based models. For example, Rabinovich et al. (2017) used a “horizontal LSTM” to decide whether or not to generate another child; and Yin and Neubig (2018) presented a transition system with a “Reduce” action.
3 MODEL ARCHITECTURE
In the previous section, we described how we can generate code given the probabilities Pr (at|a<t), where a<t is represented by the set of partial AST paths St. Here, we present a neural model that estimates Pr (at|St). We first apply an LSTM-based path encoder to represent each path in St as a vector (Section 3.1). We then contextualize and aggregate the entire set into a single vector (Section 3.2). Finally, we predict the target node at by combining a limited output vocabulary with a syntactic copy mechanism (Section 3.3).
3.1 ENCODING AST PATHS
Given a partial AST path (node sequence n1, . . . , nk), our goal is to create a vector representation.
We first represent each node ni using embeddings. A subtoken node is represented by the index of its subtoken w in the embedding matrix Esubtoken; AST nodes are represented as a pair ni = (τ, κ) where τ is the node type, e.g. IfStatement, and κ is the node index among its sibling nodes. We represent node types using a learned embedding matrix Etype and the child indices using a learned matrix Eindex. The node’s vector representation is the concatenation of the type and index vectors.
$$e(n_i) = \begin{cases} E^{\mathrm{subtoken}}_{w} & n_i \text{ is the subtoken } w \\ \left[E^{\mathrm{type}}_{\tau};\, E^{\mathrm{index}}_{\kappa}\right] & n_i \text{ is the AST node } (\tau,\kappa) \end{cases} \quad (3)$$
We encode the entire path using a uni-directional LSTM stack, and take the final states [2]:

$$f(n_1, \ldots, n_k) = \mathrm{LSTM}(e(n_1), \ldots, e(n_k))$$
Given a set of partial paths S (omitting the iterator t for simplicity), we denote their encodings as H = { f (n1, . . . , nk) | (n1, . . . , nk) ∈ S}.
Efficient Computation When processing an entire tree, there are large overlaps between paths from different time steps. In particular, paths that originate from the same leaf share the same prefix. We therefore apply the LSTM on the prefix once, and cache the state across suffixes, speeding up both training and inference significantly. An example is shown in Figure 6 in the Appendix.
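A sketch of this prefix caching, assuming hypothetical placeholders for Section 3.1's components: a step function lstm_step(state, emb) -> state and an embedding function embed(node).

cache = {}  # maps a node-sequence prefix (as a tuple) to its LSTM state

def encode_path(path, embed, lstm_step, init_state):
    state, start = init_state, 0
    for k in range(len(path), 0, -1):        # reuse the longest cached prefix
        if tuple(path[:k]) in cache:
            state, start = cache[tuple(path[:k])], k
            break
    for j in range(start, len(path)):        # encode only the new suffix
        state = lstm_step(state, embed(path[j]))
        cache[tuple(path[:j + 1])] = state
    return state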
3.2 AGGREGATING MULTIPLE PATHS
Given the set of paths S leading up to the target node’s parent π(a), our goal is to represent S as a vector, in the context of predicting a. To do so, we introduce the aggregation function g (H, r, i). As its input, g takes the set of encoded paths H , the encoded root path r, and the child index i of the currently predicted child node a relatively to its parent.
We first contextualize the path encodings H using a transformer encoder (Vaswani et al., 2017) [3]. In parallel, we apply a non-linear transformation to the encoding of the root path r = f(R), in order to inform it that we wish to predict the i-th child of π(at):

$$Z = \mathrm{Transformer}(H) \qquad \tilde{r} = W_a \cdot \mathrm{ReLU}(C_i \cdot r) \quad (4)$$

[2] Replacing the LSTMs with transformers yielded similar results in preliminary experiments. [3] Since H is a set, we do not use positional embeddings.
In this formulation, the parameter matrix Ci is used when the child index is i, while the parameter matrix Wa is used for every instance.
We then compute attention over the set of contextualized path encodings Z with the index-informed root-path encoding r̃ as the query, pass the weighted average z̃ and the root-path encoding r̃ through another fully-connected layer and denote the resulting vector representation as h̃:
$$\alpha = \mathrm{softmax}(Z \cdot \tilde{r}) \qquad \tilde{z} = \sum_{j} \alpha_j \cdot Z_j \qquad \tilde{h} = g(H, r, i) = \mathrm{ReLU}(W_g[\tilde{z}; \tilde{r}]) \quad (5)$$
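In NumPy, the aggregation g(H, r, i) of Equations 4-5 reads as follows (a sketch; the transformer contextualization is assumed to have already produced Z):

import numpy as np

def aggregate(Z, r, C_i, W_a, W_g):
    # Z: (n_paths, d) contextualized path encodings; r: (d,) root-path encoding
    r_tilde = W_a @ np.maximum(C_i @ r, 0.0)           # index-informed query (Eq. 4)
    scores = Z @ r_tilde
    alpha = np.exp(scores - scores.max()); alpha /= alpha.sum()
    z_tilde = alpha @ Z                                # attention-weighted average
    return np.maximum(W_g @ np.concatenate([z_tilde, r_tilde]), 0.0)  # h-tilde (Eq. 5)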
3.3 PREDICTING WITH A SYNTACTIC COPY MECHANISM
We can now predict a from the representation h̃. If the target node’s parent π(a) is a nonterminal AST node, then a must be an AST node; otherwise, a is a subtoken.
Predicting AST Nodes We predict a using a softmax over the node type embeddings $E^{\mathrm{type}}$:

$$\Pr(a \mid S) = \mathrm{softmax}\big(E^{\mathrm{type}} \cdot \tilde{h}\big) \qquad (\pi(a) \text{ is a nonterminal}) \quad (6)$$
Predicting Subtokens Programs repeatedly refer to previously declared symbols, resulting in highly repetitive usage of identifiers. We therefore add a copy mechanism (Gu et al., 2016) to allow our model to predict either entire tokens or subtokens that already exist in the context. As we show in Section 6, copying greatly improves our model’s performance. For brevity, we describe how entire tokens are copied, and elaborate on the copying of subtokens in Appendix C. We score each leaf $\ell$ using a bilinear function ($W_c$) between its path’s encoding $H_\ell$ and $\tilde{h}$. At the same time, we score the token w, which is the token associated with $\ell$, from a limited vocabulary, using the inner product between its representation in the subtoken embedding matrix $E^{\mathrm{subtoken}}$ and $\tilde{h}$.
$$s_{\mathrm{copy}}(\ell) = H_\ell \cdot W_c \cdot \tilde{h} \qquad s_{\mathrm{gen}}(w) = E^{\mathrm{subtoken}}_{w} \cdot \tilde{h} \quad (7)$$
The scores scopy and sgen are then summed over different occurrences that correspond to the same symbol, and subsequently normalized via softmax. A key difference from previous work (Gu et al., 2016; Yin and Neubig, 2017) is that our copy mechanism uses the syntactic relation between the source and the target (AST path), rather than their sequential relation. Yin et al. (2019) proposed a graph-based copying mechanism that is capable of copying both tokens and subtrees from the context.
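A sketch of how copy and generation scores can be pooled per symbol before the softmax, as described above (array shapes and dictionary layout are our assumptions):

import numpy as np

def predict_subtoken(h, H_leaves, leaf_tokens, E_sub, vocab, W_c):
    # vocab: token -> row index into E_sub; leaf_tokens: token at each context leaf
    scores = {w: float(E_sub[i] @ h) for w, i in vocab.items()}   # s_gen (Eq. 7)
    for H_l, w in zip(H_leaves, leaf_tokens):                     # s_copy (Eq. 7)
        scores[w] = scores.get(w, 0.0) + float(H_l @ W_c @ h)     # sum per symbol
    tokens = list(scores)
    logits = np.array([scores[w] for w in tokens])
    probs = np.exp(logits - logits.max()); probs /= probs.sum()   # softmax
    return dict(zip(tokens, probs))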
4 EXPERIMENTAL SETUP
4.1 BENCHMARKS
Any-Code Generation (AnyC2C): Java We take the Java-small dataset of Alon et al. (2019a), which contains 11 GitHub projects, broken down to a single method per example, and split to train/dev/test by project to reduce code overlap. This dataset was found to contain the least code duplication by Allamanis (2018b). We create AnyC2C examples by selecting every expression larger than a single AST node as the target, using the remainder of the method as the context. We remove methods that contain the word “test” in their body or file name, and remove methods longer than 20 lines to avoid auto-generated code. To make the task even harder, we remove examples where the target subtree appear as-is in the context. This dataset contains 1.3M/10k/20k train/dev/test examples. The average number of targets for our model is 10.8; for the seq2seq baselines the average is 7.8 targets; if we modeled our targets using production rules, the average would have been 7.9.
Restricted Code Generation (RestrictC2C): C# Since the restricted expression generation (RestrictC2C) dataset of Brockschmidt et al. (2019a) is not publicly available, we consulted with Brockschmidt et al. directly and use the dataset of Allamanis et al. (2018a). This dataset contains 30 GitHub projects broken down to one method per example; we use the “unseen projects test” split. To create RestrictC2C examples, we use the code of Brockschmidt et al. (2019a) which filters out examples where the targets contain non-primitive types or user-defined functions. We extract the exact same types of limited expressions. This dataset contains 16k/8k/3k train/dev/test examples.
Detailed statistics of all datasets are provided in Appendix A.
Metrics We follow Brockschmidt et al. (2019a) and report exact match accuracy at 1 and 5. We also introduce a new tree@k metric, which counts a prediction as correct if the entire tree structure, ignoring leaf values, is identical to the tree of the ground truth. For example, the expressions x > 1 and y > 2 would not count as identical in exact match, but would count as “tree-match identical” because both express that an identifier is greater than an integer (NAME > INT). tree@k is interesting because it allows us to tease apart the model’s syntactic errors from incorrect subtoken predictions.
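A sketch of the tree-match test, assuming AST nodes expose a type and a list of children (leaf values are ignored by construction):

def tree_match(a, b):
    # x > 1 and y > 2 tree-match: both are NAME > INT
    if a.type != b.type or len(a.children) != len(b.children):
        return False
    return all(tree_match(ca, cb) for ca, cb in zip(a.children, b.children))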
4.2 BASELINES
We compare our model to a variety of original implementations and adaptations of existing models. We put significant effort to perform a fair comparison, including adding a copy mechanism to the NMT baselines and subtokenization as in our model. We adapt strong baselines from the literature to our task, even if they were designed to different tasks such as NL→code and code→NL. We re-train all the following baselines on the same datasets as our model.
Neural Machine Translation We use standard autoregressive sequence-to-sequence NMT baselines, in which we subtokenize the given code snippet, replace the target in the source with a special PRED symbol, and train the network to predict the target as a sequence of subtokens. Transformer-base+copy (Vaswani et al., 2017) uses the implementation of OpenNMT (Klein et al., 2017) with a copy mechanism (Gu et al., 2016). Transformer-small+copy uses dmodel = 256, dff = 1024, and 4 self attention heads per layer. BiLSTM→LSTM+copy is a d = 512 2-layer bidirectional LSTM encoder-decoder with attention (Luong et al., 2015). seq2tree+copy follows Aharoni and Goldberg (2017) and learns to generate the linearized, subtokenized target AST, with the same architecture as BiLSTM→LSTM+copy. Java-specific Baselines We used the original implementation of Iyer et al. (2018), and also their seq2prod baseline which is a re-implementation of Yin and Neubig (2017); these are designed for NL→code tasks, in which we feed the (sub)tokenized code context as the NL input. The model of Iyer et al. (2018) is designed to get additional input of the available variables and their types, for which we do not feed types. While in theory these models could also be applied to other languages, their implementation only supports Java.
C#-specific Baselines We compare our model to GNN→NAG using the original implementation of Brockschmidt et al. (2019a) which contains additional improvements using ideas from Cvitkovic et al. (2019). Bielik et al. (2016) kindly trained and tested their non-neural PHOG model on our C# dataset. We note that PHOG does not have an explicit copy mechanism, and considers only context to the left of the target code, while we consider also context to the right. Extending PHOG to use copying and considering more context could potentially improve its results.
In both Java and C#, we compare to code2seq (Alon et al., 2019a), which is a strong code→NL model and train it to generate the target code as a sequence of subtokens.
4.3 IMPLEMENTATION AND HYPERPARAMETER SETTINGS
Architecture We use embeddings of size 512, 2 layers of LSTMs with 256 units, and 4 transformer layers with 8 attention heads. We kept a subtoken vocabulary of size 1,000 to encourage the model to learn to copy; larger vocabularies did not show an improvement. These resulted in a very lightweight model of only 15M trainable parameters, which is close to Transformer-small (11.8M parameters). In comparison, the Transformer-base model had more than 45M trainable parameters.
Training We train the model end-to-end using the cross entropy objective and the Adam optimizer (Kingma and Ba, 2014), with an initial learning rate of $10^{-4}$ decayed by a factor of 0.95 every 20k steps. We bucket examples based on the number of predictions in the target subtree (nodes + subtokens + EOS symbols), and vary the batch size such that each batch contains about 512 targets. We train the model to prefer copying entire tokens rather than copying subtokens, if possible. We apply dropout of 0.25 in the Transformer layers, and a recurrent dropout of 0.5 in the LSTMs.
Inference We perform beam search with width of 5, and optimize for accuracy@1.
Ablation               acc@1   acc@5
Paths→Seq              12.95   18.52
Seq→Path               12.12   17.12
Paths→Paths            17.63   24.62
No Root Att            14.43   18.48
No Copy                10.72   15.70
SLM (original model)   18.04   24.83

Table 3: Ablations on AnyC2C in Java.
5 RESULTS
Any-Code Generation: Java Table 1 shows that our SLM achieves over 1.1% and 0.78% better acc@1 and acc@5 (respectively) over the two strongest baselines. The improvement over Transformer-small, which is closer to our model in the number of parameters, is even higher: over 3.8% and 3.4% in acc@1 and acc@5.
In general, the NMT baselines performed better than code-specific baselines. We hypothesize that the reason is that the NMT baselines are more generic, while the code-specific baselines are designed for different tasks: seq2prod is designed for tasks which involve generating code given natural language input; Iyer et al. (2018) additionally expects all member methods, variables, and their types as input; code2seq is designed to generate sequences rather than code, and does not have a copy mechanism. An approximation of code2seq with a copy mechanism is presented in Section 6.
Interestingly, the syntactically-informed seq2tree baseline achieved the highest tree@k among the baselines, while our model achieved higher acc@k and tree@k. This shows that leveraging the syntax can be beneficial in NMT baselines as well.
Restricted Code Generation (RestrictC2C): C# Table 2 shows the results for the RestrictC2C task in C#, where seq2seq+copy is the BiLSTM→LSTM+copy model which performed the best among the Java baselines. We first observe that the seq2seq+copy and the seq2tree+copy baselines outperform the GNN→NAG of Brockschmidt et al. (2019a), who introduced this task. Although Brockschmidt et al. (2019a) did compare to a seq2seq baseline, their GNN→NAG model could copy symbols from the context, but their baseline did not. To conduct a fair comparison with our SLM model, we equip the seq2seq and seq2tree baselines with a copy mechanism. Even though the seq2seq+copy and the seq2tree+copy baselines perform substantially better than the state of the art in this setting, our SLM model is able to go beyond, achieving significant gains over all models.
Examples for predictions made by our model and baselines can be found in Appendices D and E.
6 ABLATION STUDY
To understand the importance of the various components and design decisions in our model, we conducted an extensive ablation study on the AnyC2C task in Java.
Paths→Seq follows code2seq (Alon et al., 2019a) and separates the model to an encoder and a decoder, where the decoder generates the target subtree as a sequence of subtokens. The main difference from code2seq is that Paths→Seq includes a copy mechanism, as in our SLM model. Seq→Path follows Rabinovich et al. (2017) and separates the model to an encoder and a decoder (including a copy mechanism), where the encoder encodes the context as a sequence of subtokens using a BiLSTM, and the decoder uses only the root path and the index of the generated child.
Paths→Paths uses separate encoder and decoder which both are AST-path based. These encoder and decoder have different parameters, unlike our SLM model which models the context and the prediction using the same components.
No Root Attention uses max pooling instead of attention in aggregating multiple paths (see Section 3.2). The index-informed path from the root to the target’s parent (R in Figure 2) is concatenated with the result, instead of being used as attention query.
No Copy replaces copy mechanism with a much larger vocabulary (25k subtokens instead of 1k).
Table 3 shows the results of these alternatives. The significantly lower results of Paths→Seq and Seq→Path show the great benefit of using a unified structural language model, instead of separate encoder and decoder components. While this separation between encoders and decoders might be necessary in semantic parsing (Rabinovich et al., 2017; Dong and Lapata, 2018), NL→code (Yin and Neubig, 2017) and code→NL (Alon et al., 2019a; Fernandes et al., 2019) tasks because of the different modalities of the input and the output, this separation may hurt performance when the output is essentially a missing part of the input’s AST. As expected, Paths→Seq performs better than code2seq (Table 1), as it includes a copy mechanism and code2seq does not.
As SLM performs better than Paths→Paths, this ablation shows the importance of joint modeling of the context and the target subtree by parameter tying. Each of Paths→Paths and the seq2seq baselines (Table 1) performs better than Paths→Seq and Seq→Path; this shows the importance of using the same type of encoder and decoder for the AnyC2C task, rather than combining “an optimal encoder” with “an optimal decoder”. Paths→Paths performs better than the seq2seq baselines (Table 1), showing the advantage of using paths over textual sequences, even without parameter tying.
No Root Attention degrades acc@1 and acc@5 by 3.6% to 6.3%. This shows that dynamically attending to the context paths given the current root path is crucial, even though the root path is necessarily included as a sub-path of other paths in the set St which go through self-attention. Not using a copying mechanism results in a degradation of 7.3% to 9.1%. Programs use symbols and identifiers repetitively, thus the ability to copy symbols from the context is crucial for this task. For this reason, we included a copying mechanism in all NMT baselines in Section 4.
protected void checkRpcAdminAccess() throws IOException, AccessControlException {
  UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
  UserGroupInformation zkfcUgi = UserGroupInformation.getLoginUser();
  if (adminAcl.isUserAllowed(ugi) ||
      ugi.getShortUserName().equals( zkfcUgi.getShortUserName() )) {
    LOG.info("Allowed RPC access from " + ugi + " at " + Server.getRemoteAddress());
    return;
  }
  String msg = "Disallowed RPC access from " + ugi + " at " + Server.getRemoteAddress() +
      ". Not listed in " + DFSConfigKeys.DFS_ADMIN;
  LOG.warn(msg);
  throw new AccessControlException(msg);
}
True ref: zkfcUgi.getShortUserName()
SLM top-5 candidates:
zkfcUgi.getShortUserName() (11.7%) (exact match)
DFSConfigKeys.DFS (4.5%)
zkfcUgi.getUserName() (2.6%) (tree-match)
zkfcUgi.getUser() (1.7%) (tree-match)
zkfcUgi.getUserId() (0.6%) (tree-match)
Entirely copied tokens are marked in brown; unknown copied subtokens are marked in blue; in-vocabulary subtokens are marked in black; subtokens that are both in-vocabulary and copied from context are marked in purple.
Figure 5: A Java AnyC2C example from our test set along with the predictions of our model. The predictions of the baselines are shown in Figure 7 in Appendix D.
7 QUALITATIVE ANALYSIS
7.1 CORRECT TREE, INCORRECT IDENTIFIER ASSIGNMENT
As shown in Section 5, there is a gap between acc@k and tree@k across all models: when ignoring identifier values and considering only the tree structure, the accuracy is significantly higher. Our SLM model performs better than all baselines in acc@k (Table 1); further, it also shows greater potential for improvement, as its tree@k results are much higher than those of the baselines.
We focus on studying the cases where the tree structure was predicted correctly, but the model failed to generate the code exactly, including names. Figure 4 shows a representative example: the ground truth event.getTaskId() was predicted correctly only as the fifth candidate; nevertheless, all top-5 candidates are a “tree-match”, since all of them express a method invoked on an object without arguments, of the form NAME.NAME(). Generating the correct method name [get,task,id] is very difficult in this case, since neither getTaskId nor TaskId appears in the context and there is no apparent hint for them.
7.2 USEFULNESS OF COPY MECHANISM
As shown in Section 6, the ability to copy is crucial for the AnyC2C task, because of the repetitive use of identifiers and symbols in programs. Figure 5 shows a representative example for the necessity of the copy mechanism: generating the ground truth zkfcUgi.getShortUserName() is feasible only thanks to the copy mechanism, since zkfc is obviously an UNK subtoken which was not observed in the training data.
In this case, since both zkfcUgi and getShortUserName appear in the context, both were copied as entire tokens rather than generated from subtokens. This example also shows how the ability to copy entire tokens eases the generation process by reducing the number of target symbols (our SLM model is able to copy and combine single subtokens as well).
8 RELATED WORK
Generalizing Previous Approaches Our approach frames code generation as predicting the next node in all partial AST paths. This simple framing generalizes most previous work, without handcrafted special edges and actions:
• All models that use information about ancestor nodes only (Rabinovich et al., 2017; Maddison and Tarlow, 2014), as well as the “Parent Feeding” of Yin and Neubig (2017), are generalized by our model, since all paths that go into a node a_t pass through its parent, and the path from the root R_t (Figure 2) is used as the attention query.
• The “previous action encoding” of Yin and Neubig (2017) is also a special case of our approach, because S_t contains the paths starting from the previously expanded leaves of A_P into the currently expanded node π(a_t), such as path3 in Figure 2(e).
• The “context node” of PHOG (Bielik et al., 2016) is just one of the previously traversed leaf nodes in a_{<t}. Thus, our model not only conditions on this context node as well, but also takes into account the syntactic relation, i.e., the path, between the context and π(a_t). Moreover, while PHOG conditions on a single leaf, SLMs condition on every leaf in a_{<t}.
• Finally, Brockschmidt et al. (2019a) define special graph edges (e.g., “NextSib” and “Child”) to capture relations on the AST. Most of these relations can be expressed as partial AST paths.
Program Generation Learning to generate programs is one of the oldest problems in machine learning (Waldinger and Lee, 1969) and has been considered by some as the “holy grail of computer science” (Pnueli and Rosner, 1989; Gulwani et al., 2017). Typically, the task is to generate a program given some form of input or context, such as complete formal specifications (Green, 1981) or input-output examples (Gulwani, 2011; Devlin et al., 2017; Parisotto et al., 2017). While these approaches work well in some cases, they are bounded to DSLs, which prevents them from being applied to realistic, general-purpose code. Maddison and Tarlow (2014) and Amodio et al. (2017) generate supposedly general-purpose code in a modern programming language, but do not deal with the challenge of fitting the code to a given context. Murali et al. (2017) generate code conditioned on a set of APIs; they state that their approach is thus intrinsically limited to generating API-heavy programs and is unusable for general, logical programs lacking external calls. Further, their generated programs are in a “Java-like” language only. Yin and Neubig (2017), Iyer et al. (2018) and Rabinovich et al. (2017) generated general-purpose code as well, but for the different task of generating code given a natural language description. Yin et al. (2019) generated “any code” given an encoded edit that needs to be applied to a given code snippet.
Other work used datasets that are either small (Ling et al., 2016), containing highly aligned examples (Oda et al., 2015; Chen et al., 2018), limited-purpose languages like SQL (Yu et al., 2018), or general-purpose but containing eminently templated programs (Ling et al., 2016). Brockschmidt et al. (2019a) limit their model to generate only expressions of primitive types or arrays of these; use a closed vocabulary; and ignore expressions containing user-defined functions, because function names are hardcoded in their syntax production rules. In this paper, we lift these constraints and allow the generation of any general-purpose code, of all types and containing any names. Our work is also related to Habash (2004), who used structural n-grams over dependency trees for statistical machine translation (SMT).
9 CONCLUSION
We presented a novel approach for generating code given surrounding context: computing the probability of an AST using a structural language model. We show that our approach generalizes most previous work in this area, while reaching state-of-the-art performance on challenging benchmarks. We are eager to see future work advance SLMs further and apply them to other real-life coding applications, as well as other structured-data domains.
A DATA STATISTICS
Table 4 shows some statistics of our used datasets. In Java: for the validation set, we randomly sampled 10,000 examples from the raw dev set; for the test set, we randomly sampled 20,000 examples from the raw test set.
B CODE GENERATION PSEUDOCODE
Algorithm 1 shows the pseudocode for our depth-first, left-to-right code generation approach. We keep a stack (line 1) which is initialized with an initial node to expand. We loop while the stack is not empty (line 3), and pop (line 4) the next node to expand at each step. If the node to expand is a nonterminal, we predict a child node (line 7). If the node is a terminal holding a subtoken, we predict a subtoken from the vocabulary or copy one from the context (line 7). If the predicted child node, whether an AST node or a subtoken, can be further expanded (line 10), it is pushed back to the stack (line 12).
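To make the traversal concrete, below is a minimal Python sketch of this stack-based loop. The helpers predict_child, predict_subtoken, is_nonterminal, is_expandable, and is_eos are hypothetical stand-ins for the components of Section 3; this illustrates the control flow of Algorithm 1, not the authors' implementation.

```python
def generate_subtree(model, context, root):
    """Depth-first, left-to-right generation sketch (mirrors Algorithm 1)."""
    stack = [root]                 # initialize with the node to expand
    while stack:                   # loop while there are nodes to expand
        node = stack.pop()         # next node to expand
        if model.is_nonterminal(node):
            child = model.predict_child(context, node)     # may be EOS_node
        else:
            child = model.predict_subtoken(context, node)  # vocab or copy; may be EOS_tok
        if model.is_eos(child):
            continue               # parent is finished; do not revisit it
        node.children.append(child)  # assumes nodes hold a children list
        stack.append(node)         # parent may still receive more children
        if model.is_expandable(child):
            stack.append(child)    # expand the new child first (depth-first)
    return root
```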
C COPYING SINGLE SUBTOKENS
In addition to scoring the entire token to be copied, we also score each of the subtokens composing it according to its position. For each position i, we add a scoring function s_copy_i, such that s_copy_i(ℓ) produces the copying score of the i-th subtoken of ℓ, which we denote as ℓ_i:

$$s_w = s_{\mathrm{gen}}(w) + \sum_{\mathrm{val}(\ell)=w} s_{\mathrm{copy\_token}}(\ell) + \sum_i \sum_{\mathrm{val}(\ell_i)=w} s_{\mathrm{copy}_i}(\ell) \qquad (8)$$

$$\Pr(a \mid S) = \mathrm{softmax}(s) \qquad (9)$$

where s_copy_token is the scoring function for copying the entire token, described in Section 3.3.
For example, a token such as getX is scored as a whole using s_copy_token; each of its subtokens, get and X, is scored using s_copy_1 and s_copy_2, respectively. That is, the model can either copy the entire token or copy only some of its subtokens. This ability is especially useful in generating a name like setX, where getX appears in the context and X is an unknown, user-defined subtoken; the model learns to generate set from the vocabulary and copy only the subtoken X.
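As a rough illustration of Equations (8)-(9), the following Python sketch sums generation, whole-token copy, and positional subtoken copy scores before the final softmax. The leaf objects and scoring callables are hypothetical stand-ins for the learned functions described above.

```python
import numpy as np

def combined_scores(s_gen, leaves, s_copy_token, s_copy_sub):
    """Sum generation and copy scores per Equations (8)-(9).

    s_gen:        dict mapping vocabulary word w -> generation score
    leaves:       context leaves; each has .value (token string) and
                  .subtokens (list of subtoken strings)  [assumed structure]
    s_copy_token: callable leaf -> score for copying the entire token
    s_copy_sub:   callable (leaf, i) -> score for copying the i-th subtoken
    """
    scores = dict(s_gen)
    for leaf in leaves:
        # copying the whole token contributes to that token's score
        scores[leaf.value] = scores.get(leaf.value, 0.0) + s_copy_token(leaf)
        # each subtoken position contributes to that subtoken's score
        for i, sub in enumerate(leaf.subtokens):
            scores[sub] = scores.get(sub, 0.0) + s_copy_sub(leaf, i)
    # normalize via softmax (Equation 9)
    words = list(scores)
    logits = np.array([scores[w] for w in words])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return dict(zip(words, probs))
```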
D JAVA EXAMPLES
Figures 5-12 contain examples from our test set for the AnyC2C task in Java, along with the predictions of our model and some of the baselines. The highlighted expressions are the true references that should be generated. Indentation and line breaks may have been altered for typesetting reasons.
E C# EXAMPLES
Figures 13-20 contain examples from our test set for the RestrictC2C task in C#, along with the predictions of our model and some of the baselines. The highlighted expressions are the true references that should be generated. Indentation and line breaks may have been altered for typesetting reasons.

Reviewer questions:
1. What is the main contribution of the paper in the field of code generation?
2. What are the limitations of the proposed approach, particularly regarding its applicability to real-world scenarios?
3. Do you have any concerns about the evaluation metrics used in the paper?
4. What are the potential ways to improve the performance of the proposed model?

Review
The paper proposes a model to address the Any-Code Generation (AnyGen) task, which is basically to fill in missing code in a given program. The model makes use of the partial Abstract Syntax Tree (AST) as input. The model learns representations for partial AST paths and uses the learnt representations to generate AST nodes at masked steps. The conducted experiments show that using AST paths from the root and leaves is good for AST node generation, but whether those inputs are robust and sufficient should be further explored.
There are some restrictions to the method; for example, the input is only a single function, and the missing expression is not that complex. Nevertheless, this work presents a novel method towards code generation. The paper also introduces a new metric to evaluate the prediction accuracy for generated expressions. Writing is clear. Evaluation is fairly comprehensive.
Questions:
1. Did the author test the method without the camel notation assumption, i.e., when the data contains non-camel notation or mixed notations?
2. In the Restrict Code Generation test, it seems that the author filters out non-primitive types and user-defined functions. Therefore, does the experiment on the Java-small dataset fully show the proposed model’s strength?
3. Can the author explain why the SLM model fails on Figure 6? Is it because of dividing tokens into subtokens?
4. How big is the token vocabulary? How does the vocab size affect the performance?
ICLR | Title
Structural Language Models for Any-Code Generation
Abstract
We address the problem of Any-Code-to-Code Generation (AnyC2C) – generating code given its surrounding code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information. We introduce a new approach to AnyC2C that leverages the strict syntax of programming languages to model a code snippet as a tree – structural language modeling (SLM). SLM estimates the probability of the program’s abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated in this task, our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online.
1 INTRODUCTION
Generating source code requires reasoning over an unbounded number of syntactic structures and potential symbols. Previous approaches have avoided this issue by limiting the generation problem: program synthesis approaches (Manna and Waldinger, 1971) are tailored to domain-specific languages (Gulwani, 2011), semantic parsing approaches focus on highly templated datasets (Ling et al., 2016; Shin et al., 2019) or SQL (Yu et al., 2018; Dong and Lapata, 2018), while other recent approaches generate code in general languages like Java and C# but severely restrict the syntax, vocabulary, or nature of the generated expressions (Murali et al., 2017; Brockschmidt et al., 2019a).
We introduce the task of Any-Code-to-Code Generation (AnyC2C) – generating source code in a general-purpose programming language without any restriction on its vocabulary or structure. Specifically, we focus on generating code in context: given a program P and some part of the program p, predict p from the rest of the program P⁻ = P \ p. The only restriction we place is that the target p must have a valid subtree within the program’s abstract syntax tree (AST). AnyC2C thus generalizes the restricted expression generation task of Brockschmidt et al. (2019a), where target code contains only primitive types and excludes user-defined functions. Figure 1 shows two such AnyC2C examples.
While a sequence-to-sequence (seq2seq) model with a copy mechanism works better than existing code generation approaches on AnyC2C (see Section 5), it ignores the structural information available from the code’s AST. We present a new approach that explicitly leverages the strict syntax of programming languages to model code snippets as trees – structural language modeling (SLM). SLM estimates the probability of the program’s AST by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. While prior work uses AST paths to read programs (Alon et al., 2019a), we generate code by producing the target AST node-by-node.
We evaluate SLM on Java AnyC2C benchmarks, where our model achieves a new state-of-the-art exact-match accuracy of 18.04% (previous SOTA: 16.93%). SLMs also outperform existing models on the restricted expression generation task of Brockschmidt et al. (2019a) in C# by a wide margin, 37.61% compared to 26.42%. Our ablation study reveals the importance of using AST paths for
both reading and generating code. Finally, we discuss the theoretical advantages of SLMs, and show how they generalize many previous structural approaches for code generation.
2 CODE GENERATION AS STRUCTURAL LANGUAGE MODELING
We model the task of Any-Code-to-Code Generation (AnyC2C) by computing the probability of a program Pr(P), similar to how a language model computes the probability of a natural language sentence. While language models typically assume a sequence as their input, our input is an abstract syntax tree A_P. We thus introduce a structural language modeling approach (SLM) for code generation.
We first show a chain-rule decomposition of the tree’s probability Pr(A_P) into a product of conditional node probabilities, and then describe our path-based model for computing the individual conditional probabilities. We explain how to construct a tree from local node predictions, and finally discuss how our approach differs from previous work on production-based tree generation.
Representing Code as a Tree. A program P is a sequence of tokens that is unambiguously equivalent to an abstract syntax tree A_P, where each node represents an element in the language (e.g. conditions, loops, variable declarations) from a set T. The AST’s leaves (terminals) have an additional user-defined value v ∈ V. Nonterminal nodes can have a varying number of child nodes.

Decomposing the Probability of a Tree. Given a tree A_P, we first traverse the tree depth-first¹ to induce an ordering over its nodes a_0, ..., a_{|A_P|} ∈ A_P. We can now decompose the probability of a tree Pr(A_P) using the chain rule, akin to the standard approach in language modeling:
$$\Pr(A_P) = \prod_t \Pr(a_t \mid a_{<t}) \qquad (1)$$
where a_{<t} are all the nodes that were traversed before a_t.
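A minimal Python sketch of this decomposition, assuming a hypothetical model.cond_prob that returns Pr(a_t | a_{<t}): the tree's log-probability is simply the sum of per-node conditional log-probabilities over the target nodes.

```python
import math

def tree_log_prob(model, nodes_in_dfs_order, observed_prefix_len):
    """Chain-rule decomposition of Equation (1).

    Nodes of the observed context come first in the ordering; only the
    target nodes contribute conditional log-probabilities."""
    log_prob = 0.0
    for t in range(observed_prefix_len, len(nodes_in_dfs_order)):
        a_t = nodes_in_dfs_order[t]
        a_lt = nodes_in_dfs_order[:t]  # a_{<t}
        log_prob += math.log(model.cond_prob(a_t, a_lt))
    return log_prob
```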
In AnyC2C, part of the tree (A_{P⁻}) is already observed. Therefore, we order the nodes of A_{P⁻} before the nodes of the target p, and compute only the conditional probabilities over the nodes in p, essentially conditioning on the observed tree A_{P⁻}.

Representing Partial Trees via Paths. How can we represent the partial tree composed of a_{<t} when computing Pr(a_t | a_{<t})? In regular language modeling, the structure is linear, and a_{<t} is a sequence. One way to represent a partial tree is to linearize it according to the traversal order (Xiao et al., 2016); however, this could create artificially long distances between the current node a_t and ancestor nodes (e.g., the root a_0). Another option is to use only the path from the root node to a_t (Rabinovich et al., 2017), but this ignores a lot of contextual information (e.g., sibling nodes).
We follow Alon et al. (2018) and use the set of paths from every leaf to the current node to expand, as well as the path R_t originating from the root. We denote the (candidate) node at time t as a_t, its (given) parent, which is the currently expanded node, by π(a_t), and the set of all paths as S_t:
$$S_t = \{\ell \rightsquigarrow \pi(a_t) \mid \ell \in \mathrm{leaves}(a_{<t})\} \cup \{a_0 \rightsquigarrow \pi(a_t)\} \qquad (2)$$

¹Depth-first ordering is common practice in tree generation (Maddison and Tarlow, 2014; Raychev et al., 2016; Brockschmidt et al., 2019a), but our framework also allows for other orderings, in theory.
Figure 2 (caption): the model generates the next node (denoted by a question mark) of path1, path2 and path3 using the root path R. Dashed lines denote AST parent-child relations; solid lines denote AST paths.
where ℓ ⇝ π(a_t) is the (only) path in the tree between a leaf ℓ and the current node to expand π(a_t), and R_t = a_0 ⇝ π(a_t) is the path from the root of the program to π(a_t), which represents the current, relative position of π(a_t) in the program (marked as R in Figure 2). Whereas prior work used whole paths (between two leaf nodes) to encode an AST (Alon et al., 2019b;a), our model observes partial paths (between a leaf and an intermediate node) and learns to extend them.
Figure 2 illustrates the traversal order of a subtree that represents the expression x > 1, as well as some of the paths used to compute the probability at each step. At each step, the probability of the next node is computed given the paths S_t from the root and every given leaf up to the current node to expand. Figure 2(d) shows how, after the terminal node Name and its value x are given, path3 originating from this leaf is also used to compute the probability of the next nodes.
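The path set of Equation (2) can be computed with standard tree traversal. The following Python sketch, assuming each node stores a .parent pointer, builds the unique leaf-to-node paths through the lowest common ancestor, plus the root path.

```python
def partial_paths(leaves, target_parent, root):
    """Build the path set S_t of Equation (2): one path from every
    generated leaf to the node being expanded, plus the root path R_t."""
    paths = [tree_path(leaf, target_parent) for leaf in leaves]
    paths.append(tree_path(root, target_parent))
    return paths

def tree_path(src, dst):
    """Unique path between two tree nodes: up from src to the lowest
    common ancestor, then down to dst."""
    src_anc = ancestors(src)                 # src, parent(src), ..., root
    dst_anc = ancestors(dst)
    common = next(n for n in src_anc if n in set(dst_anc))
    up = src_anc[:src_anc.index(common) + 1]
    down = list(reversed(dst_anc[:dst_anc.index(common)]))
    return up + down

def ancestors(node):
    chain = [node]
    while chain[-1].parent is not None:      # assumes a .parent pointer
        chain.append(chain[-1].parent)
    return chain
```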
Our path-based approach generalizes previous approaches, such as parent feeding (Yin and Neubig, 2017), previous action encoding (Yin and Neubig, 2017), context nodes (Bielik et al., 2016), and some of the graph-edges of Brockschmidt et al. (2019a). See Section 8 for further discussion.
Generating Trees. In sequence generation, the length of the generated sequence is controlled by generating a single EOS token to stop. When generating trees, we require a more sophisticated mechanism to control arity and depth. We augment A_P in two ways to allow node-by-node generation. First, we add a special EOS_node node to every nonterminal to control for arity. Generating this node indicates that the parent node has no more children. Second, we decompose each terminal node n_v into a sequence of terminal nodes T_v by splitting up the node’s value v into subtokens based on camel notation (Allamanis et al., 2015). For example, if v = toLowerCase, then T_v = to → lower → case → EOS_tok. We end each subtoken sequence with a special EOS_tok node to control for depth during generation. Figure 3 shows an example of both EOS_node and EOS_tok in action.
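For illustration, one possible Python rendering of this subtokenization step follows; the regular expression is a reasonable choice for camel-notation splitting, not necessarily the authors' exact rule.

```python
import re

EOS_TOK = "<EOS_tok>"

def subtoken_sequence(value):
    """Split a terminal value on camel notation and append EOS_tok,
    e.g. 'toLowerCase' -> ['to', 'lower', 'case', '<EOS_tok>']."""
    parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", value)
    return [p.lower() for p in parts] + [EOS_TOK]

# subtoken_sequence("getX") -> ['get', 'x', '<EOS_tok>']
```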
Node Trees vs. Production Trees. While we predict a single node at each step, previous work (Iyer et al., 2018; Brockschmidt et al., 2019a) predicts a grammar production rule. This more direct grammatical representation decomposes the code in a way that often forces the model to predict with partial information. For instance, consider the expression str.Substring(3). The
model of Brockschmidt et al. (2019a) would first predict the rule Expr→Expr.Substring(Expr), and only then expand Expr→str and Expr→3; i.e., the model needs to predict the method name (Substring) before the invoking object (str). Further, the Substring method can take either one or two arguments, forcing the model to choose whether to use the one- or two-argument production rule in advance. Node generation, however, allows us to predict the presence of a function call and only then to predict its object, method name, and arguments, rather than predicting these a priori.
We note that there exist other approaches that generate an arbitrary number of child nodes with production rule-based models. For example, Rabinovich et al. (2017) used a “horizontal LSTM” to decide whether or not to generate another child; and Yin and Neubig (2018) presented a transition system with a “Reduce” action.
3 MODEL ARCHITECTURE
In the previous section, we described how we can generate code given the probabilities Pr(a_t | a_{<t}), where a_{<t} is represented by the set of partial AST paths S_t. Here, we present a neural model that estimates Pr(a_t | S_t). We first apply an LSTM-based path encoder to represent each path in S_t as a vector (Section 3.1). We then contextualize and aggregate the entire set into a single vector (Section 3.2). Finally, we predict the target node a_t by combining a limited output vocabulary with a syntactic copy mechanism (Section 3.3).
3.1 ENCODING AST PATHS
Given a partial AST path (node sequence n_1, ..., n_k), our goal is to create a vector representation.
We first represent each node n_i using embeddings. A subtoken node is represented by the index of its subtoken w in the embedding matrix E^subtoken; AST nodes are represented as a pair n_i = (τ, κ), where τ is the node type, e.g. IfStatement, and κ is the node index among its sibling nodes. We represent node types using a learned embedding matrix E^type and the child indices using a learned matrix E^index. The node’s vector representation is the concatenation of the type and index vectors.

$$e(n_i) = \begin{cases} E^{\mathrm{subtoken}}_{w} & n_i \text{ is the subtoken } w \\ \left[E^{\mathrm{type}}_{\tau}; E^{\mathrm{index}}_{\kappa}\right] & n_i \text{ is the AST node } (\tau, \kappa) \end{cases} \qquad (3)$$
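A PyTorch sketch of Equation (3) might look as follows; the split of the embedding width between the type and index vectors is our assumption, as the paper only specifies that the two are concatenated, and the node attributes (.is_subtoken, .subtoken_id, .type_id, .child_index, assumed to be LongTensors) are hypothetical.

```python
import torch
import torch.nn as nn

class NodeEmbedder(nn.Module):
    """Embeds path nodes per Equation (3); dimensions are illustrative."""
    def __init__(self, n_subtokens, n_types, n_indices, d=512):
        super().__init__()
        self.subtoken = nn.Embedding(n_subtokens, d)      # E^subtoken
        self.type = nn.Embedding(n_types, d // 2)         # E^type
        self.index = nn.Embedding(n_indices, d - d // 2)  # E^index

    def forward(self, node):
        if node.is_subtoken:
            return self.subtoken(node.subtoken_id)
        # AST node (tau, kappa): concatenate type and child-index vectors
        return torch.cat([self.type(node.type_id),
                          self.index(node.child_index)])
```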
We encode the entire path using a uni-directional LSTM stack, and take the final states:²

$$f(n_1, \ldots, n_k) = \mathrm{LSTM}(e(n_1), \ldots, e(n_k))$$

Given a set of partial paths S (omitting the iterator t for simplicity), we denote their encodings as $H = \{f(n_1, \ldots, n_k) \mid (n_1, \ldots, n_k) \in S\}$.
Efficient Computation. When processing an entire tree, there are large overlaps between paths from different time steps. In particular, paths that originate from the same leaf share the same prefix. We therefore apply the LSTM on the prefix once, and cache the state across suffixes, speeding up both training and inference significantly. An example is shown in Figure 6 in the Appendix.
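A simplified Python sketch of this prefix caching, assuming paths are given as (node_id, embedding) pairs and the encoder follows the torch.nn.LSTM interface; the memoization policy is illustrative, not the authors' exact scheme.

```python
import torch
import torch.nn as nn

# e.g., lstm = nn.LSTM(input_size=512, hidden_size=256, num_layers=2)

def encode_path_cached(lstm, cache, path):
    """path: list of (node_id, embedding) pairs. Reuses cached LSTM
    states of shared prefixes, since paths from one leaf overlap."""
    state, start = None, 0
    # find the longest already-encoded prefix
    for i in range(len(path), 0, -1):
        key = tuple(nid for nid, _ in path[:i])
        if key in cache:
            state, start = cache[key], i
            break
    # run the LSTM only on the remaining suffix, memoizing every prefix
    for i in range(start, len(path)):
        _, emb = path[i]
        _, state = lstm(emb.view(1, 1, -1), state)
        cache[tuple(nid for nid, _ in path[:i + 1])] = state
    h, _ = state
    return h[-1, 0]  # final hidden state of the top LSTM layer
```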
3.2 AGGREGATING MULTIPLE PATHS
Given the set of paths S leading up to the target node’s parent π(a), our goal is to represent S as a vector in the context of predicting a. To do so, we introduce the aggregation function g(H, r, i). As its input, g takes the set of encoded paths H, the encoded root path r, and the child index i of the currently predicted child node a relative to its parent.
We first contextualize the path encodings H using a transformer encoder (Vaswani et al., 2017).³ In parallel, we apply a non-linear transformation to the encoding of the root path r = f(R), in order to inform it that we wish to predict the i-th child of π(a_t):

$$Z = \mathrm{Transformer}(H) \qquad \tilde{r} = W_a \cdot \mathrm{ReLU}(C_i \cdot r) \qquad (4)$$

²Replacing the LSTMs with transformers yielded similar results in preliminary experiments.
³Since H is a set, we do not use positional embeddings.

In this formulation, the parameter matrix C_i is used when the child index is i, while the parameter matrix W_a is used for every instance.
We then compute attention over the set of contextualized path encodings Z with the index-informed root-path encoding r̃ as the query, pass the weighted average z̃ and the root-path encoding r̃ through another fully-connected layer, and denote the resulting vector representation as h̃:

$$\alpha = \mathrm{softmax}(Z \cdot \tilde{r}) \qquad \tilde{z} = \sum_j \alpha_j \cdot Z_j \qquad \tilde{h} = g(H, r, i) = \mathrm{ReLU}(W_g[\tilde{z}; \tilde{r}]) \qquad (5)$$
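A PyTorch sketch of Equations (4)-(5); the dimensions, initialization, and per-index parameter tensor C are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PathAggregator(nn.Module):
    """Aggregates path encodings per Equations (4)-(5); a sketch."""
    def __init__(self, d, max_children, n_layers=4, n_heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=n_heads)
        self.transformer = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.C = nn.Parameter(torch.randn(max_children, d, d))  # C_i per index
        self.W_a = nn.Linear(d, d, bias=False)
        self.W_g = nn.Linear(2 * d, d)

    def forward(self, H, r, i):
        # H: (num_paths, d) encoded paths; r: (d,) root-path encoding
        Z = self.transformer(H.unsqueeze(1)).squeeze(1)          # contextualize
        r_tilde = self.W_a(F.relu(self.C[i] @ r))                # index-informed query
        alpha = F.softmax(Z @ r_tilde, dim=0)                    # attention weights
        z_tilde = (alpha.unsqueeze(1) * Z).sum(dim=0)            # weighted average
        return F.relu(self.W_g(torch.cat([z_tilde, r_tilde])))   # h_tilde
```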
3.3 PREDICTING WITH A SYNTACTIC COPY MECHANISM
We can now predict a from the representation h̃. If the target node’s parent π(a) is a nonterminal AST node, then a must be an AST node; otherwise, a is a subtoken.
Predicting AST Nodes. We predict a using a softmax over the node type embeddings E^type:

$$\Pr(a \mid S) = \mathrm{softmax}\left(E^{\mathrm{type}} \cdot \tilde{h}\right) \qquad (\pi(a)\ \text{is a nonterminal}) \qquad (6)$$
Predicting Subtokens. Programs repeatedly refer to previously declared symbols, resulting in highly repetitive usage of identifiers. We therefore add a copy mechanism (Gu et al., 2016) to allow our model to predict either entire tokens or subtokens that already exist in the context. As we show in Section 6, copying greatly improves our model’s performance. For brevity, we describe how entire tokens are copied, and elaborate on the copying of subtokens in Appendix C. We score each leaf ℓ using a bilinear function (W_c) between its path’s encoding H_ℓ and h̃. At the same time, we score the token w, which is the token associated with ℓ, from a limited vocabulary using the inner product between its representation in the subtoken embedding matrix E^subtoken and h̃.

$$s_{\mathrm{copy}}(\ell) = H_\ell \cdot W_c \cdot \tilde{h} \qquad s_{\mathrm{gen}}(w) = E^{\mathrm{subtoken}}_w \cdot \tilde{h} \qquad (7)$$
The scores s_copy and s_gen are then summed over the different occurrences that correspond to the same symbol, and subsequently normalized via softmax. A key difference from previous work (Gu et al., 2016; Yin and Neubig, 2017) is that our copy mechanism uses the syntactic relation between the source and the target (the AST path), rather than their sequential relation. Yin et al. (2019) proposed a graph-based copying mechanism that is capable of copying both tokens and subtrees from the context.
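A rough PyTorch sketch of Equation (7) together with the occurrence-summing and softmax described above; the inputs (leaf encodings, leaf token strings, vocabulary list) are hypothetical stand-ins for the model's tensors.

```python
import torch
import torch.nn.functional as F

def subtoken_distribution(h_tilde, E_subtoken, H_leaves, W_c, leaf_tokens, vocab):
    """Equation (7) plus occurrence summing and softmax (a sketch).

    h_tilde:     (d,) aggregated state from Section 3.2
    E_subtoken:  (V, d) subtoken embedding matrix; vocab lists its V words
    H_leaves:    (L, d) path encodings of the context leaves
    W_c:         (d, d) bilinear copy parameters
    leaf_tokens: list of L token strings associated with the leaves
    """
    s_gen = E_subtoken @ h_tilde           # generation score per vocab word
    s_copy = H_leaves @ (W_c @ h_tilde)    # copy score per context leaf
    scores = {w: s_gen[j].item() for j, w in enumerate(vocab)}
    for k, tok in enumerate(leaf_tokens):  # sum scores of the same symbol
        scores[tok] = scores.get(tok, 0.0) + s_copy[k].item()
    words = list(scores)
    probs = F.softmax(torch.tensor([scores[w] for w in words]), dim=0)
    return dict(zip(words, probs.tolist()))
```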
4 EXPERIMENTAL SETUP
4.1 BENCHMARKS
Any-Code Generation (AnyC2C): Java. We take the Java-small dataset of Alon et al. (2019a), which contains 11 GitHub projects, broken down into a single method per example, and split into train/dev/test by project to reduce code overlap. This dataset was found to contain the least code duplication by Allamanis (2018b). We create AnyC2C examples by selecting every expression larger than a single AST node as the target, using the remainder of the method as the context. We remove methods that contain the word “test” in their body or file name, and remove methods longer than 20 lines to avoid auto-generated code. To make the task even harder, we remove examples where the target subtree appears as-is in the context. This dataset contains 1.3M/10k/20k train/dev/test examples. The average number of targets for our model is 10.8; for the seq2seq baselines the average is 7.8 targets; if we modeled our targets using production rules, the average would have been 7.9.
Restrict Code Generation (RestrictC2C): C#. Since the restricted expression generation (RestrictC2C) dataset of Brockschmidt et al. (2019a) is not publicly available, we consulted with Brockschmidt et al. directly and use the dataset of Allamanis et al. (2018a). This dataset contains 30 GitHub projects, broken down into one method per example; we use the “unseen projects test” split. To create RestrictC2C examples, we use the code of Brockschmidt et al. (2019a), which filters out examples where the targets contain non-primitive types or user-defined functions. We extract the exact same types of limited expressions. This dataset contains 16k/8k/3k train/dev/test examples.
Detailed statistics of all datasets are provided in Appendix A.
Metrics. We follow Brockschmidt et al. (2019a) and report exact match accuracy at 1 and 5. We also introduce a new tree@k metric, which counts a prediction as correct if the entire tree structure, ignoring leaf values, is identical to the tree of the ground truth. For example, the expressions x > 1 and y > 2 would not count as identical in exact match, but would count as “tree-match identical” because both express that an identifier is greater than an integer (NAME > INT). tree@k is interesting because it allows us to tease apart the model’s syntactic errors from incorrect subtoken predictions.
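The tree@k check can be implemented as a recursive structural comparison that ignores leaf values; the Python sketch below assumes nodes expose .type and .children attributes, which are illustrative names.

```python
def tree_match(pred_ast, gold_ast):
    """tree@k helper: compare tree structure and node types while
    ignoring leaf values, so `x > 1` tree-matches `y > 2`."""
    if pred_ast.type != gold_ast.type:
        return False
    if len(pred_ast.children) != len(gold_ast.children):
        return False
    return all(tree_match(p, g)
               for p, g in zip(pred_ast.children, gold_ast.children))

def tree_at_k(beam_predictions, gold_ast, k):
    return any(tree_match(b, gold_ast) for b in beam_predictions[:k])
```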
4.2 BASELINES
We compare our model to a variety of original implementations and adaptations of existing models. We put significant effort into performing a fair comparison, including adding a copy mechanism to the NMT baselines and subtokenization as in our model. We adapt strong baselines from the literature to our task, even if they were designed for different tasks such as NL→code and code→NL. We re-train all the following baselines on the same datasets as our model.
Neural Machine Translation. We use standard autoregressive sequence-to-sequence NMT baselines, in which we subtokenize the given code snippet, replace the target in the source with a special PRED symbol, and train the network to predict the target as a sequence of subtokens. Transformer-base+copy (Vaswani et al., 2017) uses the implementation of OpenNMT (Klein et al., 2017) with a copy mechanism (Gu et al., 2016). Transformer-small+copy uses d_model = 256, d_ff = 1024, and 4 self-attention heads per layer. BiLSTM→LSTM+copy is a d = 512 2-layer bidirectional LSTM encoder-decoder with attention (Luong et al., 2015). seq2tree+copy follows Aharoni and Goldberg (2017) and learns to generate the linearized, subtokenized target AST, with the same architecture as BiLSTM→LSTM+copy.

Java-specific Baselines. We used the original implementation of Iyer et al. (2018), and also their seq2prod baseline, which is a re-implementation of Yin and Neubig (2017); these are designed for NL→code tasks, in which we feed the (sub)tokenized code context as the NL input. The model of Iyer et al. (2018) is designed to receive additional input of the available variables and their types, for which we do not feed types. While in theory these models could also be applied to other languages, their implementation only supports Java.
C#-specific Baselines. We compare our model to GNN→NAG using the original implementation of Brockschmidt et al. (2019a), which contains additional improvements using ideas from Cvitkovic et al. (2019). Bielik et al. (2016) kindly trained and tested their non-neural PHOG model on our C# dataset. We note that PHOG does not have an explicit copy mechanism and considers only context to the left of the target code, while we also consider context to the right. Extending PHOG to use copying and consider more context could potentially improve its results.
In both Java and C#, we compare to code2seq (Alon et al., 2019a), which is a strong code→NL model, and train it to generate the target code as a sequence of subtokens.
4.3 IMPLEMENTATION AND HYPERPARAMETER SETTINGS
Architecture. We use embeddings of size 512, 2 layers of LSTMs with 256 units, and 4 transformer layers with 8 attention heads. We kept a subtoken vocabulary of size 1,000 to encourage the model to learn to copy; larger vocabularies did not show an improvement. These resulted in a very lightweight model of only 15M trainable parameters, which is close to Transformer-small (11.8M parameters). In comparison, the Transformer-base model had more than 45M trainable parameters.
Training. We train the model end-to-end using the cross-entropy objective and the Adam optimizer (Kingma and Ba, 2014), with an initial learning rate of 10⁻⁴ decayed by a factor of 0.95 every 20k steps. We bucket examples based on the number of predictions in the target subtree (nodes + subtokens + EOS symbols), and vary the batch size such that each batch contains about 512 targets. We train the model to prefer copying entire tokens rather than copying subtokens, if possible. We apply dropout of 0.25 in the Transformer layers, and a recurrent dropout of 0.5 in the LSTMs.
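As a small worked example of this schedule, the learning rate at a given step can be computed as follows (a sketch; the exact step granularity in the authors' code may differ).

```python
def learning_rate(step: int, base: float = 1e-4,
                  decay: float = 0.95, every: int = 20_000) -> float:
    """Initial LR of 1e-4, decayed by a factor of 0.95 every 20k steps."""
    return base * decay ** (step // every)

# e.g., learning_rate(0) == 1e-4; learning_rate(40_000) == 1e-4 * 0.95**2
```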
Inference. We perform beam search with a width of 5, and optimize for accuracy@1.
Ablation               acc@1   acc@5
Paths→Seq              12.95   18.52
Seq→Path               12.12   17.12
Paths→Paths            17.63   24.62
No Root Att            14.43   18.48
No Copy                10.72   15.70
SLM (original model)   18.04   24.83

Table 3: Ablations on AnyC2C in Java.
5 RESULTS
Any-Code Generation: Java. Table 1 shows that our SLM achieves over 1.1% and 0.78% better acc@1 and acc@5 (respectively) than the two strongest baselines. The improvement over Transformer-small, which is closer to our model in the number of parameters, is even higher: over 3.8% and 3.4% in acc@1 and acc@5.
In general, the NMT baselines performed better than code-specific baselines. We hypothesize that the reason is that the NMT baselines are more generic, while the code-specific baselines are designed for different tasks: seq2prod is designed for tasks which involve generating code given natural language input; Iyer et al. (2018) additionally expects all member methods, variables, and their types as input; code2seq is designed to generate sequences rather than code, and does not have a copy mechanism. An approximation of code2seq with a copy mechanism is presented in Section 6.
Interestingly, the syntactically-informed seq2tree baseline achieved the highest tree@k among the baselines, while our model achieved higher acc@k and tree@k. This shows that leveraging the syntax can be beneficial in NMT baselines as well.
Restricted Code Generation (RestrictC2C): C# Table 2 shows the results for the RestrictC2C task in C#, where seq2seq+copy is the BiLSTM→LSTM+copy model which performed the best among the Java baselines. We first observe that the seq2seq+copy and the seq2tree+copy baselines outperform the GNN→NAG of Brockschmidt et al. (2019a), who introduced this task. Although Brockschmidt et al. (2019a) did compare to a seq2seq baseline, their GNN→NAG model could copy symbols from the context, but their baseline did not. To conduct a fair comparison with our SLM model, we equip the seq2seq and seq2tree baselines with a copy mechanism. Even though the seq2seq+copy and the seq2tree+copy baselines perform substantially better than the state of the art in this setting, our SLM model is able to go beyond, achieving significant gains over all models.
Examples for predictions made by our model and baselines can be found in Appendices D and E.
6 ABLATION STUDY
To understand the importance of the various components and design decisions in our model, we conducted an extensive ablation study on the AnyC2C task in Java.
Reviewer questions:
1. What is the main contribution of the paper regarding programming code generation?
2. How does the proposed method generate the AST corresponding to the program?
3. Why does the method represent the program by generating several sequential paths instead of preserving the AST structure?
4. How do the results of the paper compare to previous works in terms of improvement?
5. What is the significance of the ablation studies conducted in the paper?
6. Why did the reviewer find the elimination of methods with over 20 lines of code problematic?
7. How practical and useful is the proposed task for real-world applications in AI4code?

Review
This paper proposes a generative task for programming code where an expression from the program is generated given the rest of the program (minus the expression). This is in line with language modeling for natural language. The proposed method generates the AST corresponding to the program by generating one node at the time for the missing/to-be-generated expression by approximating the probabilities of the generated notes. Again, this is similar to the prediction of words in language models.
The method takes into account the AST corresponding to the program. However, when representing the program, the structure of the AST is not preserved; instead, the AST is represented by several sequential paths obtained by traversing paths between connected nodes in the tree.
It would be nice if the paper provided some intuition for why generating such connecting paths in the tree is relevant for representing the code, especially for nodes that do not have a direct relationship between them (e.g., the nodes are distant enough in the code that the corresponding probabilities of their nodes do not seem/appear related).
The paper presents results for two datasets (comparing with various related-work methods). The results for the Java dataset improve the state of the art by 1-2%, while the results for the restricted C# dataset show a much more significant improvement (on the order of 10-15%, depending on the metric).
I would have liked to see a qualitative analysis of the results. In particular, I would have liked to understand how the predictions differ between the acc and tree metrics. In other words, when the prediction is correct with respect to the tree structure but the overall prediction is not, what goes wrong?
It was not clear to me why, or whether, all the paths between two nodes are necessary when encoding the partial AST and predicting the missing nodes. I was not convinced that the ablation studies were relevant. I would have liked to see ablation studies that considered a subset of the paths in the graph.
The elimination of methods with more than 20 lines of code seems ad hoc to me and biases the evaluation toward relatively short methods (how many methods were eliminated this way?).
One thing that I struggle with is understanding how useful the proposed task is and how it can be generalized/used in practice for some relevant higher-level task in AI4code.
ICLR | Title
Structural Language Models for Any-Code Generation
Abstract
We address the problem of Any-Code-to-Code Generation (AnyC2C) – generating code given its surrounding code without any restriction on the vocabulary or structure. The state-of-the-art in this problem is the sequence-to-sequence (seq2seq) approach, which treats code as a sequence and does not leverage any structural information. We introduce a new approach to AnyC2C that leverages the strict syntax of programming languages to model a code snippet as a tree – structural language modeling (SLM). SLM estimates the probability of the program’s abstract syntax tree (AST) by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. Unlike previous structural techniques that have severely restricted the kinds of expressions that can be generated in this task , our approach can generate arbitrary expressions in any programming language. Our model significantly outperforms both seq2seq and a variety of existing structured approaches in generating Java and C# code. We make our code, datasets, and models available online.
1 INTRODUCTION
Generating source code requires reasoning over an unbounded number of syntactic structures and potential symbols. Previous approaches have avoided this issue by limiting the generation problem: program synthesis approaches (Manna and Waldinger, 1971) are tailored to domain-specific languages (Gulwani, 2011), semantic parsing approaches focus on highly templated datasets (Ling et al., 2016; Shin et al., 2019) or SQL (Yu et al., 2018; Dong and Lapata, 2018), while other recent approaches generate code in general languages like Java and C#, but severely restrict the syntax, vocabulary or nature of the generated expressions (Murali et al., 2017; Brockschmidt et al., 2019a).
We introduce the task of Any-Code-to-Code Generation (AnyC2C) – generating source code in a general-purpose programming language without any restriction on its vocabulary or structure. Specifically, we focus on generating code in context: given a program P and some part of the program p, predict p from the rest of the program P−=P\p. The only restriction we place is that the target pmust have a valid subtree within the program’s abstract syntax tree (AST). AnyC2C thus generalizes the restricted expression generation task of Brockschmidt et al. (2019a), where target code contains only primitive types and excludes user-defined functions. Figure 1 shows two such AnyC2C examples.
While a sequence-to-sequence (seq2seq) model with a copy mechanism works better than existing code generation approaches on AnyC2C (see Section 5), it ignores the structural information available from the code’s AST. We present a new approach that explicitly leverages the strict syntax of programming languages to model code snippets as trees – structural language modeling (SLM). SLM estimates the probability of the program’s AST by decomposing it into a product of conditional probabilities over its nodes. We present a neural model that computes these conditional probabilities by considering all AST paths leading to a target node. While prior work uses AST paths to read programs (Alon et al., 2019a), we generate code by producing the target AST node-by-node.
We evaluate SLM on Java AnyC2C benchmarks, where our model achieves a new state-of-the-art exact-match accuracy of 18.04% (previous SOTA: 16.93%). SLMs also outperform existing models on the restricted expression generation task of Brockschmidt et al. (2019a) in C# by a wide margin, 37.61% compared to 26.42%. Our ablation study reveals the importance of using AST paths for
both reading and generating code. Finally, we discuss the theoretical advantages of SLMs, and show how they generalize many previous structural approaches for code generation.
2 CODE GENERATION AS STRUCTURAL LANGUAGE MODELING
We model the task of Any-Code-to-Code Generation (AnyC2C) by computing the probability of a program Pr (P), similar to how a language model computes the probability of a natural language sentence. While language models typically assume a sequence as their input, our input is an abstract syntax tree AP . We thus introduce a structural language modeling approach (SLM) for code generation.
We first show a chain-rule decomposition of the tree’s probability Pr (AP) into a product of conditional node probabilities, and then describe our path-based model for computing the individual conditional probabilities. We explain how to construct a tree from local node predictions, and finally discuss how our approach differs from previous work on production-based tree generation.
Representing Code as a Tree A program P is a sequence of tokens that is unambiguously equivalent to an abstract syntax tree AP , where each node represents an element in the language (e.g. conditions, loops, variable declarations) from a set T . The AST’s leaves (terminals) have an additional user-defined value v ∈ V . Nonterminal nodes can have a varying number of children nodes. Decomposing the Probability of a Tree Given a tree AP , we first traverse the tree, depth-first,1 to induce an ordering over its nodes a0, . . . , a|AP | ∈ AP . We can now decompose the probability of a tree Pr (AP) using the chain rule, akin to the standard approach in language modeling:
Pr (AP) = ∏ t Pr (at|a<t) (1)
where a<t are all the nodes that were traversed before at.
In AnyC2C, part of the tree (AP− ) is already observed. Therefore, we order the nodes of AP− before the nodes of the target p, and compute only the conditional probabilities over the nodes in p, essentially conditioning on the observed tree AP− . Representing Partial Trees via Paths How can we represent the partial tree composed of a<t when computing Pr (at|a<t)? In regular language modeling, the structure is linear, and a<t is a sequence. One way to represent a partial tree is to linearize it according to the traversal order (Xiao et al., 2016); however, this could create artificially long distances between the current node at and ancestor nodes (e.g., the root a0). Another option is to use only the path from the root node to at (Rabinovich et al., 2017), but this ignores a lot of contextual information (e.g., sibling nodes).
We follow Alon et al. (2018) and use the set of paths from every leaf to the current node to expand, as well as the path Rt originating from the root. We denote the (candidate) node at time t as at, its (given) parent, which is the currently expanded node, by π (at), and the set of all paths as St:
St = {` π (at) |` ∈ leaves (a<t)} ∪ {a0 π (at)} (2) 1Depth-first ordering is common practice in tree generation (Maddison and Tarlow, 2014; Raychev et al.,
2016; Brockschmidt et al., 2019a), but our framework also allows for other orderings, in theory.
Figure 2: the model generates the next node (denoted by a question mark: ?) of path1, path2, and path3 using the root path R. Dashed lines denote AST parent-child relations; solid lines denote AST paths.
where $\ell \rightsquigarrow \pi(a_t)$ is the (only) path in the tree between a leaf $\ell$ and the current node to expand $\pi(a_t)$, and $\mathcal{R}_t = a_0 \rightsquigarrow \pi(a_t)$ is the path from the root of the program to $\pi(a_t)$, which represents the current, relative position of $\pi(a_t)$ in the program (marked as $\mathcal{R}$ in Figure 2). Whereas prior work used whole paths (between two leaf nodes) to encode an AST (Alon et al., 2019b;a), our model observes partial paths (between a leaf and an intermediate node) and learns to extend them.
Figure 2 illustrates the traversal order of a subtree that represents the expression x > 1, as well as some of the paths used to compute the probability at each step. At each step, the probability of the next node is computed given the paths St from the root and every given leaf up to the current node to expand. Figure 2(d) shows how after the terminal node Name and its value x are given, path3 originating from this leaf is also used to compute the probability of the next nodes.
Our path-based approach generalizes previous approaches, such as parent feeding (Yin and Neubig, 2017), previous action encoding (Yin and Neubig, 2017), context nodes (Bielik et al., 2016), and some of the graph-edges of Brockschmidt et al. (2019a). See Section 8 for further discussion.
Generating Trees In sequence generation, the length of the generated sequence is controlled by generating a single EOS token to stop. When generating trees, we require a more sophisticated mechanism to control arity and depth. We augment AP in two ways to allow node-by-node generation. First, we add a special EOSnode node to every nonterminal to control for arity. Generating this node indicates that the parent node has no more children. Second, we decompose each terminal node nv into a sequence of terminal nodes Tv by splitting up the node’s value v into subtokens based on camel notation (Allamanis et al., 2015). For example, if v = toLowerCase, then Tv = to → lower → case → EOStok. We end each subtoken sequence with a special EOStok node to control for depth during generation. Figure 3 shows an example of both EOSnode and EOStok in action.
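As a concrete illustration of the subtoken decomposition just described, here is a minimal Python sketch that splits an identifier by camel notation and appends an end-of-token sentinel; the regular expression and sentinel spelling are our assumptions, not the authors' implementation.

import re

EOS_TOK = "EOStok"  # sentinel name follows the text; exact spelling is assumed

def split_subtokens(value):
    """Split a terminal value by camel notation, e.g.
    'toLowerCase' -> ['to', 'lower', 'case', 'EOStok']."""
    parts = re.findall(r"[A-Z]?[a-z0-9]+|[A-Z]+(?![a-z])", value)
    return [p.lower() for p in parts] + [EOS_TOK]

assert split_subtokens("toLowerCase") == ["to", "lower", "case", "EOStok"]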
Node Trees vs. Production Trees While we predict a single node at each step, previous work (Iyer et al., 2018; Brockschmidt et al., 2019a) predicts a grammar production rule. This more direct grammatical representation decomposes the code in a way that often forces the model to predict with partial information. For instance, consider the expression str.Substring(3). The
model of Brockschmidt et al. (2019a) would first predict the rule Expr→Expr.Substring(Expr), and only then expand Expr→str and Expr→3; i.e., the model needs to predict the method name (Substring) before the invoking object (str). Further, the Substring method can get either one or two arguments, forcing the model to choose whether to use the one- or two-argument production rule in advance. Node generation, however, allows us to predict the presence of a function call and only then to predict its object, method name, and arguments, rather than predicting these a priori.
We note that there exist other approaches that generate an arbitrary number of child nodes with production rule-based models. For example, Rabinovich et al. (2017) used a “horizontal LSTM” to decide whether or not to generate another child; and Yin and Neubig (2018) presented a transition system with a “Reduce” action.
3 MODEL ARCHITECTURE
In the previous section, we described how we can generate code given the probabilities Pr (at|a<t), where a<t is represented by the set of partial AST paths St. Here, we present a neural model that estimates Pr (at|St). We first apply an LSTM-based path encoder to represent each path in St as a vector (Section 3.1). We then contextualize and aggregate the entire set into a single vector (Section 3.2). Finally, we predict the target node at by combining a limited output vocabulary with a syntactic copy mechanism (Section 3.3).
3.1 ENCODING AST PATHS
Given a partial AST path (node sequence n1, . . . , nk), our goal is to create a vector representation.
We first represent each node $n_i$ using embeddings. A subtoken node is represented by the index of its subtoken $w$ in the embedding matrix $E^{subtoken}$; AST nodes are represented as a pair $n_i = (\tau, \kappa)$, where $\tau$ is the node type, e.g., IfStatement, and $\kappa$ is the node index among its sibling nodes. We represent node types using a learned embedding matrix $E^{type}$ and the child indices using a learned matrix $E^{index}$. The node's vector representation is the concatenation of the type and index vectors.
$e(n_i) = \begin{cases} E^{subtoken}_{w} & n_i \text{ is the subtoken } w \\ \left[E^{type}_{\tau}; E^{index}_{\kappa}\right] & n_i \text{ is the AST node } (\tau, \kappa) \end{cases}$  (3)
We encode the entire path using a uni-directional LSTM stack, and take the final states:2
$f(n_1, \ldots, n_k) = \operatorname{LSTM}(e(n_1), \ldots, e(n_k))$

Given a set of partial paths $S$ (omitting the iterator $t$ for simplicity), we denote their encodings as $H = \{f(n_1, \ldots, n_k) \mid (n_1, \ldots, n_k) \in S\}$.
Efficient Computation When processing an entire tree, there are large overlaps between paths from different time steps. In particular, paths that originate from the same leaf share the same prefix. We therefore apply the LSTM on the prefix once, and cache the state across suffixes, speeding up both training and inference significantly. An example is shown in Figure 6 in the Appendix.
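To make the caching idea concrete, the following PyTorch sketch encodes a path with an LSTM and reuses the hidden state of the longest cached prefix at inference time; the dimensions, cache keying, and class name are assumptions of this sketch rather than the authors' implementation.

import torch.nn as nn

class PathEncoder(nn.Module):
    """Encodes a partial AST path with an LSTM, caching hidden states so
    that paths sharing a prefix (same originating leaf) are only re-encoded
    from the point where they diverge (inference-time sketch)."""
    def __init__(self, embed_dim=512, hidden_dim=256, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.cache = {}  # prefix key (tuple of node ids) -> (h, c) LSTM state

    def forward(self, node_ids, node_embeds):
        # node_ids: tuple of hashable node identifiers along the path
        # node_embeds: (1, k, embed_dim) embeddings of the same path
        state, start = None, 0
        for cut in range(len(node_ids) - 1, 0, -1):  # longest cached prefix
            if node_ids[:cut] in self.cache:
                state, start = self.cache[node_ids[:cut]], cut
                break
        out, state = self.lstm(node_embeds[:, start:], state)
        self.cache[node_ids] = state  # reusable by future suffixes
        return out[:, -1]  # final state is the path representation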
3.2 AGGREGATING MULTIPLE PATHS
Given the set of paths S leading up to the target node's parent π(a), our goal is to represent S as a vector, in the context of predicting a. To do so, we introduce the aggregation function g(H, r, i). As its input, g takes the set of encoded paths H, the encoded root path r, and the child index i of the currently predicted child node a relative to its parent.
We first contextualize the path encodings H using a transformer encoder (Vaswani et al., 2017).3 In parallel, we apply a non-linear transformation to the encoding of the root path r = f (R), in order to inform it that we wish to predict the i-th child of π(at):
$Z = \operatorname{Transformer}(H) \qquad \tilde{r} = W_a \cdot \operatorname{ReLU}(C_i \cdot r)$  (4)

²Replacing the LSTMs with transformers yielded similar results in preliminary experiments.
³Since H is a set, we do not use positional embeddings.
In this formulation, the parameter matrix Ci is used when the child index is i, while the parameter matrix Wa is used for every instance.
We then compute attention over the set of contextualized path encodings Z, with the index-informed root-path encoding r̃ as the query; we pass the weighted average z̃ and the root-path encoding r̃ through another fully-connected layer, and denote the resulting vector representation as h̃:
$\alpha = \operatorname{softmax}(Z \cdot \tilde{r}) \qquad \tilde{z} = \sum_j \alpha_j \cdot Z_j \qquad \tilde{h} = g(H, r, i) = \operatorname{ReLU}(W_g[\tilde{z}; \tilde{r}])$  (5)
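The aggregation of Equations (4)-(5) can be sketched in a few lines of PyTorch; the transformer contextualization producing Z is assumed to happen upstream, and the dimensions, initialization, and class name below are assumptions, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PathAggregator(nn.Module):
    """Attends over contextualized path encodings Z with the index-informed
    root-path encoding as the query (Eqs. 4-5)."""
    def __init__(self, dim=256, max_children=16):
        super().__init__()
        self.C = nn.Parameter(torch.randn(max_children, dim, dim) * 0.01)
        self.W_a = nn.Linear(dim, dim, bias=False)
        self.W_g = nn.Linear(2 * dim, dim, bias=False)

    def forward(self, Z, r, child_index):
        # Z: (num_paths, dim) contextualized paths; r: (dim,) root-path encoding
        r_tilde = self.W_a(F.relu(self.C[child_index] @ r))      # Eq. 4
        alpha = F.softmax(Z @ r_tilde, dim=0)                    # attention
        z_tilde = (alpha.unsqueeze(1) * Z).sum(dim=0)            # weighted avg
        return F.relu(self.W_g(torch.cat([z_tilde, r_tilde])))   # Eq. 5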
3.3 PREDICTING WITH A SYNTACTIC COPY MECHANISM
We can now predict a from the representation h̃. If the target node’s parent π(a) is a nonterminal AST node, then a must be an AST node; otherwise, a is a subtoken.
Predicting AST Nodes We predict a using a softmax over the node type embeddings $E^{type}$:

$\Pr(a \mid S) = \operatorname{softmax}(E^{type} \cdot \tilde{h})$  ($\pi(a)$ is a nonterminal)  (6)
Predicting Subtokens Programs repeatedly refer to previously declared symbols, resulting in highly repetitive usage of identifiers. We therefore add a copy mechanism (Gu et al., 2016) to allow our model to predict either entire tokens or subtokens that already exist in the context. As we show in Section 6, copying greatly improves our model's performance. For brevity, we describe how entire tokens are copied, and elaborate on the copying of subtokens in Appendix C. We score each leaf $\ell$ using a bilinear function ($W_c$) between its path's encoding $H_\ell$ and $\tilde{h}$. At the same time, we score the token $w$, which is the token associated with $\ell$, from a limited vocabulary using the inner product between its representation in the subtoken embedding matrix $E^{subtoken}$ and $\tilde{h}$.
$s_{copy}(\ell) = H_\ell \cdot W_c \cdot \tilde{h} \qquad s_{gen}(w) = E^{subtoken}_w \cdot \tilde{h}$  (7)
The scores scopy and sgen are then summed over different occurrences that correspond to the same symbol, and subsequently normalized via softmax. A key difference from previous work (Gu et al., 2016; Yin and Neubig, 2017) is that our copy mechanism uses the syntactic relation between the source and the target (AST path), rather than their sequential relation. Yin et al. (2019) proposed a graph-based copying mechanism that is capable of copying both tokens and subtrees from the context.
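A simplified sketch of how copy and generation scores can be merged before the softmax follows; summing over occurrences of the same token mirrors the description above, while the variable names and data layout are assumptions of this sketch.

import torch
import torch.nn.functional as F

def predict_token(h, H_leaves, leaf_tokens, E_subtoken, W_c, vocab):
    """Merges copy scores over context leaves with generation scores over a
    limited vocabulary (Eq. 7), summing scores of occurrences of the same
    token before normalizing."""
    s_copy = H_leaves @ W_c @ h                  # (num_leaves,)
    s_gen = E_subtoken @ h                       # (vocab_size,)
    scores = {w: s_gen[i].item() for w, i in vocab.items()}
    for score, tok in zip(s_copy.tolist(), leaf_tokens):
        scores[tok] = scores.get(tok, 0.0) + score  # merge same-symbol scores
    toks = list(scores)
    probs = F.softmax(torch.tensor([scores[t] for t in toks]), dim=0)
    return dict(zip(toks, probs.tolist()))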
4 EXPERIMENTAL SETUP
4.1 BENCHMARKS
Any-Code Generation (AnyC2C): Java We take the Java-small dataset of Alon et al. (2019a), which contains 11 GitHub projects, broken down into a single method per example, and split into train/dev/test by project to reduce code overlap. This dataset was found to contain the least code duplication by Allamanis (2018b). We create AnyC2C examples by selecting every expression larger than a single AST node as the target, using the remainder of the method as the context. We remove methods that contain the word “test” in their body or file name, and remove methods longer than 20 lines to avoid auto-generated code. To make the task even harder, we remove examples where the target subtree appears as-is in the context. This dataset contains 1.3M/10k/20k train/dev/test examples. The average number of targets for our model is 10.8; for the seq2seq baselines the average is 7.8 targets; if we modeled our targets using production rules, the average would have been 7.9.
Restricted Code Generation (RestrictC2C): C# Since the restricted expression generation (RestrictC2C) dataset of Brockschmidt et al. (2019a) is not publicly available, we consulted with Brockschmidt et al. directly and use the dataset of Allamanis et al. (2018a). This dataset contains 30 GitHub projects broken down into one method per example; we use the “unseen projects test” split. To create RestrictC2C examples, we use the code of Brockschmidt et al. (2019a), which filters out examples where the targets contain non-primitive types or user-defined functions, and we extract the exact same types of limited expressions. This dataset contains 16k/8k/3k train/dev/test examples.
Detailed statistics of all datasets are provided in Appendix A.
Metrics We follow Brockschmidt et al. (2019a) and report exact match accuracy at 1 and 5. We also introduce a new tree@k metric, which counts a prediction as correct if the entire tree structure, ignoring leaf values, is identical to the tree of the ground truth. For example, the expressions x > 1 and y > 2 would not count as identical in exact match, but would count as “tree-match identical” because both express that an identifier is greater than an integer (NAME > INT). tree@k is interesting because it allows us to tease apart the model’s syntactic errors from incorrect subtoken predictions.
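The tree@k comparison can be implemented by erasing leaf values before comparing trees; the following sketch assumes a simple recursive tree interface, which is our assumption rather than the evaluation code used in the paper.

def strip_leaves(tree):
    """Replace each leaf value with its syntactic category so that, e.g.,
    `x > 1` and `y > 2` both reduce to NAME > INT."""
    if tree.is_leaf():
        return tree.node_type  # e.g., NAME, INT
    return (tree.node_type, tuple(strip_leaves(c) for c in tree.children))

def tree_match(pred, gold):
    return strip_leaves(pred) == strip_leaves(gold)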
4.2 BASELINES
We compare our model to a variety of original implementations and adaptations of existing models. We put significant effort into performing a fair comparison, including adding a copy mechanism to the NMT baselines and subtokenizing as in our model. We adapt strong baselines from the literature to our task, even if they were designed for different tasks such as NL→code and code→NL. We re-train all the following baselines on the same datasets as our model.
Neural Machine Translation We use standard autoregressive sequence-to-sequence NMT baselines, in which we subtokenize the given code snippet, replace the target in the source with a special PRED symbol, and train the network to predict the target as a sequence of subtokens. Transformer-base+copy (Vaswani et al., 2017) uses the implementation of OpenNMT (Klein et al., 2017) with a copy mechanism (Gu et al., 2016). Transformer-small+copy uses dmodel = 256, dff = 1024, and 4 self-attention heads per layer. BiLSTM→LSTM+copy is a d = 512 2-layer bidirectional LSTM encoder-decoder with attention (Luong et al., 2015). seq2tree+copy follows Aharoni and Goldberg (2017) and learns to generate the linearized, subtokenized target AST, with the same architecture as BiLSTM→LSTM+copy.

Java-specific Baselines We used the original implementation of Iyer et al. (2018), and also their seq2prod baseline, which is a re-implementation of Yin and Neubig (2017); these are designed for NL→code tasks, in which we feed the (sub)tokenized code context as the NL input. The model of Iyer et al. (2018) is designed to receive additional input of the available variables and their types, for which we do not feed types. While in theory these models could also be applied to other languages, their implementation only supports Java.
C#-specific Baselines We compare our model to GNN→NAG using the original implementation of Brockschmidt et al. (2019a) which contains additional improvements using ideas from Cvitkovic et al. (2019). Bielik et al. (2016) kindly trained and tested their non-neural PHOG model on our C# dataset. We note that PHOG does not have an explicit copy mechanism, and considers only context to the left of the target code, while we consider also context to the right. Extending PHOG to use copying and considering more context could potentially improve its results.
In both Java and C#, we compare to code2seq (Alon et al., 2019a), which is a strong code→NL model and train it to generate the target code as a sequence of subtokens.
4.3 IMPLEMENTATION AND HYPERPARAMETER SETTINGS
Architecture We use embeddings of size 512, 2 layers of LSTMs with 256 units, and 4 transformer layers with 8 attention heads. We kept a subtoken vocabulary of size 1,000 to encourage the model to learn to copy; larger vocabularies did not show an improvement. These resulted in a very lightweight model of only 15M trainable parameters, which is close to Transformer-small (11.8M parameters). In comparison, the Transformer-base model had more than 45M trainable parameters.
Training We train the model end-to-end on the cross-entropy objective using the Adam optimizer (Kingma and Ba, 2014), with an initial learning rate of $10^{-4}$ decayed by a factor of 0.95 every 20k steps. We bucket examples based on the number of predictions in the target subtree (nodes + subtokens + EOS symbols), and vary the batch size such that each batch contains about 512 targets. We train the model to prefer copying entire tokens rather than copying subtokens, if possible. We apply dropout of 0.25 in the Transformer layers, and a recurrent dropout of 0.5 in the LSTMs.
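The optimization schedule described above corresponds to a few lines of PyTorch; `model` below is a stand-in module, and mapping the decay to a StepLR scheduler stepped once per training step is our assumption.

import torch

model = torch.nn.Linear(512, 512)  # stand-in for the SLM
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# decay the learning rate by a factor of 0.95 every 20k optimizer steps
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20_000, gamma=0.95)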
Inference We perform beam search with width of 5, and optimize for accuracy@1.
Ablation                acc@1   acc@5
Paths→Seq               12.95   18.52
Seq→Path                12.12   17.12
Paths→Paths             17.63   24.62
No Root Att             14.43   18.48
No Copy                 10.72   15.70
SLM (original model)    18.04   24.83

Table 3: Ablations on AnyC2C in Java.
5 RESULTS
Any-Code Generation: Java Table 1 shows that our SLM achieves over 1.1% and 0.78% better acc@1 and acc@5 (respectively) over the two strongest baselines. The improvement over Transformer-small, which is closer to our model in the number of parameters, is even higher: over 3.8% and 3.4% in acc@1 and acc@5.
In general, the NMT baselines performed better than code-specific baselines. We hypothesize that the reason is that the NMT baselines are more generic, while the code-specific baselines are designed for different tasks: seq2prod is designed for tasks which involve generating code given natural language input; Iyer et al. (2018) additionally expects all member methods, variables, and their types as input; code2seq is designed to generate sequences rather than code, and does not have a copy mechanism. An approximation of code2seq with a copy mechanism is presented in Section 6.
Interestingly, the syntactically-informed seq2tree baseline achieved the highest tree@k among the baselines, while our model achieved higher acc@k and tree@k. This shows that leveraging the syntax can be beneficial in NMT baselines as well.
Restricted Code Generation (RestrictC2C): C# Table 2 shows the results for the RestrictC2C task in C#, where seq2seq+copy is the BiLSTM→LSTM+copy model which performed the best among the Java baselines. We first observe that the seq2seq+copy and the seq2tree+copy baselines outperform the GNN→NAG of Brockschmidt et al. (2019a), who introduced this task. Although Brockschmidt et al. (2019a) did compare to a seq2seq baseline, their GNN→NAG model could copy symbols from the context, but their baseline did not. To conduct a fair comparison with our SLM model, we equip the seq2seq and seq2tree baselines with a copy mechanism. Even though the seq2seq+copy and the seq2tree+copy baselines perform substantially better than the state of the art in this setting, our SLM model is able to go beyond, achieving significant gains over all models.
Examples for predictions made by our model and baselines can be found in Appendices D and E.
6 ABLATION STUDY
To understand the importance of the various components and design decisions in our model, we conducted an extensive ablation study on the AnyC2C task in Java.
Paths→Seq follows code2seq (Alon et al., 2019a) and separates the model into an encoder and a decoder, where the decoder generates the target subtree as a sequence of subtokens. The main difference from code2seq is that Paths→Seq includes a copy mechanism, as in our SLM model.

Seq→Path follows Rabinovich et al. (2017) and separates the model into an encoder and a decoder (including a copy mechanism), where the encoder encodes the context as a sequence of subtokens using a BiLSTM, and the decoder uses only the root path and the index of the generated child.
Paths→Paths uses separate encoder and decoder which both are AST-path based. These encoder and decoder have different parameters, unlike our SLM model which models the context and the prediction using the same components.
No Root Attention uses max pooling instead of attention in aggregating multiple paths (see Section 3.2). The index-informed path from the root to the target’s parent (R in Figure 2) is concatenated with the result, instead of being used as attention query.
No Copy replaces copy mechanism with a much larger vocabulary (25k subtokens instead of 1k).
Table 3 shows the results of these alternatives. The significantly lower results of Paths→Seq and Seq→Path show the great benefit of using a unified structural language model, instead of separate encoder and decoder components. While this separation might be necessary in semantic parsing (Rabinovich et al., 2017; Dong and Lapata, 2018), NL→code (Yin and Neubig, 2017), and code→NL (Alon et al., 2019a; Fernandes et al., 2019) tasks because of the different modalities of the input and the output, it may hurt performance when the output is essentially a missing part of the input's AST. As expected, Paths→Seq performs better than code2seq (Table 1), as it includes a copy mechanism and code2seq does not.
As SLM performs better than Paths→Paths, this ablation shows the importance of joint modeling of the context and the target subtree by parameter tying. Each of Paths→Paths and the seq2seq baselines (Table 1) performs better than Paths→Seq and Seq→Path; this shows the importance of using the same type of encoder and decoder for the AnyC2C task, rather than combining “an optimal encoder” with “an optimal decoder”. Paths→Paths performs better than the seq2seq baselines (Table 1), showing the advantage of using paths over textual sequences, even without parameter tying.
No Root Attention degrades acc@1 and acc@5 by 3.6% to 6.3%. This shows that dynamically attending to the context paths given the current root path is crucial, even though the root path is necessarily included as a sub-path of other paths in the set St which go through self-attention. Not using a copying mechanism results in a degradation of 7.3% to 9.1%. Programs use symbols and identifiers repetitively, thus the ability to copy symbols from the context is crucial for this task. For this reason, we included a copying mechanism in all NMT baselines in Section 4.
protected void checkRpcAdminAccess() throws IOException, AccessControlException {
  UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
  UserGroupInformation zkfcUgi = UserGroupInformation.getLoginUser();
  if (adminAcl.isUserAllowed(ugi) ||
      ugi.getShortUserName().equals(zkfcUgi.getShortUserName())) {
    LOG.info("Allowed RPC access from " + ugi + " at " + Server.getRemoteAddress());
    return;
  }
  String msg = "Disallowed RPC access from " + ugi + " at " + Server.getRemoteAddress()
      + ". Not listed in " + DFSConfigKeys.DFS_ADMIN;
  LOG.warn(msg);
  throw new AccessControlException(msg);
}
True ref: zkfcUgi.getShortUserName()

SLM top-5 candidates:
1. zkfcUgi.getShortUserName() (11.7%) (exact match)
2. DFSConfigKeys.DFS (4.5%)
3. zkfcUgi.getUserName() (2.6%) (tree-match)
4. zkfcUgi.getUser() (1.7%) (tree-match)
5. zkfcUgi.getUserId() (0.6%) (tree-match)
Entirely copied tokens are marked in brown; unknown copied subtokens are marked in blue; in-vocabulary subtokens are marked in black; subtokens that are both in-vocabulary and copied from context are marked in purple.
Figure 5: A Java AnyC2C example from our test set along with the predictions of our model. The predictions of the baselines are shown in Figure 7 in Appendix D.
7 QUALITATIVE ANALYSIS
7.1 CORRECT TREE, INCORRECT IDENTIFIER ASSIGNMENT
As shown in Section 5, there is a gap between acc@k and tree@k across all models: when ignoring identifier values and considering only the tree structure, the accuracy is significantly higher. Our SLM model performs better than all baselines in acc@k (Table 1); further, our model also shows a greater potential for improvement in its tree@k results which are much higher than the baselines.
We focus on studying the cases where the tree structure was predicted correctly, but the model failed to generate the exact code, including identifier names. Figure 4 shows a representative example of this case: the ground truth event.getTaskId() was predicted correctly only as the fifth candidate; nevertheless, all top-5 candidates are a “tree-match”, since all of them express a method that is invoked on an object without arguments, of the form NAME.NAME(). Generating the correct method name [get,task,id] is very difficult in this case, since neither getTaskId nor TaskId appears in the context and there is no apparent hint for them.
7.2 USEFULNESS OF COPY MECHANISM
As shown in Section 6, the ability to copy is crucial for the AnyC2C task, because of the repetitive use of identifiers and symbols in programs. Figure 5 shows a representative example for the necessity of the copy mechanism: generating the ground truth zkfcUgi.getShortUserName() is feasible only thanks to the copy mechanism, since zkfc is obviously an UNK subtoken which was not observed in the training data.
In this case, since both zkfcUgi and getShortUserName appear in the context, both were copied as entire tokens, rather than generated from subtokens. This example also shows how the ability to copy entire tokens eases the generation process by reducing the number of target symbols (our SLM model is able to copy and combine single subtokens as well).
8 RELATED WORK
Generalizing Previous Approaches Our approach frames code generation as predicting the next node in all partial AST paths. This simple framing generalizes most previous work, without handcrafted special edges and actions:
• All models that use information about ancestor nodes only (Rabinovich et al., 2017; Maddison and Tarlow, 2014), as well as the “Parent Feeding” of Yin and Neubig (2017), are generalized by our model, since all paths that go into a node $a_t$ pass through its parent, and the path from the root $\mathcal{R}_t$ (Figure 2) is used as the attention query.
• The “previous action encoding” of Yin and Neubig (2017) is also a special case of our approach, because $S_t$ contains the paths starting from the previously expanded leaves of $\mathcal{A}_P$ into the currently expanded node $\pi(a_t)$, such as path3 in Figure 2(e).
• The “context node” of PHOG (Bielik et al., 2016) is just one of the previously traversed leaf nodes in $a_{<t}$. Thus, our model not only conditions on this context node but also takes into account the syntactic relation, i.e., the path, between the context and $\pi(a_t)$. Moreover, while PHOG conditions on a single leaf, SLMs condition on every leaf in $a_{<t}$.
• Finally, Brockschmidt et al. (2019a) define special graph edges (e.g., “NextSib” and “Child”) to capture relations on the AST. Most of these relations can be expressed as partial AST paths.
Program Generation Learning to generate programs is one of the oldest problems in machine learning (Waldinger and Lee, 1969) and has been considered by some as the “holy grail of computer science” (Pnueli and Rosner, 1989; Gulwani et al., 2017). Typically, the task is to generate a program given some form of input or context, such as complete formal specifications (Green, 1981) or input-output examples (Gulwani, 2011; Devlin et al., 2017; Parisotto et al., 2017). While these approaches work well in some cases, they are bound to DSLs that prevent them from being applied to realistic, general-purpose code. Maddison and Tarlow (2014) and Amodio et al. (2017) generate supposedly general-purpose code in a modern programming language, but do not deal with the challenge of fitting the code to a given context. Murali et al. (2017) generate code conditioned on a set of APIs; they state that their approach is thus intrinsically limited to generating API-heavy programs and is unusable for general, logical programs lacking external calls. Further, their generated programs are in an only “Java-like” language. Yin and Neubig (2017), Iyer et al. (2018), and Rabinovich et al. (2017) generated general-purpose code as well, but for the different task of generating code given a natural language description. Yin et al. (2019) generated “any code” given an encoded edit that needs to be applied to a given code snippet.
Other work used datasets that are either small (Ling et al., 2016), containing highly aligned examples (Oda et al., 2015; Chen et al., 2018), limited-purpose languages like SQL (Yu et al., 2018), or general-purpose but containing eminently templated programs (Ling et al., 2016). Brockschmidt et al. (2019a) limit their model to generate only expressions of primitive types or arrays of these; use a closed vocabulary; and ignore expressions containing user-defined functions, because function names are hardcoded in their syntax production rules. In this paper, we lift these constraints and allow any, general-purpose, generation of code, of all types and containing any names. Our work is also related to Habash (2004), who used structural n-grams over dependency trees for statistical machine translation (SMT).
9 CONCLUSION
We presented a novel approach for generating code given surrounding context: computing the probability of an AST using a structural language model. We showed that our approach generalizes most previous work in this area, while reaching state-of-the-art performance on challenging benchmarks. We are eager to see future work advance SLMs further, and apply them to other real-life coding applications as well as other structured-data domains.
A DATA STATISTICS
Table 4 shows some statistics of the datasets we used. In Java: for the validation set, we randomly sampled 10,000 examples from the raw dev set; for the test set, we randomly sampled 20,000 examples from the raw test set.
B CODE GENERATION PSEUDOCODE
Algorithm 1 shows the pseudocode for our depth-first, left-to-right, code generation approach. We keep a stack (line 1) which is initialized with an initial node to expand. We loop while the stack is not empty (line 3), and pop (line 4) the next node to expand at each step. If the node to expand is a nonterminal, we predict a child node (line 7). If the node is a terminal to a subtoken, we predict a subtoken from a vocabulary or copy (line 7). If the predicted child node, whether AST node or subtoken, can be further expanded (line 10), it is pushed back to the stack (line 12).
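Since Algorithm 1 is not reproduced here, the following Python sketch restates the loop described above; the node interface and the two prediction callbacks are stand-ins for the model and are assumptions of this sketch.

def generate(root, predict_child, predict_subtoken, EOS_NODE, EOS_TOK):
    """Depth-first, left-to-right, node-by-node generation."""
    stack = [root]                        # initialized with a node to expand
    while stack:                          # loop while the stack is not empty
        node = stack.pop()                # next node to expand
        if node.is_nonterminal():
            child = predict_child(node)       # predict an AST child node
        else:
            child = predict_subtoken(node)    # from the vocabulary, or copied
        if child not in (EOS_NODE, EOS_TOK):  # child can be further expanded
            node.children.append(child)
            stack.append(node)   # the parent may still receive more children
            stack.append(child)  # expand the new child next (depth-first)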
C COPYING SINGLE SUBTOKENS
In addition to scoring the entire token to be copied, we also score each of the subtokens composing it according to their position. For each position $i$, we add a scoring function $s_{copy_i}$, such that $s_{copy_i}(\ell)$ produces the copying score of the $i$-th subtoken of $\ell$, which we denote as $\ell_i$:
$s_w = s_{gen}(w) + \sum_{val(\ell)=w} s_{copy\_token}(\ell) + \sum_i \sum_{val(\ell_i)=w} s_{copy_i}(\ell)$  (8)

$\Pr(a \mid S) = \operatorname{softmax}(s)$  (9)
where $s_{copy\_token}$ is the scoring function for copying the entire token, described in Section 3.3.
For example, a token getX is scored entirely using $s_{copy\_token}$; each of its subtokens, get and X, is scored using $s_{copy_1}$ and $s_{copy_2}$, respectively. That is, the model can either copy the entire token or copy only some of its subtokens. This ability is especially useful in generating a name like setX where getX appears in the context and X is an unknown, user-defined subtoken; the model learns to generate set from the vocabulary and to copy only the subtoken X.
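Putting Equation (8) together, a token's total score sums its generation score with whole-token and positional-subtoken copy scores; the data layout in this sketch is an assumption.

def token_score(w, s_gen, leaves):
    """Eq. (8): sum the generation score of w with copy scores of every
    context leaf whose full value, or i-th subtoken, equals w.
    `leaves` holds (value, subtokens, scores) triples (assumed layout)."""
    total = s_gen(w)
    for value, subtokens, scores in leaves:
        if value == w:
            total += scores["token"]      # s_copy_token: whole-token copy
        for i, sub in enumerate(subtokens):
            if sub == w:
                total += scores[i]        # s_copy_i: positional subtoken copy
    return total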
D JAVA EXAMPLES
Figures 5-12 contain examples from our test set for the AnyC2C task in Java, along with the prediction of our model and some of the baselines. The highlighted expressions are the true references that should be generated. Indentation and line breaks may have been altered for typesetting reasons.
E C# EXAMPLES
Figures 13-20 contain examples from our test set for the RestrictC2C task in C#, along with the predictions of our model and some of the baselines. The highlighted expressions are the true references that should be generated. Indentation and line breaks may have been altered for typesetting reasons.

1. What is the main contribution of the paper regarding grammar-based generation?
2. What are the strengths and weaknesses of the proposed path-based context encoding model?
3. How does the node-based tree generation model compare to production-based approaches?
4. What is the significance of the syntactic terminal token copy mechanism?
5. How do the ablation studies suggested in the review help to better understand the technical contributions of the paper?
6. What evidence could be presented to support the claim of the novelty of the AnyGen benchmark?

Review
This paper presents a grammar-based generation approach for "slot-filling" style code generation tasks. Given a context AST with opening non-terminal node, the model completes the opening node by predicting a sequence of child nodes, which forms a sub-AST rooted at the original opening node. The proposed model encodes context ASTs using a path-based approach (Alon et al., 2019a), essentially generalizing the previous model of Alon et al., 2019a from a code-to-sequence setting (e.g., generating natural language comments from code) to a "code-to-code" setting (i.e., code completion given contextual snippets).
Strong Points:
* The paper is very well written. The idea of formalizing code completion as structured language modeling and extending Alon et al., 2019a for the task is natural and well executed, with strong models and significantly improved results on two code completion benchmarks for both Java and C#.
* The authors attempted to establish comparisons with most existing code generation models.
Detailed Review:
*Technical Contribution* I have very mixed feelings about this paper: while the model registers high empirical performance, the technical contribution is a bit limited, as detailed below:
- *Path-based Context Encoding* The most important contribution in this submission is the application of path-based AST encoding model of Alon et al., 2019a to encode context (the given contextual and partially generated ASTs) for code generation. While the path-based encoding scheme is indeed a powerful model that intuitively encapsulates and generalizes over most previous approaches (Section 7), applying the model to a different task without significant task-specific adaptation or in-depth analysis might not sound technically novel. Meanwhile, the core idea of modeling/encoding the information flow in both the given context AST and partially generated programs for opening node expansion has already been explored in Brockschmidt et al. (2019a), albeit using a different encoding approach (GNNs) and in a relatively restricted setting of generating arithmetic expressions.
- *Node-based Tree Generation Model* Apart from the path-based context encoding model, the node-based generation model presented in Section 2 also seems interesting. However, it might take more time-steps to generate the node sequence instead of the sequence of production rules (each composed of multiple child nodes), which could make optimization and inference more difficult. On the other hand, to control arity, the node-based approach needs to inject synthetic "EOS" nodes to signal the end of generating an argument sequence, while existing production-rule-based systems can easily generate an arbitrary number of argument nodes using either a transition system (e.g., Yin and Neubig, 2018) or a special neural component to compute the end-of-argument-list probability (e.g., Rabinovich et al. (2017)), without using separate production rules of different arity.
- *Syntactic Copy Mechanism* While the proposed syntactic terminal token copy mechanism (Section 3.3) could be better than the vanilla sequential one, there have already been syntactic copying models capable of copying both terminal tokens and partial ASTs from the context (Yin et al., 2019).
How to Improve: to better understand the different technical contributions outlined above and their relative impacts, the following ablation studies would be helpful:
- Importance of Path-based Context Encoding: the Seq→Path ablation in Table 3 alone might not be adequate to demonstrate the importance of path-based encoding of AST contexts for code generation tasks. The authors should compare with the GNN-based context encoding approach in Brockschmidt et al. (2019a) as this is the most relevant work. The original GNN→NAG model cited in Table 2 used a much simpler copying mechanism and a vanilla production-based tree generation model, and therefore not directly comparable with a tuned SLM.
- Importance of Node-based Tree Generation Model: If possible, the authors might consider swapping their node-based tree generation model with a state-of-the-art production-based approach (e.g., Yin et al., 2019) to demonstrate its effectiveness.
*Claims* The authors claimed in the beginning of the paper that previous program synthesis approaches are either restricted in domain (e.g., DSLs like SQL) or grammar (e.g., a restricted grammar of the full language), therefore coining the proposed approach "any-code generation". However, there are indeed code generation systems (some of them cited in this paper) that synthesize open-domain code snippets in general-purpose programming languages without restrictions on vocabulary or grammar. To give a few examples, Iyer et al. (2018) generate open-domain Java class member functions; Rabinovich et al. (2017) and Zhao et al. (2019) predict full Python classes or partial snippets, while Yin et al. (2019) synthesize open-domain short C# code diffs observed in GitHub commits. In fact, the proposed "any-code generation" benchmark is limited to sub-expressions defined within a function, whose scope is more restricted than other benchmarks like CONCODE.
How to improve: the authors might present more evidence to substantiate their claim on the novelty of the AnyGen benchmark compared with existing open-domain, general-purpose code synthesis benchmarks, or consider revising the claim and the title.
References:
* Zhao et al., Neural Networks for Modeling Source Code Edits. 2019
* Yin et al., Learning to Represent Edits. 2019
ICLR | Title
RainNet: A Large-Scale Imagery Dataset for Spatial Precipitation Downscaling
Abstract
Contemporary deep learning frameworks have been applied to solve meteorological problems (e.g., front detection, synthetic radar generation, precipitation nowcasting, etc.) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problems. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advanced deep-learning models for precipitation downscaling. To alleviate these obstacles, we present the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps spanning over 17 years, ready to help the evolution of deep models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (e.g., hurricanes, squalls, etc.), which is of great help in improving model generalization. In addition, the map pairs in RainNet are organized as image sequences (720 maps per month, or 1 map per hour), showing complex physical properties, e.g., temporal misalignment, temporal sparsity, and fluid properties. Two machine-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of trained models (e.g., prediction map reconstruction accuracy). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation framework to learn the above characteristics. Extensive experiments demonstrate the value of RainNet in training and evaluating downscaling models.
1 INTRODUCTION
Deep learning has made an enormous breakthrough in the field of computer vision and is extremely good at extracting valuable knowledge from massive amounts of data. In recent years, with the development of computer science, a deluge of Earth system data is continuously being collected from sensors all over the Earth and even in space. These ever-increasing, massive amounts of data with different sources and structures challenge the geoscience community, which lacks practical approaches to understand and further utilize the raw data (Reichstein et al. (2019)). Specifically, several preliminary works (Groenke et al. (2020); White et al. (2019); He et al. (2016); Ravuri et al. (2021); Angell & Sheldon (2018); Veillette et al. (2020)) introduce machine learning and deep learning frameworks to solve meteorological problems, e.g., spatial precipitation downscaling.
In this paper, we focus on the spatial precipitation downscaling task. Spatial precipitation downscaling is a procedure to infer high-resolution meteorological information from low-resolution variables, and it is one of the most important upstream components of meteorological tasks (Bauer et al. (2015)). The precision of weather and climate prediction is highly dependent on the resolution and reliability of the initial environmental input variables, and spatial precipitation downscaling is the most promising solution. Improving weather/climate forecasts and Geo-data quality saves tremendous money and lives; with a fiscal year 2020 budget of over $1 billion, NSF funds thousands of colleges in the U.S. to conduct research on these topics (NSF (2020)).
Unfortunately, looming issues hinder the research of spatial precipitation downscaling in the machine learning community: 1) Lack of "machine-learning ready" datasets. Existing machine-learning-based downscaling methods are only applied to ideal retrospective problems and verified on simulated datasets (e.g., mapping a bicubic downsampling of precipitation generated by a weather forecast model back to the original data (Berrisford et al. (2011))), which significantly weakens the credibility of the feasibility, practicability, and effectiveness of the methods. It is worth mentioning that data obtained by simulated degradation (e.g., bicubic) is completely different from real data, which is usually collected by two measurement systems (e.g., satellite and radar) with different precision. The lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and complex deep-learning models for precipitation downscaling. 2) Lack of tailored metrics to evaluate machine-learning-based frameworks. Unlike the deep learning (DL) and machine learning (ML) communities, scientists in meteorology usually employ maps/charts to assess downscaling models case by case based on domain knowledge (He et al. (2016); Walton et al. (2020)), which hinders the application of RainNet in the DL/ML communities. For example, He et al. (2016) use log-semivariance (a spatial metric for local precipitation) and quantile-quantile maps to analyze the maps. 3) An efficient downscaling deep-learning framework should be established. Contrary to image data, this real precipitation dataset covers various types of real meteorological phenomena (e.g., hurricanes, squalls, etc.) and shows physical characteristics (e.g., temporal misalignment, temporal sparsity, and fluid properties) that challenge downscaling algorithms. Traditional, computationally dense, physics-driven downscaling methods are powerless to handle the increasing meteorological data size and are not flexible to multiple data sources.
To alleviate these obstacles, we propose the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps spanning over 17 years, ready to help the evolution of deep models in spatial precipitation downscaling. The proposed dataset covers more than 9 million square kilometers of land area, and contains both wet and dry seasons and diverse meteorological phenomena. To facilitate DL/ML researchers in using RainNet, we introduce the 6 most relevant indices to evaluate downscaling models: mesoscale peak precipitation error (MPPE), heavy rain region error (HRRE), cumulative precipitation mean square error (CPMSE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). To further simplify the application of these indices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). Unlike video super-resolution, the motion of the precipitation region is non-rigid (i.e., fluid), whereas video super-resolution mainly concerns rigid-body motion estimation. To fully explore how to alleviate the mentioned predicament, we propose an implicit-dynamics-estimation-driven downscaling deep learning model. Our model hierarchically aligns adjacent precipitation maps, that is, it performs implicit motion estimation, which is very simple but exhibits highly competitive performance. Based on meteorological science, we also show that the dataset we constructed contains the full information needed to recover higher-resolution observations from lower-resolution ones.
The main contributions of this paper are:
• To the best of our knowledge, we present the first REAL (non-simulated) Large-Scale Spatial Precipitation Downscaling Dataset for deep learning;
• We introduce 2 simple metrics to evaluate downscaling models;
• We propose a downscaling model with strong competitiveness. We evaluate 14 competitive potential solutions on the proposed dataset, and analyze the feasibility and effectiveness of these solutions.
2 BACKGROUND
At the beginning of the 19th century, geoscientists recognized that predicting the state of the atmosphere could be treated as an initial value problem of mathematical physics, wherein future weather is determined by integrating the governing partial differential equations, starting from the observed current weather. Today, this paradigm translates into solving a system of nonlinear differential equations at about half a billion points per time step and accounting for dynamic, thermodynamic, radiative, and chemical processes working on scales from hundreds of meters to thousands of kilometers and from seconds to weeks (Bauer et al. (2015)). The Navier-Stokes and mass continuity equations (including the effect of the Earth's rotation), together with the first law of thermodynamics and the ideal gas law, represent the full set of prognostic equations in the atmosphere, describing the change in space and time of wind, pressure, density, and temperature (formulas are given in the supplementary material) (Bauer et al. (2015)). These equations have to be solved numerically using spatial and temporal discretization because of the mathematical intractability of obtaining analytical solutions, and this approximation creates a distinction between so-called resolved and unresolved scales of motion.
2.1 SPATIAL DOWNSCALING OF PRECIPITATION
The global weather forecast model, treated as a computational problem, relies on high-quality initial data input. Forecast error grows exponentially over time from this initial error in the input dataset. Downscaling is one of the most important approaches to improving initial input quality. Precipitation is one of the essential atmospheric variables related to daily life. It can easily be observed by all means, e.g., gauge stations, radar, and satellites. Applying downscaling methods to precipitation and creating high-resolution rainfall is far more meaningful than deriving other variables, and it is the most suitable initial task to test deep learning's power in geoscience. Traditional downscaling methods can be separated into dynamic and statistical downscaling.
Dynamic downscaling treats downscaling as an optimization problem constrained by physical laws. Dynamic downscaling methods find the most likely precipitation over space and time under pre-defined physical laws. It usually takes over 6 hours on supercomputers to downscale a 6-hour precipitation scenario globally (Courtier et al. (1994)). As dynamic downscaling relies on pre-defined, known macroscopic physics, a more flexible weather downscaling framework that could easily blend different sources of observations and describe more complex physical phenomena on different scales is desperately needed.
Statistical downscaling tries to speed up the dynamic downscaling process. The input of statistical downscaling is usually dynamic model results or two observation datasets on different scales. However, due to the quality of statistical downscaling results, people rarely apply statistical downscaling to weather forecasts. These methods are currently applied in tasks that do not require high data quality but rather a qualitative understanding, e.g., climate projection, which forecasts the weather for hundreds of years on coarse grids and uses statistical downscaling to obtain detailed knowledge of medium-scale future climate systems.
3 RAINNET: SPATIAL PRECIPITATION DOWNSCALING IMAGERY DATASET
3.1 DATA COLLECTION AND PROCESSING
To build a standard, realistic (non-simulated) downscaling dataset for computer vision, we selected the eastern coast of the United States, which covers a large region (7 million km²; 105°-65°W, 25°-50°N; GNU Free Documentation License 1.2) and has 20 years of high-quality precipitation observations. We collected two precipitation data sources: the National Stage IV QPE Product (StageIV (Nelson et al. (2016)); high resolution at 0.04° (approximately 4 km); GNU Free Documentation License 1.2) and the North American Land Data Assimilation System (NLDAS (Xia et al. (2012)); low resolution at 0.125° (approximately 13 km)). StageIV is mosaicked into a national product at the National Centers for Environmental Prediction (NCEP) from the regional hourly/6-hourly multi-sensor (radar + gauges) precipitation analyses (MPEs) produced by the 12 River Forecast Centers (RFCs) over the continental United States, with some manual quality control done at the RFCs. NLDAS is constructed as quality-controlled, spatially-and-temporally consistent datasets from gauges and remote sensors to support modeling activities. Both products are updated hourly and are available from 2002 to the current age.
In our dataset, we further selected the eastern coast region for the rainy season (July-November, covering the hurricane season; hurricanes pour over 10% of annual rainfall in less than 10 days). We matched the coordinate systems of both products to the lat-lon system and further labeled all the hurricane periods occurring in the last 17 years. These heavy rain events are the largest challenge for weather forecasting and downscaling products, as heavy rain can stimulate a wide-spreading flood, threatening local lives and requiring public evacuation. If people underestimate the rainfall, a potential flood would be underrated, while overestimating the rainfall would lead to unnecessary evacuation orders and flood protection, which is also costly.
3.2 DATASET STATISTICS
At the time of this work, we have collected and processed precipitation data for the rainy seasons of 17 years, from 2002 to 2018: one precipitation map pair per hour, 24 precipitation map pairs per day. In total, we have collected 85 months, or 62,424 hours, amounting to 62,424 pairs of high-resolution and low-resolution precipitation maps. The size of the high-resolution precipitation map is 624 × 999, and the size of the low-resolution map is 208 × 333. Various meteorological phenomena and precipitation conditions (e.g., hurricanes, squall lines, etc.) are covered in these data. The precipitation map pairs in RainNet are stored in HDF5 files that take up 360 GB of disk space. We select 2 typical meteorological phenomena and visualize them in Fig. 1. Our data are collected from satellites, radars, gauge stations, etc., covering the inherent working characteristics of different meteorological measurement systems. Compared with traditional methods that generate data at different resolutions through physical model simulation, our dataset is of great help for deep models to learn real meteorological laws.
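For readers who want to load the map pairs, a minimal sketch follows; the file naming and the HDF5 key layout ("lr", "hr") are assumptions of this sketch, not the dataset's documented schema.

import h5py

# read one low/high-resolution precipitation map pair (assumed layout)
with h5py.File("rainnet_2008_07.h5", "r") as f:
    lr = f["lr"][0]  # (208, 333) low-resolution map
    hr = f["hr"][0]  # (624, 999) high-resolution map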
3.3 DATASET ANALYSIS
To help design a more appropriate and effective precipitation downscaling model, we have explored the properties of the dataset in depth. As mentioned above, our dataset is collected from multiple sensor sources (e.g., satellite, weather radar, etc.), which makes the data show a certain extent of misalignment. Our efforts here are not able to vanquish the misalignment; it is an intrinsic problem brought by the fusion of multi-sensor meteorological data. Limited by observation methods (e.g., satellites can only collect data when they fly over the observation area), meteorological data are usually temporally sparse; e.g., in our dataset, the sampling interval between two precipitation maps is one hour. This temporal sparsity leads to serious difficulties in utilizing precipitation sequences. Additionally, the movement of the precipitation position is directly related to the clouds. It is a fluid movement process that is completely different from the rigid-body movement addressed in super-resolution. At the same time, clouds grow or dissipate while flowing and may even form new clouds, which further complicates the process. In a nutshell, although existing SR is a potential solution for downscaling, there is a big difference between the two tasks, especially regarding the three characteristics of downscaling mentioned above: temporal misalignment, temporal sparsity, and fluid properties, which make the dynamic estimation of precipitation more challenging.
4 EVALUATION METRICS
Due to the differences between downscaling and traditional image super-resolution, metrics that work well for SR tasks may not be sufficient for precipitation downscaling. By gathering metrics from the meteorological literature (Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020)), we select and rename the 6 most common metrics (a metric may have multiple names in different works) to reflect downscaling quality: mesoscale peak precipitation error (MPPE), cumulative precipitation mean square error (CPMSE), heavy rain region error (HRRE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). These 6 metrics can be separated into reconstruction metrics (MPPE, HRRE, CPMSE, AMMD) and dynamic metrics (HRTS and CMD).
The MPPE (mm/hour) is calculated as the difference of the top quantile between the generated and real rainfall datasets, which considers both the spatial and temporal properties of mesoscale meteorological systems, e.g., hurricanes and squalls. This metric is used in most of these papers (for example, Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020) suggest quantile analysis to evaluate downscaling quality).
The CPMSE (mm²/hour²) measures the cumulative rainfall difference at each pixel over the time axis of the test set, which shows the spatial reconstruction property. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020), calculated as the pixel-level difference of monthly rainfall, and in He et al. (2016) as a pixel-level difference of cumulative rainfall over records of different lengths.
The HRRE (km²) measures the difference in heavy rain coverage at each time slice between the generated and labeled test sets, which shows the temporal reconstruction ability of the models. The AMMD (radian) measures the average angle difference between main rainfall clusters. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020) as rainfall coverage at an indefinite number of precipitation levels, and in He et al. (2016); Pryor & Schoof (2020) as a continuous spatial analysis.
As a single-variable dataset, it is hard to evaluate the ability of different models to capture precipitation dynamics when temporal information is not included (a multi-variable dataset may include wind speed, a typical variable representing dynamics). So here we introduce first-order temporal and spatial variables to evaluate the dynamical properties of downscaling results. Similar approaches are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020). The CMD (km) physically compares the location difference of the main rainfall systems between the generated and labeled test sets, which can also be understood as the RMSE of the first-order derivative of precipitation data along the spatial directions. The HRTS (km/hour) measures the difference in the moving speed of the main rainfall system between the generated and labeled test sets, which shows the ability of models to capture dynamic properties; it can also be understood as the RMSE of the first-order derivative of precipitation data along the temporal direction. Similar metrics are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020) as auto-regression analysis and differential analysis.
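Since both CMD and HRTS can be read as the RMSE of a first-order derivative, they admit a compact sketch; this simplified NumPy version is our illustration of the description above, not the official metric code.

import numpy as np

def first_order_rmse(pred, gold, axis):
    """RMSE of the first-order difference of precipitation stacks of shape
    (time, height, width): axis=0 approximates the HRTS-style dynamics
    error, spatial axes approximate the CMD-style location error."""
    dp = np.diff(pred, axis=axis)
    dg = np.diff(gold, axis=axis)
    return float(np.sqrt(np.mean((dp - dg) ** 2)))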
More details about the metrics and their equations are given in the supplementary materials. One metric group (MPPE, HRRE, CPMSE, AMMD) mainly measures the rainfall deviation between the generated precipitation maps and the ground truth; the other group (HRTS and CMD) mainly measures the dynamic deviation of the generated precipitation maps. To further simplify the application of these indices, we abstract them into two weighted and summed metrics: the Precipitation Error Measure (PEM) and the Precipitation Dynamics Error Measure (PDEM). We first align the dimensions of the two groups of metrics. Following Gupta et al. (1999), every metric is converted to a Percent Bias (PBIAS) so that the metrics are suitable for weighting. The original definition of PBIAS is the bias divided by the observation, $\mathrm{PBIAS} = |Q_{\mathrm{model}} - Q_{\mathrm{obs}}| / |Q_{\mathrm{obs}}|$. Here we rewrite the original metrics as PBIAS by dividing each metric by the annual mean observation (AMO) of its original variable, $\mathrm{PBIAS}_i = |\mathrm{Metrics}_i| / |\mathrm{AMO}_i|$ with $\mathrm{Metrics}_i \in \{\mathrm{MPPE}, \mathrm{HRRE}, \mathrm{CPMSE}, \mathrm{AMMD}, \mathrm{HRTS}, \mathrm{CMD}\}$. In our dataset, $\mathrm{AMO}_{\mathrm{MPPE}} = 64$, $\mathrm{AMO}_{\mathrm{HRRE}} = 533$, $\mathrm{AMO}_{\mathrm{CPMSE}} = 0.64$, $\mathrm{AMO}_{\mathrm{AMMD}} = 332$, $\mathrm{AMO}_{\mathrm{HRTS}} = 15$, and $\mathrm{AMO}_{\mathrm{CMD}} = 26$. The reconstruction metrics are then ensembled into a single metric with equal weights, $\mathrm{PEM} = \sum_i 0.25 \cdot \mathrm{PBIAS}^{\mathrm{PEM}}_i$. Following the same procedure, we ensemble the two dynamic metrics (HRTS and CMD) into a single metric, $\mathrm{PDEM} = \sum_i 0.5 \cdot \mathrm{PBIAS}^{\mathrm{PDEM}}_i$.
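A minimal sketch of the PEM/PDEM ensembling is given below, using the AMO constants listed above; it assumes the six raw metrics have already been computed for a model.

```python
# Annual mean observations (AMO) of the original variables, as listed above.
AMO = {"MPPE": 64, "HRRE": 533, "CPMSE": 0.64, "AMMD": 332,
       "HRTS": 15, "CMD": 26}

def pem_pdem(metrics):
    """metrics: dict with the six raw metric values for one model."""
    pbias = {k: abs(v) / AMO[k] for k, v in metrics.items()}
    pem = 0.25 * sum(pbias[k] for k in ("MPPE", "HRRE", "CPMSE", "AMMD"))
    pdem = 0.5 * sum(pbias[k] for k in ("HRTS", "CMD"))
    return pem, pdem
```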
We also include the most commonly used metric, RMSE, as a single metric in our list. RMSE can evaluate both the reconstruction and the dynamic properties of the downscaling results.
5 APPLICATIONS OF RAINNET IN SPATIAL PRECIPITATION DOWNSCALING
As a potential solution, Super-Resolution (SR) frameworks are generally divided into Single-Image Super-Resolution (SISR) and Video Super-Resolution (VSR). VSR is able to leverage multi-frame information to restore images, which better matches the nature of downscaling; we demonstrate this judgment in Sec. 6.1. The VSR pipeline usually contains three components: deblurring, inter-frame alignment, and super-resolution, where deblurring and inter-frame alignment are implemented by a motion estimation module. There are four families of motion estimation frameworks: 1) RNN-based (Keys (1981); Tao et al. (2017); Huang et al. (2015); Haris et al. (2019)); 2) optical flow (Xue et al. (2019)); 3) deformable-convolution-based (Tian et al. (2020); Xiang et al. (2020); Wang et al. (2019)); and 4) temporal concatenation (Jo et al. (2018); Caballero et al. (2017); Liao et al. (2015)). In addition, another motion estimation scheme was first proposed for the video denoising task (Tassano et al. (2020)) and achieves excellent denoising performance. Inspired by Tassano et al. (2020), we design an implicit dynamics estimation model for spatial precipitation downscaling. It is worth mentioning that our proposed model and the above four frameworks together form a relatively complete candidate set of dynamics estimation solutions.
Proposed Framework. As shown in Fig. 2, our framework consists of two jointly trained components: an implicit dynamic estimation module and a downscaling backbone. Suppose there are $N$ adjacent low-resolution precipitation maps $\{I^L_{T-\frac{N-1}{2}}, \dots, I^L_T, \dots, I^L_{T+\frac{N-1}{2}}\}$. The task is to reconstruct the high-resolution precipitation map $I^H_T$ corresponding to $I^L_T$. The implicit dynamic estimation module is composed of multiple weight-sharing vanilla networks $\mathcal{A} = \{A_1, \dots, A_{N-2}\}$ ($N = 5$ in this paper). Each vanilla network receives three adjacent frames as input and outputs an intermediate result, which can be considered a frame with implicit dynamic alignment. We concatenate all the intermediate frames as the input of the next module; the specific structure of the vanilla network can be found in the supplementary materials. The main task of the downscaling backbone is to restore the high-resolution precipitation map $I^H_T$ from the aligned intermediate frames. To make full use of multi-scale information, we use multiple Residual-in-Residual Dense Blocks (Wang et al. (2018)) in the network, and we employ interpolation+convolution (Odena et al. (2016)) as the up-sampling operator to reduce checkerboard artifacts. After processing by the downscaling backbone, we obtain the final estimated HR map $\hat{I}^H_T$.
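A skeletal PyTorch sketch of this pipeline is shown below. The VanillaAlign block and the plain-convolution backbone are placeholders (the actual vanilla network is described in the supplementary materials, and the real backbone stacks RRDBs); the 3× scale factor matches the 208×333 → 624×999 map sizes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaAlign(nn.Module):
    """Hypothetical stand-in for the weight-shared vanilla network: maps
    three adjacent LR frames to one implicitly aligned intermediate frame."""
    def __init__(self, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, triplet):          # (B, 3, H, W)
        return self.body(triplet)        # (B, 1, H, W)

class Downscaler(nn.Module):
    def __init__(self, n_frames=5, scale=3):
        super().__init__()
        self.align = VanillaAlign()      # one network shared by all triplets
        self.scale = scale
        self.backbone = nn.Sequential(   # placeholder for the RRDB stack
            nn.Conv2d(n_frames - 2, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True))
        self.to_hr = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, frames):           # (B, N, H, W) with N = 5
        triplets = [frames[:, i:i + 3] for i in range(frames.shape[1] - 2)]
        aligned = torch.cat([self.align(t) for t in triplets], dim=1)
        feat = self.backbone(aligned)
        # Interpolation + convolution up-sampling to avoid checkerboards.
        feat = F.interpolate(feat, scale_factor=self.scale,
                             mode="bilinear", align_corners=False)
        return self.to_hr(feat)
```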
Model objective. The downscaling task is essentially to restore high-resolution precipitation maps. Drawing on the super-resolution literature, we apply an $L_1$ loss and a perceptual loss (Johnson et al. (2016)) as the training loss of our model:
$$\mathcal{L}(\hat{I}^H_T, I^H_T) = \| \hat{I}^H_T - I^H_T \|_1 + \lambda \| \phi(\hat{I}^H_T) - \phi(I^H_T) \|_2, \qquad (1)$$
where $\phi$ denotes the pre-trained VGG19 network (Simonyan & Zisserman (2015)); we select Relu5-4 (without the activator (Wang et al. (2018))) as the output layer, and $\lambda = 20$ is the coefficient balancing the loss terms.
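A sketch of this objective in PyTorch is given below. The feature-slice index and the pretrained-weights argument depend on the torchvision version, and implementing the perceptual term as a mean squared feature difference is an assumption on our part; single-channel rain maps are repeated to three channels to fit VGG, and the usual ImageNet input normalization is omitted for brevity.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualL1Loss(nn.Module):
    """Sketch of Eq. (1): L1 plus a VGG19 perceptual term taken from the
    conv5_4 output before its activation (relu5-4 without the activator)."""
    def __init__(self, lam=20.0):
        super().__init__()
        vgg = vgg19(pretrained=True).features[:35]  # up to conv5_4 output
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg.eval()
        self.lam = lam

    def forward(self, sr, hr):            # (B, 1, H, W) rain maps
        sr3, hr3 = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)
        l1 = torch.mean(torch.abs(sr - hr))
        lp = torch.mean((self.vgg(sr3) - self.vgg(hr3)) ** 2)
        return l1 + self.lam * lp
```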
6 EXPERIMENTAL EVALUATION
We conduct spatial precipitation downscaling experiments to illustrate the application of our proposed RainNet and to evaluate the effectiveness of the benchmark downscaling frameworks. Following the mainstream evaluation protocol of the DL/ML communities, cross-validation is employed. In detail, we divide the dataset into 17 parts by year (the rainy season, July through November, of each year from 2002 to 2018) and sequentially employ each year as the test set and the remaining 16 years as the training set, i.e., 17-fold cross-validation. All models share the same training settings and hyperparameters during the training phase. These data cover various complicated precipitation situations such as hurricanes, squall lines, different levels of rain, and sunny days. From a meteorological perspective, it is sufficient to select the rainy season of each year as the test set, as the climate of one area is normally stable.
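The leave-one-year-out protocol can be written compactly as below (a sketch; each fold's data are the July–November maps of the listed years).

```python
years = list(range(2002, 2019))  # rainy seasons 2002.7-11 ... 2018.7-11
folds = [(test, [y for y in years if y != test]) for test in years]
assert len(folds) == 17          # 17-fold cross-validation
```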
6.1 BASELINES
SISR/VSR and spatial precipitation downscaling are similar to some extent, so we argue that SR models can be applied to this task as benchmark models. The input of SISR is a single image, from which the model infers a high-resolution image; its main focus is generating high-quality texture details to achieve pleasing visual effects. In contrast, VSR models take multiple frames as input (e.g., 3 frames, 5 frames, etc.); in our experiments we employ 5 frames. The core idea of VSR models is to increase resolution by complementing texture information across frames. It is worth mentioning that VSR models are generally equipped with a motion estimation module to alleviate the challenge that object motion poses to inter-frame information registration.
We evaluated 7 state-of-the-art SISR frameworks, i.e., Bicubic (Keys (1981)), SRCNN (Dong et al. (2016)), SRGAN (Ledig et al. (2017)), EDSR (Lim et al. (2017)), ESRGAN (Wang et al. (2018)), DBPN (Haris et al. (2018)), and RCAN (Zhang et al. (2018)), and 5 VSR frameworks, i.e., SRGAN-V, EDSR-V, ESRGAN-V, RBPN (Haris et al. (2019)), and EDVR (Wang et al. (2019)), of which 3 VSR methods (SRGAN-V, EDSR-V, ESRGAN-V) are modified from their SISR counterparts. In particular, we build SRGAN-V, EDSR-V, and ESRGAN-V by concatenating multiple frames of precipitation maps as the input of the model. In addition, we also evaluated the traditional statistical method Kriging (Stein (2012)), which is widely applied in weather forecasting. We use the public implementations of the deep models (SRCNN: https://github.com/yjn870/SRCNN-pytorch; SRGAN: https://github.com/leftthomas/SRGAN; EDSR: https://github.com/sanghyun-son/EDSR-PyTorch; ESRGAN: https://github.com/xinntao/ESRGAN; DBPN: https://github.com/alterzero/DBPN-Pytorch; RCAN: https://github.com/yulunzhang/RCAN; RBPN: https://github.com/alterzero/RBPN-PyTorch; EDVR: https://github.com/xinntao/EDVR) and the PyTorch implementation of Bicubic. The 8 metrics mentioned above are used to quantitatively evaluate the performance of these SR models and our method. Further, we select some disastrous weather events as samples for qualitative analysis to test the models' ability to learn the dynamic properties of the weather system. We use 4 NVIDIA 2080 Ti GPUs and train all models with the following settings: the batch size is 24; precipitation maps are randomly cropped to 64×64; we employ the Adam optimizer with beta1 = 0.9 and beta2 = 0.99; the initial learning rate is 0.001 and is reduced to 1/10 every 50 epochs, for a total of 200 trained epochs. We evaluate the benchmark frameworks with 17-fold cross-validation; the downscaling performances are shown in Tab. 1. We divide the indicators mentioned above into two groups: PDEM measures a model's ability to learn the dynamics of precipitation, and PEM measures its ability to reconstruct precipitation.
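For reference, a self-contained sketch of this shared training configuration follows. The model, data, and loss below are toy stand-ins for the actual downscaling networks, the RainNet data loader, and the Eq. (1) objective.

```python
import torch
import torch.nn as nn

# Toy stand-ins so the snippet runs on its own.
model = nn.Conv2d(5, 1, 3, padding=1)          # placeholder network
criterion = nn.L1Loss()                        # placeholder for Eq. (1)
loader = [(torch.rand(24, 5, 64, 64),          # batch of 24 LR stacks,
           torch.rand(24, 1, 64, 64))]         # random 64x64 crops

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, betas=(0.9, 0.99))
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)
for epoch in range(200):
    for lr_maps, hr_maps in loader:
        optimizer.zero_grad()
        loss = criterion(model(lr_maps), hr_maps)
        loss.backward()
        optimizer.step()
    scheduler.step()                           # lr / 10 every 50 epochs
```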
From Tab. 1, we can see that the overall performance of the VSR methods is better than that of the SISR models, which shows that the dynamic properties mentioned above are extremely important for the downscaling model. Furthermore, it can be seen from Fig. 3 that the SISR methods cluster in the upper-right corner of the scatter plot while the VSR methods concentrate in the lower-left corner, which further shows that the dynamic properties of the VSR methods are overall better than those of the SISR methods. In addition, our method achieves the best performance in RMSE and PDEM and the second-best performance in PEM. These scores show that the implicit dynamic estimation framework is feasible and effective. It is worth mentioning that the traditional downscaling method Kriging performs better than many deep learning models (e.g., SRGAN, ESRGAN).
6.1.1 QUALITATIVE ANALYSIS
We visualize the tropical cyclone precipitation map at the 166th hour (the 6th) of September 2010 and the high-resolution precipitation maps generated by the different methods. As shown in Fig. 4, the best perceptual results are produced by EDVR and our framework. Zooming into the result images, the precipitation maps generated by SRGAN and EDSR present obvious checkerboard artifacts; the likely reason is the relatively simple and sparse texture pattern of precipitation maps. The results generated by Bicubic, RCAN, Kriging, and SRCNN are over-smooth, and DBPN cannot even reconstruct the eye of the hurricane. In particular, the result generated by Kriging is as fuzzy as the input LR precipitation map. In conclusion, the visual quality of the VSR methods is generally better than that of the SISR methods and the traditional method. From the perspective of both quantitative and qualitative analysis, the dynamics estimation framework is critical for downscaling.
7 CONCLUSION
In this paper, we built the first large-scale real precipitation downscaling dataset for the deep learning community. The dataset has 62,424 pairs of HR and LR precipitation maps in total. We believe this dataset will further accelerate research on precipitation downscaling. Furthermore, we analyze the problem in depth and put forward three key challenges: temporal misalignment, temporal sparsity, and fluid properties. In addition, we propose an implicit dynamic estimation model to alleviate these challenges. At the same time, we evaluated the mainstream SISR and VSR models and found that none of them solves RainNet's problems well; the downscaling task on this dataset therefore remains very challenging.
Several open problems remain. Currently, the data domain of this research is limited to the eastern U.S.; in future research, we would enlarge the dataset to a larger domain. The dataset also contains only a single variable at present; in the future, we may include more variables, e.g., temperature and wind speed.
REFERENCES
Rico Angell and Daniel R. Sheldon. Inferring latent velocities from weather radar data using Gaussian processes. In NeurIPS, pp. 8998–9007, 2018.

Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 2015.

P. Berrisford, D.P. Dee, P. Poli, R. Brugge, Mark Fielding, Manuel Fuentes, P.W. Kållberg, S. Kobayashi, S. Uppala, and Adrian Simmons. The ERA-Interim archive version 2.0. 2011.

Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In CVPR, 2017.

Philippe Courtier, J.-N. Thépaut, and Anthony Hollingsworth. A strategy for operational implementation of 4D-Var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 1994.

Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 2016.

Marie Ekström. Metrics to identify meaningful downscaling skill in WRF simulations of intense rainfall events. Environmental Modelling & Software, 79:267–284, 2016.

Brian Groenke, Luke Madaus, and Claire Monteleoni. ClimAlign: Unsupervised statistical downscaling of climate variables via normalizing flows. CoRR, 2020.

Hoshin Vijai Gupta, Soroosh Sorooshian, and Patrice Ogou Yapo. Status of automatic calibration for hydrologic models: Comparison with multilevel expert calibration. Journal of Hydrologic Engineering, 4(2):135–143, 1999.

Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In CVPR, 2018.

Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Recurrent back-projection network for video super-resolution. In CVPR, 2019.

Xiaogang He, Nathaniel W. Chaney, Marc Schleiss, and Justin Sheffield. Spatial downscaling of precipitation using adaptable random forests. Water Resources Research, 2016.

Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. In NeurIPS, 2015.

Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In CVPR, 2018.

Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In ECCV, 2016.

Robert Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981.

Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In CVPR, 2017.

Renjie Liao, Xin Tao, Ruiyu Li, Ziyang Ma, and Jiaya Jia. Video super-resolution via deep draft-ensemble learning. In ICCV, 2015.

Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In CVPR Workshops, 2017.

Douglas Maraun, Martin Widmann, José M. Gutiérrez, Sven Kotlarski, Richard E. Chandler, Elke Hertig, Joanna Wibig, Radan Huth, and Renate A.I. Wilcke. VALUE: A framework to validate downscaling approaches for climate change studies. Earth's Future, 3(1):1–14, 2015.

Brian R. Nelson, Olivier P. Prat, D.-J. Seo, and Emad Habib. Assessment and implications of NCEP Stage IV quantitative precipitation estimates for product intercomparisons. Weather and Forecasting, 2016.

NSF. NSF Geosciences Directorate funding by institution type. AGI Report, 2020.

Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.

S.C. Pryor and J.T. Schoof. Differential credibility assessment for statistical downscaling. Journal of Applied Meteorology and Climatology, 59(8):1333–1349, 2020.

Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skillful precipitation nowcasting using deep generative models of radar. Nature, 2021.

Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, et al. Deep learning and process understanding for data-driven earth system science. Nature, 2019.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In ICLR, 2015.

Michael L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 2012.

Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In ICCV, 2017.

Matias Tassano, Julie Delon, and Thomas Veit. FastDVDnet: Towards real-time deep video denoising without flow estimation. In CVPR, 2020.

Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. TDAN: Temporally-deformable alignment network for video super-resolution. In CVPR, 2020.

Mark S. Veillette, Siddharth Samsi, and Christopher J. Mattioli. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. In NeurIPS, 2020.

Daniel Walton, Neil Berg, David Pierce, Ed Maurer, Alex Hall, Yen-Heng Lin, Stefan Rahimi, and Dan Cayan. Understanding differences in California climate projections produced by dynamical and statistical downscaling. Journal of Geophysical Research: Atmospheres, 2020.

Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: Enhanced super-resolution generative adversarial networks. In ECCV, 2018.

Xintao Wang, Kelvin C.K. Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In CVPR Workshops, 2019.

B.L. White, A. Singh, and A. Albert. Downscaling numerical weather models with GANs. AGUFM, 2019.

Adrienne M. Wootten, Elias C. Massoud, Agniv Sengupta, Duane E. Waliser, and Huikyo Lee. The effect of statistical downscaling on the weighting of multi-model ensembles of precipitation. Climate, 8(12):138, 2020.

Youlong Xia, Kenneth Mitchell, Michael Ek, Justin Sheffield, Brian Cosgrove, Eric Wood, Lifeng Luo, Charles Alonge, Helin Wei, Jesse Meng, Ben Livneh, Dennis Lettenmaier, Victor Koren, Qingyun Duan, Kingtse Mo, Yun Fan, and David Mocko. Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. Journal of Geophysical Research: Atmospheres, 2012.

Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, and Chenliang Xu. Zooming Slow-Mo: Fast and accurate one-stage space-time video super-resolution. In CVPR, 2020.

Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T. Freeman. Video enhancement with task-oriented flow. International Journal of Computer Vision, 2019.

Xuebin Zhang and Feng Yang. RClimDex (1.0) user manual. Climate Research Branch, Environment Canada, 22, 2004.

Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In ECCV, 2018.

1. What is the focus of the paper regarding precipitation downscaling?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and evaluation metrics?
3. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content?
4. Are there any concerns regarding the comparisons made between the proposed method and baseline methods?
5. How does the reviewer view the significance of this work in relation to the existing literature on applying deep learning to statistical downscaling?

Summary Of The Paper
The authors present a novel, high-resolution dataset for precipitation downscaling collected from observational data in the southeastern US. In addition to the dataset, several novel metrics for evaluating statistical downscaling methods are presented, as well as a novel deep learning algorithm based on video super-resolution and "implicit dynamics" estimation.
Review
This work was recently submitted to NeurIPS 2021. I would highly recommend that all reviewers and area chairs read the discussion there and compare the previously submitted version of this work to the version that is under review here.
EDIT: Unfortunately, NeurIPS changed their visibility policy in 2021 for rejected submissions, so the discussion is no longer available to the public. Apologies for the confusion.
The authors have made a few improvements to address the concerns raised in this original discussion:
Cross validation results are presented to provide a more robust comparison between models.
The discussion of metrics in section 4 is much more thorough than in the previous version.
Unfortunately, several issues remain unaddressed.
Quantitative results do not include standard errors or any indication of the variance in cross validation. This was one of the top concerns raised in the previous discussion. The per-fold results posted by the authors in this thread show a substantial amount of variance in the relative ranking of the evaluated methods between folds, so this is very important for transparency and fair model comparison.
Inadequate discussion of hyperparameter tuning, particularly for the baselines. The description of hyperparameter settings improved in this version of the work, but the discussion on how hyperparameters were selected did not. There is no discussion of hyperparameter tuning methodology and/or how the hyperparameters for baseline methods were chosen. It is both unfair and uninteresting to compare a model which was tuned for a statistical downscaling task to a baseline which was tuned for natural image super resolution.
Inadequate framing of work in context of existing literature. The literature review is sparse and does not provide any discussion of how the authors' work fits into the broader context of the literature on applying deep learning to statistical downscaling. There is also no discussion of existing frameworks often used for downscaling evaluation, such as VALUE and Climdex (citations are provided but there are no mentions in the text).
Poor presentation of qualitative results. Figure 4 in particular is difficult to evaluate due to the poor rendering quality and presentation format. As mentioned in the previous discussion, the low/high resolution ground truth are too small relative to the other images for meaningful visual comparison.
Language and formatting errors. The text is still littered with typos, grammatical errors, and problematic (sometimes even confusing) diction. Several such issues that were pointed out directly by reviewers in the previous discussion still remain, which seems to indicate that the authors did not bother to further proof-read or revise the text. |
ICLR | Title
RainNet: A Large-Scale Imagery Dataset for Spatial Precipitation Downscaling
Abstract
Contemporary deep learning frameworks have been applied to solve meteorolog1 ical problems (e.g., front detection, synthetic radar generation, precipitation now2 casting, e.t.c.) and have achieved highly promising results. Spatial precipitation 3 downscaling is one of the most important meteorological problems. However, 4 the lack of a well-organized and annotated large-scale dataset hinders the training 5 and verification of more effective and advancing deep-learning models for precip6 itation downscaling. To alleviate these obstacles, we present the first large-scale 7 spatial precipitation downscaling dataset named RainNet, which contains more 8 than 62, 400 pairs of high-quality low/high-resolution precipitation maps for over 9 17 years, ready to help the evolution of deep models in precipitation downscal10 ing. Specifically, the precipitation maps carefully collected in RainNet cover var11 ious meteorological phenomena (e.g., hurricane, squall, e.t.c.), which is of great 12 help to improve the model generalization ability. In addition, the map pairs in 13 RainNet are organized in the form of image sequences (720 maps per month or 14 1 map/hour), showing complex physical properties, e.g., temporal misalignment, 15 temporal sparse, and fluid properties. Two machine-learning-oriented metrics are 16 specifically introduced to evaluate or verify the comprehensive performance of the 17 trained model, (e.g., prediction maps reconstruction accuracy). To illustrate the 18 applications of RainNet, 14 state-of-the-art models, including deep models and 19 traditional approaches, are evaluated. To fully explore potential downscaling so20 lutions, we propose an implicit physical estimation framework to learn the above 21 characteristics. Extensive experiments demonstrate that the value of RainNet in 22 training and evaluating downscaling models. 23
N/A
Contemporary deep learning frameworks have been applied to solve meteorolog-1 ical problems (e.g., front detection, synthetic radar generation, precipitation now-2 casting, e.t.c.) and have achieved highly promising results. Spatial precipitation3 downscaling is one of the most important meteorological problems. However,4 the lack of a well-organized and annotated large-scale dataset hinders the training5 and verification of more effective and advancing deep-learning models for precip-6 itation downscaling. To alleviate these obstacles, we present the first large-scale7 spatial precipitation downscaling dataset named RainNet, which contains more8 than 62, 400 pairs of high-quality low/high-resolution precipitation maps for over9 17 years, ready to help the evolution of deep models in precipitation downscal-10 ing. Specifically, the precipitation maps carefully collected in RainNet cover var-11 ious meteorological phenomena (e.g., hurricane, squall, e.t.c.), which is of great12 help to improve the model generalization ability. In addition, the map pairs in13 RainNet are organized in the form of image sequences (720 maps per month or14 1 map/hour), showing complex physical properties, e.g., temporal misalignment,15 temporal sparse, and fluid properties. Two machine-learning-oriented metrics are16 specifically introduced to evaluate or verify the comprehensive performance of the17 trained model, (e.g., prediction maps reconstruction accuracy). To illustrate the18 applications of RainNet, 14 state-of-the-art models, including deep models and19 traditional approaches, are evaluated. To fully explore potential downscaling so-20 lutions, we propose an implicit physical estimation framework to learn the above21 characteristics. Extensive experiments demonstrate that the value of RainNet in22 training and evaluating downscaling models.23
1 INTRODUCTION24
Deep learning has made an enormous breakthrough in the field of computer vision, which is ex-25 tremely good at extracting valuable knowledge from numerous amounts of data. In recent years,26 with computer science development, a deluge of Earth system data is continuously being obtained,27 coming from sensors all over the earth and even in space. These ever-increasing massive amounts of28 data with different sources and structures challenge the geoscience community, which lacks practi-29 cal approaches to understand and further utilize the raw data (Reichstein et al. (2019)). Specifically,30 several preliminary works (Groenke et al. (2020); White et al. (2019); He et al. (2016); Ravuri et al.31 (2021); Angell & Sheldon (2018); Veillette et al. (2020)) try to introduce machine learning and deep32 learning frameworks to solve meteorological problems, e.g., spatial precipitation downscaling.33
In this paper, we focus on the spatial precipitation downscaling task. Spatial precipitation down-34 scaling is a procedure to infer high-resolution meteorological information from low-resolution vari-35 ables, which is one of the most important upstream components for meteorological task (Bauer et al.36 (2015)). The precision of weather and climate prediction is highly dependent on the resolution and37 reliability of the initial environmental input variables, and spatial precipitation downscaling is the38 most promising solution. The improvement of the weather/climate forecast and Geo-data quality39 saves tremendous money and lives; with the fiscal year 2020 budget over $1 billion, NSF funds40 thousands of colleges in the U.S. to research on these topics (NSF (2020)).41
Unfortunately, there are looming issues hinders the research of spatial precipitation downscaling42 in the machine learning community: 1). Lack of ”machine-learning ready” datasets. The existing43
machine-learning-based downscaling methods are only applied to ideal retrospective problems and44 verified on simulated datasets (e.g., mapping bicubic of precipitation generated by weather fore-45 cast model to original data (Berrisford et al. (2011))), which significantly weakens the credibility46 of the feasibility, practicability, and effectiveness of the methods. It is worth mentioning that the47 data obtained by the simulated degradation methods (e.g., bicubic) is completely different from the48 real data usually collected by two measurement systems (e.g., satellite and radar) with different49 precision. The lack of a well-organized and annotated large-scale dataset hinders the training and50 verification of more effective and complex deep-learning models for precipitation downscaling. 2).51 Lack of tailored metrics to evaluate machine-learning-based frameworks. Unlike deep learning (DL)52 and machine learning (ML) communities, scientists in meteorology usually employ maps/charts to53 assessing downscaling models case by case based on domain knowledge (He et al. (2016); Walton54 et al. (2020)), which hinders the application of Rainnet in DL/ML communities. For example, (He55 et al. (2016)) use log-semivariance (spatial metrics for local precipitation), quantile-quantile maps56 to analyzing the maps. 3). an efficient downscaling deep-learning framework should be established.57 Contrary to image data, this real precipitation dataset covers various types of real meteorological58 phenomena (e.g., Hurricane, Squall, e.t.c.), and shows the physical characters (e.g., temporal mis-59 alignment, temporal sparse and fluid properties, e.t.c.) that challenge the downscaling algorithms.60 Traditional computationally dense physics-driven downscaling methods are powerless to handle the61 increasing meteorological data size and flexible to multiple data sources.62
To alleviate these obstacles, we propose the first large-scale spatial precipitation downscaling dataset63 named RainNet, which contains more than 62, 400 pairs of high-quality low/high-resolution precip-64 itation maps for over 17 years, ready to help the evolution of deep models in spatial precipitation65 downscaling. The proposed dataset covers more than 9 million square kilometers of land area, which66 contains both wet and dry seasons and diverse meteorological phenomena. To facilitate DL/ML and67 other researchers to use RainNet, we introduce 6 most concerning indices to evaluate downscaling68 models: mesoscale peak precipitation error (MPPE), heavy rain region error (HRRE), cumulative69 precipitation mean square error (CPMSE), cluster mean distance (CMD), heavy rain transition speed70 (HRTS) and average miss moving degree (AMMD). In order to further simplify the application of in-71 dices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM)72 and Precipitation Dynamics Error Measure (PDEM). Unlike video super-resolution, the motion of73 the precipitation region is non-rigid (i.e., fluid), while video super-resolution mainly concerns rigid74 body motion estimation. To fully explore how to alleviate the mentioned predicament, we propose75 an implicit dynamics estimation driven downscaling deep learning model. Our model hierarchi-76 cally aligns adjacent precipitation maps, that is, implicit motion estimation, which is very simple77 but exhibits highly competitive performance. Based on meteorological science, we also proved that78 the dataset we constructed contained the full information people may need to recover the higher79 resolution observations from lower resolution ones.80
The main contributions of this paper are:81
• To the best of our knowledge, we present the first REAL (non-simulated) Large-Scale Spa-82 tial Precipitation Downscaling Dataset for deep learning;83
• We introduce 2 simple metrics to evaluate the downscaling models;84
• We propose a downscaling model with strong competitiveness. We evaluate 14 competitive85 potential solutions on the proposed dataset, and analyze the feasibility and effectiveness of86 these solutions.87
2 BACKGROUND88
At the beginning of the 19th century, geoscientists recognized that predicting the state of the atmo-89 sphere could be treated as an initial value problem of mathematical physics, wherein future weather90 is determined by integrating the governing partial differential equations, starting from the observed91 current weather. Today, this paradigm translates into solving a system of nonlinear differential92 equations at about half a billion points per time step and accounting for dynamic, thermodynamic,93 radiative, and chemical processes working on scales from hundreds of meters to thousands of kilo-94 meters and from seconds to weeks (Bauer et al. (2015)). The Navier–Stokes and mass continuity95 equations (including the effect of the Earth’s rotation), together with the first law of thermodynamics96
and the ideal gas law, represent the full set of prognostic equations in the atmosphere, describing the97 change in space and time of wind, pressure, density and temperature is described (formulas given in98 supplementary) (Bauer et al. (2015)). These equations have to be solved numerically using spatial99 and temporal discretization because of the mathematical intractability of obtaining analytical solu-100 tions, and this approximation creates a distinction between so-called resolved and unresolved scales101 of motion.102
2.1 SPATIAL DOWNSCALING OF PRECIPITATION103
The global weather forecast model, treated as a computational problem, relying on high-quality104 initial data input. The error of weather forecast would increase exponentially over time from this105 initial error of input dataset. Downscaling is one of the most important approaches to improve the106 initial input quality. Precipitation is one of the essential atmospheric variables that are related to daily107 life. It could easily be observed, by all means, e.g., gauge station, radar, and satellites. Applying108 downscaling methods to precipitation and creating high-resolution rainfall is far more meaningful109 than deriving other variables, while it is the most proper initial task to test deep learning’s power110 in geo-science. The traditional downscaling methods can be separated into dynamic and statistical111 downscaling.112
Dynamic downscaling treats the downscaling as an optimization problem constraint on the physical113 laws. The dynamic downscaling methods find the most likely precipitation over space and time114 under the pre-defined physical law. It usually takes over 6 hours to downscale a 6-hour precipitation115 scenario globally on supercomputers (Courtier et al. (1994)). As the dynamic downscaling relying116 on pre-defined known macroscopic physics, a more flexible weather downscaling framework that117
could easily blend different sources of observations and show the ability to describe more complex118 physical phenomena on different scales is desperately in need.119
Statistical downscaling is trying to speed up the dynamic downscaling process. The input of statisti-120 cal downscaling is usually dynamic model results or two different observation datasets on different121 scales. However, due to the quality of statistical downscaling results, people rarely apply statistical122 downscaling to weather forecasts. These methods are currently applied in the tasks not requir-123 ing high data quality but more qualitative understanding, e.g., climate projection, which forecasts124 the weather for hundreds of years on coarse grids and using statistical downscaling to get detailed125 knowledge of medium-scale future climate system.126
3 RAINNET: SPATIAL PRECIPITATION DOWNSCALING IMAGERY DATASET127
3.1 DATA COLLECTION AND PROCESSING128
To build up a standard realistic (non-simulated) downscaling dataset for computer vision, we129 selected the eastern coast of the United States, which covers a large region (7 million km2;130 105◦ ∼ 65◦W , 25◦ ∼ 50◦N , GNU Free Documentation License 1.2) and has a 20-year high-quality131 precipitation observations. We collected two precipitation data sources from National Stage IV QPE132 Product (StageIV (Nelson et al. (2016)); high resolution at 0.04◦ (approximately 4km), GNU Free133 Documentation License 1.2) and North American Land Data Assimilation System (NLDAS (Xia134 et al. (2012)); low resolution at 0.125◦ (approximately 13km)). StageIV is mosaicked into a na-135 tional product at National Centers for Environmental Prediction (NCEP), from the regional hourly/6-136 hourly multi-sensor (radar+gauges) precipitation analyses (MPEs) produced by the 12 River Fore-137 cast Centers over the continental United States with some manual quality control done at the River138 Forecast Centers (RFCs). NLDAS is constructed quality-controlled, spatially-and-temporally con-139 sistent datasets from the gauges and remote sensors to support modeling activities. Both products140 are hourly updated and both available from 2002 to the current age.141
In our dataset, we further selected the eastern coast region for rain season (July ∼ November,142 covering hurricane season; hurricanes pour over 10% annual rainfall in less than 10 days). We143 matched the coordinate system to the lat-lon system for both products and further labeled all the144 hurricane periods happening in the last 17 years. These heavy rain events are the largest challenge145 for weather forecasting and downscaling products. As heavy rain could stimulus a wide-spreading146 flood, which threatening local lives and arousing public evacuation. If people underestimate the147 rainfall, a potential flood would be underrated; while over-estimating the rainfall would lead to148 unnecessary evacuation orders and flood protection, which is also costly.149
3.2 DATASET STATISTICS150
At the time of this work, we have collected and processed precipitation data for the rainy season151 for 17 years from 2002 to 2018. One precipitation map pair per hour, 24 precipitation map pairs152 per day. In detail, we have collected 85 months or 62424 hours, totaling 62424 pairs of high-153 resolution and low-resolution precipitation maps. The size of the high-resolution precipitation map154 is 624 × 999, and the size of the low-resolution is 208 × 333. Various meteorological phenomena155 and precipitation conditions (e.g., hurricanes, squall lines, e.t.c.) are covered in these data. The156 precipitation map pairs in RainNet are stored in HDF5 files that make up 360 GB of disk space. We157 select 2 typical meteorological phenomena and visualize them in Fig. 1. Our data is collected from158 satellites, radars, gauge stations, e.t.c., which covers the inherent working characteristics of different159 meteorological measurement systems. Compared with traditional methods that generate data with160 different resolutions through physical model simulation, our dataset is of great help for deep models161 to learn real meteorological laws.162
3.3 DATASET ANALYSIS163
In order to help design a more appropriate and effective precipitation downscaling model, we have164 explored the property of the dataset in depth. As mentioned above, our dataset is collected from mul-165 tiple sensor sources (e.g., satellite, weather radar, e.t.c.), which makes the data show a certain extent166 of misalignment. Our efforts here are not able to vanquish the misalignment. This is an intrinsic167
problem brought by the fusion of multi-sensor meteorological data. Limited by observation meth-168 ods (e.g., satellites can only collect data when they fly over the observation area), meteorological169 data is usually temporal sparse, e.g., in our dataset, the sampling interval between two precipitation170 maps is one hour. The temporal sparse leads to serious difficulties in the utilization of precipitation171 sequences. Additionally, the movement of the precipitation position is directly related to the cloud.172 It is a fluid movement process that is completely different from the rigid body movement concerned173 in Super-Resolution. At the same time, the cloud will grow or dissipate in the process of flowing174 and even form new clouds, which further complicates the process. In the nutshell, although existed175 SR is a potential solution for downscaling, there is a big difference between the two. Especially,176 the three characteristics of downscaling mentioned above: temporal misalignment, temporal sparse,177 fluid properties, which make the dynamic estimation of precipitation more challenging.178
4 EVALUATION METRICS179
Due to the difference between downscaling and traditional figure super-resolution, the metrics that180 work well under SR tasks may not be sufficient for precipitation downscaling. By gathering the181 metrics from the meteorologic literature (the literature includes are Zhang & Yang (2004); Maraun182 et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020)),183 we select and rename 6 most common metrics (a metrics may have multiple names in different184 literature) to reflect the downscaling quality: mesoscale peak precipitation error (MPPE), cumulative185 precipitation mean square error (CPMSE), heavy rain region error (HRRE) , cluster mean distance186 (CMD), heavy rain transition speed (HRTS) and average miss moving degree (AMMD).These 6187 metrics can be separated as reconstruction metrics: MPPE, HRRE, CPMSE, AMMD, and dynamic188 metrics: HRTS and CMD.189
The MPPE (mm/hour) is calculated as the difference of top quantile between the generated/real190 rainfall dataset which considering both spatial and temporal property of mesoscale meteorological191 systems, e.g., hurricane, squall. This metric is used in most of these papers (for example Zhang192 & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020);193 Wootten et al. (2020) suggest the quantile analysis to evaluate the downscaling quality).194
The CPMSE (mm2/hour2) measures the cumulative rainfall difference on each pixel over the time-195 axis of the test set, which shows the spatial reconstruction property. Similar metrics are used in196 Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020) calculated as the pixel level197 difference of monthly rainfall and used in He et al. (2016) as a pixel level difference of cumulative198 rainfall with different length of record.199
The HRRE (km2) measures the difference of heavy rain coverage on each time slide between gen-200 erated and labeled test set, which shows the temporal reconstruction ability of the models. The201 AMMD (radian) measures the average angle difference between main rainfall clusters. Similar202 metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020) as rainfall203 coverage of a indefinite number precipitation level and used in He et al. (2016); Pryor & Schoof204 (2020) as a continuous spatial analysis.205
As a single variable dataset, it is hard to evaluate the ability of different models to capture the206 precipitation dynamics when temporal information is not included (a multi-variable dataset may207 have wind speed, a typical variable representing dynamics, included). So here we introduce the208 first-order temporal and spatial variables to evaluate the dynamical property of downscaling results.209 Similar approaches are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020).210 The CMD (km) physically compares the location difference of the main rainfall systems between211 the generated and labeled test set, which could be also understand as the RMSE of the first order212 derivative of precipitation data on spatial directions.The HRTS (km/hour) measures the difference213 between the main rainfall system moving speed between the generated and labeled test set which214 shows the ability for models to capture the dynamic property, which could be also understand as the215 RMSE of the first order derivative of precipitation data on temporal direction.Similar metrics are216 suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020) as the auto-regression217 analysis and the differential analysis.218
More details about the metrics and their equations are given in supplementary materials. One met-219 rics group (MPPE, HRRE, CPMSE, AMMD) mainly measures the rainfall deviation between the220
generated precipitation maps and GT. The other group (HRTS and CMD) mainly measures the221 dynamic deviation of generated precipitation maps. In order to further simplify the application222 of indices, we abstract them into two weighted and summed metrics: Precipitation Error Mea-223 sure (PEM) and Precipitation Dynamics Error Measure (PDEM). We first align the dimensions224 of these two groups of metrics respectively. The first group of metrics (MPPE, HRRE, CPMSE,225 AMMD) is normalized, weighted and summed to get the precipitation error measure (PEM). Ac-226 cording to Gupta et al. (1999), all the metrics are transferred to Percent Bias (PBIAS) to be suit-227 able for metrics weighting. The original definition of PBIAS is the bias divided by observation, as228 PBIAS = |Qmodel − Qobs|/|Qobs|. Here we rewrite the original metrics to PBIAS by dividing229 the metrics with annual mean observations of the original variables (AMO), as PBIASPEMi =230 |MetricsPEMi |/|AMOPEMi |,MetricsPEMi = {MPPE,HRRE,CPMSE,AMMD}. In our231 dataset, AMOPEMMPPE = 64, AMO PEM HRREM = 533, AMO PEM CPMSE = 0.64, AMO PEM AMMD = 332,232
AMOPEMHRTS = 15, AMO PEM CMD = 26. The metrics then are ensembled to a single metric233 (PEM) with equal weight, as PEM = ∑
i 0.25 · PBIASPEMi . Following the same procedure,234 we then ensemble the second group of dynamic metrics (HRTS and CMD) to a single metrics235 PDEM = ∑ i 0.5 · PBIASPDEMi .236
We also include the most common used metrics RMSE as one single metrics in our metrics list.237 RMSE could evaluate both reconstruction and dynamic property of the downscaling result.238
5 APPLICATIONS OF RAINNET IN SPATIAL PRECIPITATION DOWNSCALING239
As a potential solution, Super-Resolution (SR) frameworks are generally divided into the Single-240 Image Super-Resolution (SISR) and the Video Super-Resolution (VSR). Video Super-Resolution is241 able to leverage multi-frame information to restore images, which better matches the nature of down-242 scaling. We will demonstrate this judgment in Sec. 6.1. The VSR pipeline usually contains three243 components: deblurring, inter-frame alignment, and super-resolution. Deblurring and inter-frame244 alignment are implemented by the motion estimation module. There are four motion estimation245 frameworks: 1). RNN based (Keys (1981); Tao et al. (2017); Huang et al. (2015); Haris et al.246 (2019)); 2). Optical Flow (Xue et al. (2019)); 3). Deformable Convolution based (Tian et al. (2020);247 Xiang et al. (2020); Wang et al. (2019)); 4). Temporal Concatenation (Jo et al. (2018); Caballero248 et al. (2017); Liao et al. (2015)). In fact, there is another motion estimation scheme proposed for249 the first time in the noise reduction task (Tassano et al. (2020)), which achieves an excellent video250 noise reduction performance. Inspired by (Tassano et al. (2020)), we design an implicit dynamics251 estimation model for the spatial precipitation downscaling. It is worth mentioning that our proposed252 model and the above four frameworks together form a relatively complete candidate set of dynamic253 estimation solutions.254
Proposed Framework. As shown in Fig. 2, our framework consists of two components: Implicit255 dynamic estimation module and downscaling Backbone. These two parts are trained jointly. Suppose256 there areN adjacent low-resolution precipitation maps {IL
T−N−12 , .., ILT , ..., I L T+N−12 }. The task is to257 reconstruct the high-resolution precipitation map IHT of I L T . The implicit dynamic estimation module258 is composed of multiple vanilla networks A = {A1, ...,AN−2} (N = 5 in this paper) sharing259 weights. Each vanilla network receives three adjacent frames as input, outputs, and intermediate260 results. The intermediate result can be considered as a frame with implicit dynamic alignment. We261
concatenate all the intermediate frames as the input of the next module. The specific structure of262 the vanilla network can be found in the supplementary materials. The main task of the downscaling263 backbone is to restore the high-resolution precipitation map IHT based on the aligned intermediate264 frames. In order to make full use of multi-scale information, we use multiple Residual-in-Residual265 Dense Blocks (Wang et al. (2018)) in the network. We employ the interpolation+convolution (Odena266 et al. (2016)) as the up-sampling operator to reduce the checkerboard artifacts. After processing by267 downscaling backbone we get the final estimated HR map ÎHT .268
Model objective. The downscaling task is essentially to restore high-resolution precipitation maps.269 We learn from the super-resolution task and also applyL1 and perceptual loss (Johnson et al. (2016))270 as the training loss of our model. The model objective is shown below:271
L(̂IHT , IHT ) =‖ ÎHT − IHT ‖1 +λ ‖ φ(̂IHT )− φ(IHT ) ‖2, (1) where φ denotes the pre-trained VGG19 network (Simonyan & Zisserman (2015)), we select the272 Relu5 − 4 (without the activator (Wang et al. (2018))) as the output layer. λ is the coefficient to273 balance the loss terms. λ = 20 in our framework.274
6 EXPERIMENTAL EVALUATION275
We conduct spatial precipitation downscaling experiments to illustrate the application of our276 proposed RainNet and evaluate the effectiveness of the benchmark downscaling frameworks. Fol-277 lowing the mainstream evaluation protocol of DL/ML communities, cross-validation is employed.278 In detail, we divide the dataset into 17 parts (2002.7∼2002.11, 2003.7∼2003.11, 2004.7∼2004.11,279 2005.7∼2005.11, 2006.7∼2006.11, 2007.7∼2007.11, 2008.7∼2008.11, 2009.7∼2009.11,280 2010.7∼2010.11, 2011.7∼2011.11, 2012.7∼2012.11, 2013.7∼2013.11, 2014.7∼2014.11,281 2015.7∼2015.11, 2016.7∼2016.11, 2017.7∼2017.11, 2018.7∼2018.11) by year, and sequentially282 employ each year as the test set and the remaining 16 years as the training set, that is, 17-fold283 cross-validation. All models maintain the same training settings and hyperparameters during the284 training phase. These data cover various complicated precipitation situations such as hurricanes,285 squall lines, different levels of rain, and sunny days. It is sufficient to select the rainy season of286 the year as the test set from the perspective of meteorology, as the climate of one area is normally287 stable.288
6.1 BASELINES289
The SISR/VSR and the spatial precipitation down-290 scaling are similar to some extent, so we argue that291 the SR models can be applied to the task as the292 benchmark models. The input of SISR is a single293 image, and the model infers a high-resolution image294 from it. Its main focus is to generate high-quality295 texture details to achieve pleasing visual effects. In296 contrast, VSR models input multiple frames of im-297 ages (e.g., 3 frames, 5 frames, e.t.c.). In our experi-298 ments, we employ 5 frames. The core idea of VSR299 models is to increase the resolution by complement-300 ing texture information between different frames. It301 is worth mentioning that VSR models generally are302 equipped with a motion estimation module to alle-303 viate the challenge of object motion to inter-frame304 information registration.305
We evaluated 7 state-of-the-art SISR frameworks306 (i.e., Bicubic (Keys (1981)), SRCNN1 (Dong307 et al. (2016)), SRGAN2 (Ledig et al. (2017)),308
1https://github.com/yjn870/SRCNN-pytorch 2https://github.com/leftthomas/SRGAN
EDSR3 (Lim et al. (2017)), ESRGAN4 (Wang et al. (2018)), DBPN5 (Haris et al. (2018)),309 RCAN6 (Zhang et al. (2018)) and 5 VSR frameworks (i.e., SRGAN-V, EDSR-V, ESRGAN-V,310 RBPN7 (Haris et al. (2019)), EDVR8 (Wang et al. (2019)), of which 3 VSR methods (i.e., SRGAN-311 V, EDSR-V, ESRGAN-V) are modified from SISR. In particular, we build SRGAN-V, EDSR-V and312 ESRGAN-V by concatenating multiple frames of precipitation maps as the input of the model. In313 addition, we also evaluated the traditional statistics method Kriging (Stein (2012)), which is widely314 applied in weather forecasting. The mentioned 8 metrics are used to quantitatively evaluate the315 performance of these SR models and our method. Further, we select some disastrous weather as316 samples for qualitative analysis to test the model’s ability to learn the dynamic properties of the317 weather system. And we employ the implementation of Pytorch for Bicubic. We use 4 NVIDIA318 2080 Ti GPUs for training. We train all models with following setting. The batch size is set as 24.319 Precipitation maps are random crop into 64× 64. We employ the Adam optimizer, beta1 is 0.9, and320 beta2 is 0.99. The initial learning rate is 0.001, which is reduced to 1/10 every 50 epochs, and a total321 of 200 epochs are trained. We evaluate benchmark frameworks with 17-fold cross-validation. The322 downscaling performances are shown in Tab. 1. We divide the indicators mentioned above into two323 groups. PDEM measures the model’s ability to learn the dynamics of precipitation. PEM illustrates324 the model’s ability to reconstruct precipitation.325
From Tab. 1, we can learn that the overall performance of the VSR methods is better than that of the SISR models, which shows that the dynamic properties mentioned above are extremely important for the downscaling model. Furthermore, it can be seen from Fig. 3 that the SISR methods are clustered in the upper-right corner of the scatter plot while the VSR methods are concentrated in the lower-left corner, which further shows that the dynamic properties of the VSR methods are overall better than those of the SISR methods. In addition, our method achieves the best performance on RMSE and PDEM, and the second-best performance on PEM. These scores show that the implicit dynamic estimation framework
3 https://github.com/sanghyun-son/EDSR-PyTorch
4 https://github.com/xinntao/ESRGAN
5 https://github.com/alterzero/DBPN-Pytorch
6 https://github.com/yulunzhang/RCAN
7 https://github.com/alterzero/RBPN-PyTorch
8 https://github.com/xinntao/EDVR
we use is feasible and effective. It is worth mentioning that the traditional downscaling method Kriging performs better than many deep learning models (e.g., SRGAN, ESRGAN).
6.1.1 QUALITATIVE ANALYSIS
We visualize the tropical cyclone precipitation map of the 166th hour of September 2010 (September 6th) and the high-resolution precipitation maps generated by the different methods. As shown in Fig. 4, the best perceptual effects are produced by EDVR and our framework. Zooming into the result images, the precipitation maps generated by SRGAN and EDSR present obvious checkerboard artifacts; the likely reason is the relatively simple and sparse texture pattern of precipitation maps. The results generated by Bicubic, RCAN, Kriging, and SRCNN are over-smooth. DBPN even fails to reconstruct the eye of the hurricane. In particular, the result generated by Kriging is as fuzzy as the input LR precipitation map. In conclusion, the visual effects produced by the VSR methods are generally better than those of the SISR methods and the traditional method. From the perspective of both quantitative and qualitative analysis, the dynamics estimation framework is critical for downscaling.
7 CONCLUSION
In this paper, we built the first large-scale real precipitation downscaling dataset for the deep learning community. The dataset contains 62,424 pairs of HR and LR precipitation maps in total. We believe this dataset will further accelerate research on precipitation downscaling. Furthermore, we analyze the problem in depth and put forward three key challenges: temporal misalignment, temporal sparsity, and fluid properties. In addition, we propose an implicit dynamic estimation model to alleviate these challenges. At the same time, we evaluated the mainstream SISR and VSR models and found that none of them solves RainNet's problems well; the downscaling task on this dataset therefore remains very challenging.
Several open problems remain. Currently, the data domain of this research is limited to the eastern U.S.; in future research, we would enlarge the dataset to a larger domain. The dataset also contains only a single variable at present; in the future, we may include more variables, e.g., temperature and wind speed.
REFERENCES
Rico Angell and Daniel R. Sheldon. Inferring latent velocities from weather radar data using Gaussian processes. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 8998–9007, 2018.
Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 2015.
P. Berrisford, D.P. Dee, P. Poli, R. Brugge, Mark Fielding, Manuel Fuentes, P.W. Kållberg, S. Kobayashi, S. Uppala, and Adrian Simmons. The ERA-Interim archive version 2.0. 2011.
Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Philippe Courtier, J-N Thépaut, and Anthony Hollingsworth. A strategy for operational implementation of 4D-Var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 1994.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 2016.
Marie Ekström. Metrics to identify meaningful downscaling skill in WRF simulations of intense rainfall events. Environmental Modelling & Software, 79:267–284, 2016.
Brian Groenke, Luke Madaus, and Claire Monteleoni. ClimAlign: Unsupervised statistical downscaling of climate variables via normalizing flows. CoRR, 2020.
Hoshin Vijai Gupta, Soroosh Sorooshian, and Patrice Ogou Yapo. Status of automatic calibration for hydrologic models: Comparison with multilevel expert calibration. Journal of Hydrologic Engineering, 4(2):135–143, 1999.
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018.
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Recurrent back-projection network for video super-resolution. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE, 2019.
Xiaogang He, Nathaniel W Chaney, Marc Schleiss, and Justin Sheffield. Spatial downscaling of precipitation using adaptable random forests. Water Resources Research, 2016.
Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. In Advances in Neural Information Processing Systems, 2015.
Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, 2016.
Robert Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Renjie Liao, Xin Tao, Ruiyu Li, Ziyang Ma, and Jiaya Jia. Video super-resolution via deep draft-ensemble learning. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. IEEE Computer Society, 2015.
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Douglas Maraun, Martin Widmann, José M Gutiérrez, Sven Kotlarski, Richard E Chandler, Elke Hertig, Joanna Wibig, Radan Huth, and Renate AI Wilcke. VALUE: A framework to validate downscaling approaches for climate change studies. Earth's Future, 3(1):1–14, 2015.
Brian R. Nelson, Olivier P. Prat, D.-J. Seo, and Emad Habib. Assessment and implications of NCEP Stage IV quantitative precipitation estimates for product intercomparisons. Weather and Forecasting, 2016.
NSF. NSF Geosciences Directorate funding by institution type. AGI Report, 2020.
Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
SC Pryor and JT Schoof. Differential credibility assessment for statistical downscaling. Journal of Applied Meteorology and Climatology, 59(8):1333–1349, 2020.
Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skillful precipitation nowcasting using deep generative models of radar. Nature, 2021.
Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, et al. Deep learning and process understanding for data-driven earth system science. Nature, 2019.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
Michael L Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 2012.
Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2017.
Matias Tassano, Julie Delon, and Thomas Veit. FastDVDnet: Towards real-time deep video denoising without flow estimation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. TDAN: Temporally-deformable alignment network for video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Mark S. Veillette, Siddharth Samsi, and Christopher J. Mattioli. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
Daniel Walton, Neil Berg, David Pierce, Ed Maurer, Alex Hall, Yen-Heng Lin, Stefan Rahimi, and Dan Cayan. Understanding differences in California climate projections produced by dynamical and statistical downscaling. Journal of Geophysical Research: Atmospheres, 2020.
Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
BL White, A Singh, and A Albert. Downscaling numerical weather models with GANs. AGUFM, 2019.
Adrienne M Wootten, Elias C Massoud, Agniv Sengupta, Duane E Waliser, and Huikyo Lee. The effect of statistical downscaling on the weighting of multi-model ensembles of precipitation. Climate, 8(12):138, 2020.
Youlong Xia, Kenneth Mitchell, Michael Ek, Justin Sheffield, Brian Cosgrove, Eric Wood, Lifeng Luo, Charles Alonge, Helin Wei, Jesse Meng, Ben Livneh, Dennis Lettenmaier, Victor Koren, Qingyun Duan, Kingtse Mo, Yun Fan, and David Mocko. Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. Journal of Geophysical Research: Atmospheres, 2012.
Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, and Chenliang Xu. Zooming Slow-Mo: Fast and accurate one-stage space-time video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T. Freeman. Video enhancement with task-oriented flow. Int. J. Comput. Vis., 2019.
Xuebin Zhang and Feng Yang. RClimDex (1.0) user manual. Climate Research Branch, Environment Canada, 22, 2004.
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, Lecture Notes in Computer Science. Springer, 2018.

1. What is the focus of the paper, and what is its contribution to the field of AI-driven meteorology?
2. What is the significance of the dataset presented in the paper, and how does it address the need for large labeled and trustworthy datasets in weather event forecasting?
3. What are the strengths of the paper, particularly in terms of its evaluation of different models on the dataset?
4. Are there any concerns regarding the appropriateness of the paper's publication in ICLR, or would a more meteorology-focused journal be a better venue for this work?

Summary Of The Paper

This paper presents a 0.4° dataset for precipitation downscaling. It further shows 14 SOTA models with/without ML for evaluation on the dataset.
Review
There is undoubtedly a need for large labeled and trustworthy datasets for segmentation, detection, and tracking of large-scale weather events, including extreme weather events. From that perspective, this paper is a good contribution to the field of AI-driven meteorology, where such datasets are in abundant need. The evaluation of the different models on this dataset provides a good benchmark as well. This is well written and deserves to be published, although I'm unsure whether it should be in ICLR. I feel that a more meteorology-focused journal is a better venue for this work.
Title
RainNet: A Large-Scale Imagery Dataset for Spatial Precipitation Downscaling
Abstract
Contemporary deep learning frameworks have been applied to solve meteorological problems (e.g., front detection, synthetic radar generation, precipitation nowcasting, etc.) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problems. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advanced deep-learning models for precipitation downscaling. To alleviate these obstacles, we present the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps spanning over 17 years, ready to help the evolution of deep models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (e.g., hurricanes, squalls, etc.), which is of great help in improving model generalization. In addition, the map pairs in RainNet are organized as image sequences (720 maps per month, or 1 map per hour), exhibiting complex physical properties, e.g., temporal misalignment, temporal sparsity, and fluid properties. Two machine-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of trained models (e.g., the reconstruction accuracy of predicted maps). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation framework to learn the above characteristics. Extensive experiments demonstrate the value of RainNet in training and evaluating downscaling models.
1 INTRODUCTION
Deep learning has made an enormous breakthrough in the field of computer vision, and it is extremely good at extracting valuable knowledge from vast amounts of data. In recent years, with the development of computer science, a deluge of Earth system data is continuously being obtained, coming from sensors all over the Earth and even in space. These ever-increasing massive amounts of data with different sources and structures challenge the geoscience community, which lacks practical approaches to understand and further utilize the raw data (Reichstein et al. (2019)). Specifically, several preliminary works (Groenke et al. (2020); White et al. (2019); He et al. (2016); Ravuri et al. (2021); Angell & Sheldon (2018); Veillette et al. (2020)) try to introduce machine learning and deep learning frameworks to solve meteorological problems, e.g., spatial precipitation downscaling.
In this paper, we focus on the spatial precipitation downscaling task. Spatial precipitation downscaling is a procedure to infer high-resolution meteorological information from low-resolution variables, and it is one of the most important upstream components of meteorological tasks (Bauer et al. (2015)). The precision of weather and climate prediction is highly dependent on the resolution and reliability of the initial environmental input variables, and spatial precipitation downscaling is the most promising solution. Improving weather/climate forecasts and Geo-data quality saves tremendous money and lives; with a fiscal year 2020 budget of over $1 billion, NSF funds thousands of colleges in the U.S. to do research on these topics (NSF (2020)).
Unfortunately, looming issues hinder the research of spatial precipitation downscaling in the machine learning community. 1) Lack of "machine-learning ready" datasets. The existing machine-learning-based downscaling methods are only applied to ideal retrospective problems and verified on simulated datasets (e.g., mapping a bicubic downsampling of precipitation generated by a weather forecast model back to the original data (Berrisford et al. (2011))), which significantly weakens the credibility of the feasibility, practicability, and effectiveness of the methods. It is worth mentioning that data obtained by simulated degradation (e.g., bicubic) is completely different from real data, which is usually collected by two measurement systems (e.g., satellite and radar) with different precision. The lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and complex deep-learning models for precipitation downscaling. 2) Lack of tailored metrics to evaluate machine-learning-based frameworks. Unlike the deep learning (DL) and machine learning (ML) communities, scientists in meteorology usually employ maps/charts to assess downscaling models case by case based on domain knowledge (He et al. (2016); Walton et al. (2020)), which hinders the application of RainNet in the DL/ML communities. For example, He et al. (2016) use log-semivariance (a spatial metric for local precipitation) and quantile-quantile maps to analyze the maps. 3) An efficient downscaling deep-learning framework should be established. Contrary to image data, this real precipitation dataset covers various types of real meteorological phenomena (e.g., hurricanes, squalls, etc.) and exhibits physical characteristics (e.g., temporal misalignment, temporal sparsity, fluid properties, etc.) that challenge downscaling algorithms. Traditional, computationally dense, physics-driven downscaling methods cannot handle the increasing size of meteorological data and are not flexible with respect to multiple data sources.
To alleviate these obstacles, we propose the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps spanning over 17 years, ready to help the evolution of deep models in spatial precipitation downscaling. The proposed dataset covers more than 9 million square kilometers of land area, contains both wet and dry seasons, and includes diverse meteorological phenomena. To facilitate the use of RainNet by DL/ML and other researchers, we introduce the 6 most relevant indices to evaluate downscaling models: mesoscale peak precipitation error (MPPE), heavy rain region error (HRRE), cumulative precipitation mean square error (CPMSE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). To further simplify the application of these indices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). Unlike video super-resolution, the motion of the precipitation region is non-rigid (i.e., fluid), while video super-resolution mainly concerns rigid-body motion estimation. To fully explore how to alleviate the mentioned predicament, we propose an implicit dynamics estimation driven downscaling deep learning model. Our model hierarchically aligns adjacent precipitation maps, that is, performs implicit motion estimation, which is very simple but exhibits highly competitive performance. Based on meteorological science, we also show that the dataset we construct contains the full information needed to recover higher-resolution observations from lower-resolution ones.
The main contributions of this paper are:
• To the best of our knowledge, we present the first REAL (non-simulated) large-scale spatial precipitation downscaling dataset for deep learning;
• We introduce 2 simple metrics to evaluate downscaling models;
• We propose a downscaling model with strong competitiveness. We evaluate 14 competitive potential solutions on the proposed dataset, and analyze the feasibility and effectiveness of these solutions.
2 BACKGROUND
At the beginning of the 19th century, geoscientists recognized that predicting the state of the atmosphere could be treated as an initial value problem of mathematical physics, wherein future weather is determined by integrating the governing partial differential equations, starting from the observed current weather. Today, this paradigm translates into solving a system of nonlinear differential equations at about half a billion points per time step and accounting for dynamic, thermodynamic, radiative, and chemical processes working on scales from hundreds of meters to thousands of kilometers and from seconds to weeks (Bauer et al. (2015)). The Navier–Stokes and mass continuity equations (including the effect of the Earth's rotation), together with the first law of thermodynamics and the ideal gas law, represent the full set of prognostic equations in the atmosphere, describing the change in space and time of wind, pressure, density, and temperature (formulas are given in the supplementary material) (Bauer et al. (2015)). These equations have to be solved numerically using spatial and temporal discretization because of the mathematical intractability of obtaining analytical solutions, and this approximation creates a distinction between so-called resolved and unresolved scales of motion.
2.1 SPATIAL DOWNSCALING OF PRECIPITATION
The global weather forecast model, treated as a computational problem, relies on high-quality initial data input. The error of a weather forecast grows exponentially over time from the initial error of the input dataset. Downscaling is one of the most important approaches to improving the quality of the initial input. Precipitation is one of the essential atmospheric variables related to daily life. It can easily be observed by all means, e.g., gauge stations, radar, and satellites. Applying downscaling methods to precipitation and creating high-resolution rainfall is far more meaningful than deriving other variables, and it is the most appropriate initial task to test deep learning's power in geoscience. Traditional downscaling methods can be separated into dynamic and statistical downscaling.
Dynamic downscaling treats the downscaling as an optimization problem constrained by physical laws. Dynamic downscaling methods find the most likely precipitation over space and time under pre-defined physical laws. It usually takes over 6 hours to downscale a 6-hour precipitation scenario globally on supercomputers (Courtier et al. (1994)). As dynamic downscaling relies on pre-defined macroscopic physics, a more flexible weather downscaling framework that could easily blend different sources of observations and describe more complex physical phenomena at different scales is desperately needed.
Statistical downscaling tries to speed up the dynamic downscaling process. The input of statistical downscaling is usually dynamic model results or two observation datasets at different scales. However, due to the quality of statistical downscaling results, people rarely apply statistical downscaling to weather forecasting. These methods are currently applied in tasks that do not require high data quality but rather a qualitative understanding, e.g., climate projection, which forecasts the weather for hundreds of years on coarse grids and uses statistical downscaling to obtain detailed knowledge of the medium-scale future climate system.
3 RAINNET: SPATIAL PRECIPITATION DOWNSCALING IMAGERY DATASET
3.1 DATA COLLECTION AND PROCESSING
To build a standard, realistic (non-simulated) downscaling dataset for computer vision, we selected the eastern coast of the United States, which covers a large region (7 million km²; 105° ∼ 65°W, 25° ∼ 50°N; GNU Free Documentation License 1.2) and has 20 years of high-quality precipitation observations. We collected two precipitation data sources: the National Stage IV QPE Product (StageIV (Nelson et al. (2016)); high resolution at 0.04° (approximately 4 km); GNU Free Documentation License 1.2) and the North American Land Data Assimilation System (NLDAS (Xia et al. (2012)); low resolution at 0.125° (approximately 13 km)). StageIV is mosaicked into a national product at the National Centers for Environmental Prediction (NCEP) from the regional hourly/6-hourly multi-sensor (radar + gauges) precipitation analyses (MPEs) produced by the 12 River Forecast Centers (RFCs) over the continental United States, with some manual quality control done at the RFCs. NLDAS is constructed as quality-controlled, spatially-and-temporally consistent datasets from gauges and remote sensors to support modeling activities. Both products are updated hourly, and both are available from 2002 to the present.
In our dataset, we further selected the eastern coast region during the rainy season (July ∼ November, covering the hurricane season; hurricanes pour more than 10% of annual rainfall in fewer than 10 days). We matched the coordinate systems of both products to the lat-lon system and further labeled all the hurricane periods of the last 17 years. These heavy rain events are the largest challenge for weather forecasting and downscaling products, as heavy rain can stimulate a wide-spreading flood, threatening local lives and requiring public evacuation. If people underestimate the rainfall, a potential flood would be underrated, while overestimating the rainfall would lead to unnecessary evacuation orders and flood protection, which is also costly.
3.2 DATASET STATISTICS
At the time of this work, we have collected and processed precipitation data for the rainy seasons of 17 years, from 2002 to 2018, with one precipitation map pair per hour, i.e., 24 precipitation map pairs per day. In total, we have collected 85 months, or 62,424 hours, amounting to 62,424 pairs of high-resolution and low-resolution precipitation maps. The size of a high-resolution precipitation map is 624 × 999, and the size of a low-resolution map is 208 × 333. Various meteorological phenomena and precipitation conditions (e.g., hurricanes, squall lines, etc.) are covered by these data. The precipitation map pairs in RainNet are stored in HDF5 files that take up 360 GB of disk space. We select 2 typical meteorological phenomena and visualize them in Fig. 1. Our data is collected from satellites, radars, gauge stations, etc., and thus reflects the inherent working characteristics of different meteorological measurement systems. Compared with traditional approaches that generate data of different resolutions through physical model simulation, our dataset is of great help for deep models to learn real meteorological laws.
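A minimal loading sketch for the HDF5 files is given below; the file name and dataset keys are hypothetical, since the paper specifies only the storage format and the map sizes.

```python
import h5py
import numpy as np

# Hypothetical file and key names; the paper only states HDF5 storage.
with h5py.File("rainnet_2010_09.h5", "r") as f:
    lr = np.asarray(f["lr"])  # expected shape: (hours, 208, 333)
    hr = np.asarray(f["hr"])  # expected shape: (hours, 624, 999)

assert lr.shape[0] == hr.shape[0]  # one LR/HR pair per hour
```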
3.3 DATASET ANALYSIS
To help design a more appropriate and effective precipitation downscaling model, we explored the properties of the dataset in depth. As mentioned above, our dataset is collected from multiple sensor sources (e.g., satellite, weather radar, etc.), which makes the data exhibit a certain extent of misalignment. Our efforts here cannot eliminate this misalignment; it is an intrinsic problem brought by the fusion of multi-sensor meteorological data. Limited by the observation methods (e.g., satellites can only collect data when they fly over the observation area), meteorological data is usually temporally sparse; in our dataset, the sampling interval between two precipitation maps is one hour. This temporal sparsity makes the utilization of precipitation sequences seriously difficult. Additionally, the movement of the precipitation region is directly related to clouds. It is a fluid movement process that is completely different from the rigid-body movements addressed in super-resolution. At the same time, a cloud may grow or dissipate while flowing, and new clouds may even form, which further complicates the process. In a nutshell, although existing SR is a potential solution for downscaling, there are large differences between the two tasks, especially the three characteristics of downscaling mentioned above (temporal misalignment, temporal sparsity, and fluid properties), which make the dynamic estimation of precipitation more challenging.
4 EVALUATION METRICS
Due to the differences between downscaling and traditional image super-resolution, metrics that work well for SR tasks may not be sufficient for precipitation downscaling. By gathering metrics from the meteorological literature (Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020)), we select and rename the 6 most common metrics (a metric may have multiple names in different works) to reflect downscaling quality: mesoscale peak precipitation error (MPPE), cumulative precipitation mean square error (CPMSE), heavy rain region error (HRRE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). These 6 metrics can be separated into reconstruction metrics (MPPE, HRRE, CPMSE, AMMD) and dynamic metrics (HRTS and CMD).
The MPPE (mm/hour) is calculated as the difference of top quantiles between the generated and real rainfall datasets, considering both the spatial and temporal properties of mesoscale meteorological systems, e.g., hurricanes and squalls. This metric is used in most of the cited papers (for example, Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020) suggest quantile analysis to evaluate downscaling quality).
The CPMSE (mm²/hour²) measures the cumulative rainfall difference on each pixel over the time axis of the test set, which reflects the spatial reconstruction quality. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020), calculated as the pixel-level difference of monthly rainfall, and in He et al. (2016) as the pixel-level difference of cumulative rainfall over records of different lengths.
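A hedged reading of CPMSE is sketched below as the per-pixel MSE of the time-averaged rain rate, which is consistent with the stated mm²/hour² units; the exact equation is given in the paper's supplementary material.

```python
import numpy as np

def cpmse(pred: np.ndarray, truth: np.ndarray) -> float:
    """One plausible reading of CPMSE: per-pixel MSE of the time-averaged
    rain rate. `pred` and `truth` are (T, H, W) arrays in mm/hour; this is
    an interpretation, not the paper's exact definition."""
    return float(np.mean((pred.mean(axis=0) - truth.mean(axis=0)) ** 2))
```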
The HRRE (km²) measures the difference in heavy rain coverage on each time slice between the generated and labeled test sets, which reflects the temporal reconstruction ability of the models. The AMMD (radian) measures the average angle difference between the main rainfall clusters. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020) as the rainfall coverage of an indefinite number of precipitation levels, and in He et al. (2016); Pryor & Schoof (2020) as a continuous spatial analysis.
As a single-variable dataset, it is hard to evaluate the ability of different models to capture precipitation dynamics when temporal information is not included (a multi-variable dataset may include wind speed, a typical variable representing dynamics). We therefore introduce first-order temporal and spatial variables to evaluate the dynamical properties of downscaling results; similar approaches are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020). The CMD (km) physically compares the location difference of the main rainfall systems between the generated and labeled test sets, which can also be understood as the RMSE of the first-order derivative of the precipitation data along the spatial directions. The HRTS (km/hour) measures the difference in the moving speed of the main rainfall system between the generated and labeled test sets, which reflects a model's ability to capture dynamic properties; it can also be understood as the RMSE of the first-order derivative of the precipitation data along the temporal direction. Similar metrics are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020) in the form of auto-regression analysis and differential analysis.
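Under the "RMSE of a first-order derivative" reading quoted above, both dynamic metrics reduce to the following sketch; this is a simplified interpretation, not the exact CMD/HRTS definitions from the supplementary material.

```python
import numpy as np

def derivative_rmse(pred: np.ndarray, truth: np.ndarray, axis: int) -> float:
    """RMSE of first-order finite differences along one axis of (T, H, W)
    precipitation arrays. With axis=0 this is the temporal reading of HRTS;
    with axis=1 or axis=2 it is the spatial reading of CMD."""
    d_pred = np.diff(pred, axis=axis)
    d_truth = np.diff(truth, axis=axis)
    return float(np.sqrt(np.mean((d_pred - d_truth) ** 2)))
```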
More details about the metrics and their equations are given in the supplementary materials. One group of metrics (MPPE, HRRE, CPMSE, AMMD) mainly measures the rainfall deviation between the generated precipitation maps and the ground truth, while the other group (HRTS and CMD) mainly measures the dynamic deviation of the generated precipitation maps. To further simplify the application of the indices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). We first align the dimensions of the two groups of metrics. The first group (MPPE, HRRE, CPMSE, AMMD) is normalized, weighted, and summed to obtain the precipitation error measure (PEM). Following Gupta et al. (1999), all metrics are transferred to Percent Bias (PBIAS) to be suitable for metric weighting. The original definition of PBIAS is the bias divided by the observation, $\mathrm{PBIAS} = |Q_{\mathrm{model}} - Q_{\mathrm{obs}}| / |Q_{\mathrm{obs}}|$. Here we rewrite the original metrics as PBIAS by dividing each metric by the annual mean observation (AMO) of the corresponding variable:
$$\mathrm{PBIAS}^{\mathrm{PEM}}_{i} = \frac{|\mathrm{Metrics}^{\mathrm{PEM}}_{i}|}{|\mathrm{AMO}^{\mathrm{PEM}}_{i}|}, \qquad \mathrm{Metrics}^{\mathrm{PEM}}_{i} \in \{\mathrm{MPPE}, \mathrm{HRRE}, \mathrm{CPMSE}, \mathrm{AMMD}\}.$$
In our dataset, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{MPPE}} = 64$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{HRRE}} = 533$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{CPMSE}} = 0.64$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{AMMD}} = 332$, $\mathrm{AMO}^{\mathrm{PDEM}}_{\mathrm{HRTS}} = 15$, and $\mathrm{AMO}^{\mathrm{PDEM}}_{\mathrm{CMD}} = 26$. The first group of metrics is then ensembled into a single metric (PEM) with equal weights, $\mathrm{PEM} = \sum_{i} 0.25 \cdot \mathrm{PBIAS}^{\mathrm{PEM}}_{i}$. Following the same procedure, we ensemble the second group of dynamic metrics (HRTS and CMD) into a single metric, $\mathrm{PDEM} = \sum_{i} 0.5 \cdot \mathrm{PBIAS}^{\mathrm{PDEM}}_{i}$.
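Since the PBIAS normalizers and weights are fully specified above, the aggregation can be written directly; the dictionary layout below is our own convenience.

```python
# Annual mean observations (AMO) from the paper, used as PBIAS denominators.
AMO = {"MPPE": 64.0, "HRRE": 533.0, "CPMSE": 0.64, "AMMD": 332.0,
       "HRTS": 15.0, "CMD": 26.0}

def pem_pdem(metrics: dict) -> tuple:
    """Aggregate raw metric values (e.g. {"MPPE": 12.3, ...}) into
    (PEM, PDEM) exactly as defined in the text."""
    pbias = {k: abs(v) / abs(AMO[k]) for k, v in metrics.items()}
    pem = 0.25 * sum(pbias[k] for k in ("MPPE", "HRRE", "CPMSE", "AMMD"))
    pdem = 0.5 * sum(pbias[k] for k in ("HRTS", "CMD"))
    return pem, pdem
```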
We also include the most commonly used metric, RMSE, as a single metric in our list; RMSE evaluates both the reconstruction and the dynamic properties of the downscaling result.
5 APPLICATIONS OF RAINNET IN SPATIAL PRECIPITATION DOWNSCALING
As a potential solution, Super-Resolution (SR) frameworks are generally divided into Single-Image Super-Resolution (SISR) and Video Super-Resolution (VSR). Video super-resolution is able to leverage multi-frame information to restore images, which better matches the nature of downscaling; we demonstrate this judgment in Sec. 6.1. The VSR pipeline usually contains three components: deblurring, inter-frame alignment, and super-resolution, where deblurring and inter-frame alignment are implemented by a motion estimation module. There are four motion estimation frameworks: 1) RNN-based (Keys (1981); Tao et al. (2017); Huang et al. (2015); Haris et al. (2019)); 2) optical flow (Xue et al. (2019)); 3) deformable-convolution-based (Tian et al. (2020); Xiang et al. (2020); Wang et al. (2019)); 4) temporal concatenation (Jo et al. (2018); Caballero et al. (2017); Liao et al. (2015)). In fact, there is another motion estimation scheme, first proposed for the denoising task (Tassano et al. (2020)), which achieves excellent video denoising performance. Inspired by Tassano et al. (2020), we design an implicit dynamics estimation model for spatial precipitation downscaling. It is worth mentioning that our proposed model and the above four frameworks together form a relatively complete candidate set of dynamics estimation solutions.
Proposed Framework. As shown in Fig. 2, our framework consists of two components, an implicit dynamics estimation module and a downscaling backbone, which are trained jointly. Suppose there are $N$ adjacent low-resolution precipitation maps $\{I^L_{T-\frac{N-1}{2}}, \ldots, I^L_T, \ldots, I^L_{T+\frac{N-1}{2}}\}$. The task is to reconstruct the high-resolution precipitation map $I^H_T$ corresponding to $I^L_T$. The implicit dynamics estimation module is composed of multiple vanilla networks $\mathcal{A} = \{\mathcal{A}_1, \ldots, \mathcal{A}_{N-2}\}$ ($N = 5$ in this paper) sharing weights. Each vanilla network receives three adjacent frames as input and outputs an intermediate result, which can be considered a frame with implicit dynamic alignment. We concatenate all the intermediate frames as the input of the next module. The specific structure of the vanilla network can be found in the supplementary material. The main task of the downscaling backbone is to restore the high-resolution precipitation map $I^H_T$ from the aligned intermediate frames. In order to make full use of multi-scale information, we use multiple Residual-in-Residual Dense Blocks (Wang et al. (2018)) in the network. We employ interpolation + convolution (Odena et al. (2016)) as the up-sampling operator to reduce checkerboard artifacts. After processing by the downscaling backbone, we obtain the final estimated HR map $\hat{I}^H_T$.
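A compact sketch of this two-stage design follows; the channel widths and the plain convolutional stand-in for the RRDB backbone are assumptions, as the exact vanilla-network structure is deferred to the supplementary material.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VanillaBlock(nn.Module):
    """Stand-in for the paper's 'vanilla network': maps 3 adjacent LR frames
    to one implicitly aligned map. Channel widths are assumptions."""
    def __init__(self, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, 1, 3, padding=1))

    def forward(self, frames):          # frames: (B, 3, H, W)
        return self.body(frames)        # (B, 1, H, W)

class ImplicitDownscaler(nn.Module):
    """Shared-weight alignment over the three 3-frame windows of a 5-frame
    input, then a simplified conv backbone with interpolation + conv
    upsampling (the anti-checkerboard choice mentioned in the text)."""
    def __init__(self, scale=3, feat=64):
        super().__init__()
        self.scale = scale
        self.align = VanillaBlock(feat)  # one network, weights shared
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.tail = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, x):               # x: (B, 5, H, W), 5 LR frames
        windows = [x[:, i:i + 3] for i in range(3)]  # T-2..T, T-1..T+1, T..T+2
        aligned = torch.cat([self.align(w) for w in windows], dim=1)
        feat = self.backbone(aligned)
        feat = F.interpolate(feat, scale_factor=self.scale, mode="bilinear",
                             align_corners=False)
        return self.tail(feat)
```

Note that the 3× scale factor matches the dataset exactly: 208 × 333 LR maps upsample to the 624 × 999 HR grid.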
Model objective. The downscaling task is essentially to restore high-resolution precipitation maps. Following super-resolution practice, we apply the $L_1$ and perceptual losses (Johnson et al. (2016)) as the training loss of our model:
$$\mathcal{L}(\hat{I}^H_T, I^H_T) = \| \hat{I}^H_T - I^H_T \|_1 + \lambda \| \phi(\hat{I}^H_T) - \phi(I^H_T) \|_2, \quad (1)$$
where $\phi$ denotes the pre-trained VGG19 network (Simonyan & Zisserman (2015)); we select Relu5-4 (without the activation (Wang et al. (2018))) as the output layer. $\lambda$ is a coefficient balancing the loss terms; $\lambda = 20$ in our framework.
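A sketch of this objective in PyTorch follows; replicating the single precipitation channel to three channels before VGG19, and the exact feature index for relu5-4 without activation, are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class DownscalingLoss(nn.Module):
    """L1 + perceptual loss as in Eq. (1)."""
    def __init__(self, lam=20.0):
        super().__init__()
        # features[:35] stops right before the last ReLU of VGG19, i.e. the
        # conv5_4 output ("relu5-4 without the activation").
        self.vgg = vgg19(pretrained=True).features[:35].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.lam = lam

    def forward(self, pred, target):    # pred, target: (B, 1, H, W)
        l1 = (pred - target).abs().mean()
        # Precipitation maps are single-channel; repeat to 3 channels.
        f_pred = self.vgg(pred.repeat(1, 3, 1, 1))
        f_tgt = self.vgg(target.repeat(1, 3, 1, 1))
        perc = torch.norm(f_pred - f_tgt, p=2)
        return l1 + self.lam * perc
```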
6 EXPERIMENTAL EVALUATION275
We conduct spatial precipitation downscaling experiments to illustrate the application of our276 proposed RainNet and evaluate the effectiveness of the benchmark downscaling frameworks. Fol-277 lowing the mainstream evaluation protocol of DL/ML communities, cross-validation is employed.278 In detail, we divide the dataset into 17 parts (2002.7∼2002.11, 2003.7∼2003.11, 2004.7∼2004.11,279 2005.7∼2005.11, 2006.7∼2006.11, 2007.7∼2007.11, 2008.7∼2008.11, 2009.7∼2009.11,280 2010.7∼2010.11, 2011.7∼2011.11, 2012.7∼2012.11, 2013.7∼2013.11, 2014.7∼2014.11,281 2015.7∼2015.11, 2016.7∼2016.11, 2017.7∼2017.11, 2018.7∼2018.11) by year, and sequentially282 employ each year as the test set and the remaining 16 years as the training set, that is, 17-fold283 cross-validation. All models maintain the same training settings and hyperparameters during the284 training phase. These data cover various complicated precipitation situations such as hurricanes,285 squall lines, different levels of rain, and sunny days. It is sufficient to select the rainy season of286 the year as the test set from the perspective of meteorology, as the climate of one area is normally287 stable.288
6.1 BASELINES289
The SISR/VSR and the spatial precipitation down-290 scaling are similar to some extent, so we argue that291 the SR models can be applied to the task as the292 benchmark models. The input of SISR is a single293 image, and the model infers a high-resolution image294 from it. Its main focus is to generate high-quality295 texture details to achieve pleasing visual effects. In296 contrast, VSR models input multiple frames of im-297 ages (e.g., 3 frames, 5 frames, e.t.c.). In our experi-298 ments, we employ 5 frames. The core idea of VSR299 models is to increase the resolution by complement-300 ing texture information between different frames. It301 is worth mentioning that VSR models generally are302 equipped with a motion estimation module to alle-303 viate the challenge of object motion to inter-frame304 information registration.305
We evaluated 7 state-of-the-art SISR frameworks306 (i.e., Bicubic (Keys (1981)), SRCNN1 (Dong307 et al. (2016)), SRGAN2 (Ledig et al. (2017)),308
1https://github.com/yjn870/SRCNN-pytorch 2https://github.com/leftthomas/SRGAN
EDSR3 (Lim et al. (2017)), ESRGAN4 (Wang et al. (2018)), DBPN5 (Haris et al. (2018)),309 RCAN6 (Zhang et al. (2018)) and 5 VSR frameworks (i.e., SRGAN-V, EDSR-V, ESRGAN-V,310 RBPN7 (Haris et al. (2019)), EDVR8 (Wang et al. (2019)), of which 3 VSR methods (i.e., SRGAN-311 V, EDSR-V, ESRGAN-V) are modified from SISR. In particular, we build SRGAN-V, EDSR-V and312 ESRGAN-V by concatenating multiple frames of precipitation maps as the input of the model. In313 addition, we also evaluated the traditional statistics method Kriging (Stein (2012)), which is widely314 applied in weather forecasting. The mentioned 8 metrics are used to quantitatively evaluate the315 performance of these SR models and our method. Further, we select some disastrous weather as316 samples for qualitative analysis to test the model’s ability to learn the dynamic properties of the317 weather system. And we employ the implementation of Pytorch for Bicubic. We use 4 NVIDIA318 2080 Ti GPUs for training. We train all models with following setting. The batch size is set as 24.319 Precipitation maps are random crop into 64× 64. We employ the Adam optimizer, beta1 is 0.9, and320 beta2 is 0.99. The initial learning rate is 0.001, which is reduced to 1/10 every 50 epochs, and a total321 of 200 epochs are trained. We evaluate benchmark frameworks with 17-fold cross-validation. The322 downscaling performances are shown in Tab. 1. We divide the indicators mentioned above into two323 groups. PDEM measures the model’s ability to learn the dynamics of precipitation. PEM illustrates324 the model’s ability to reconstruct precipitation.325
From Tab. 1, we can learn that the overall performance of the VSR methods are better than SISR326 models, which shows that the dynamic properties mentioned above are extremely important for the327 downscaling model. Furthermore, it can be seen from Fig. 3 that the SISR method is clustered in the328 upper right corner of the scatter plot, and the VSR method is concentrated in the lower-left corner,329 which further shows that the dynamic properties of the VSR methods are overall better than the SISR330 methods. In addition, our method achieves the 1st best performance in RMSE, PDE, and achieve the331 second-best performance on PEM. The score shows that the implicit dynamic estimation framework332
3https://github.com/sanghyun-son/EDSR-PyTorch 4https://github.com/xinntao/ESRGAN 5https://github.com/alterzero/DBPN-Pytorch 6https://github.com/yulunzhang/RCAN 7https://github.com/alterzero/RBPN-PyTorch 8https://github.com/xinntao/EDVR
used is feasible and effective. It is worth mentioning that the traditional downscaling method Kriging333 performs better than many deep learning models (e.g., SRGAN, ESRGAN)334
6.1.1 QUALITATIVE ANALYSIS335
We visualized the tropical cyclone precipitation map of the 166th hour (6th) in September 2010336 and the high-resolution precipitation map generated by different methods. As shown in Fig. 4, the337 best perceptual effects are generated by EDVR and Our framework. Zooming in the result image,338 the precipitation maps generated by SRGAN and EDSR present obvious checkerboard artifacts.339 The reason for the checkerboard artifacts should be the relatively simple and sparse texture pattern340 in precipitation maps. The results generated by Bicubic, RCAN, Kriging, and SRCNN are over-341 smooth. DBPN even cannot reconstruct the eye of the hurricane. Especially, the result generated by342 Kriging is as fuzzy as the input LR precipitation map. In conclusion, the visual effects generated343 by the VSR methods are generally better than the SISR methods and the traditional method. From344 the perspective of quantitative and qualitative analysis, the dynamics estimation framework is very345 critical for downscaling.346
7 CONCLUSION347
In this paper, we built the first large-scale real precipitation downscaling dataset for the deep learning348 community. This dataset has 62424 pairs of HR and LR precipitation maps in total. We believe this349 dataset will further accelerate the research on precipitation downscaling. Furthermore, we analyze350 the problem in-depth and put forward three key challenges: temporal misalignment, temporal sparse,351 fluid properties. In addition, we propose an implicit dynamic estimation model to alleviate the above352 challenges. At the same time, we evaluated the mainstream SISR and VSR models and found that353 none of these models can solve RainNet’s problems well. Therefore, the downscaling task on this354 dataset is still very challenging.355
This work still remains several open problems. Currently, the data domain of this research is limited356 to the eastern U.S. In future research, we would enlarge the dataset to a larger domain. The dataset357 is only a single variable now. In future research, we may include more variables, e.g. temperature358 and wind speed.359
REFERENCES360
Rico Angell and Daniel R. Sheldon. Inferring latent velocities from weather radar data using gaus-361 sian processes. In Advances in Neural Information Processing Systems 31: Annual Conference362 on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal,363 Canada, pp. 8998–9007, 2018.364
Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction.365 Nature, 2015.366
P. Berrisford, D.P. Dee, P. Poli, R. Brugge, Mark Fielding, Manuel Fuentes, P.W. Kållberg,367 S. Kobayashi, S. Uppala, and Adrian Simmons. The era-interim archive version 2.0. 2011.368
Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang,369 and Wenzhe Shi. Real-time video super-resolution with spatio-temporal networks and motion370 compensation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR371 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.372
PHILIPPE Courtier, J-N Thépaut, and Anthony Hollingsworth. A strategy for operational imple-373 mentation of 4d-var, using an incremental approach. Quarterly Journal of the Royal Meteorolog-374 ical Society, 1994.375
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep376 convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 2016.377
Marie Ekström. Metrics to identify meaningful downscaling skill in wrf simulations of intense378 rainfall events. Environmental Modelling & Software, 79:267–284, 2016.379
Brian Groenke, Luke Madaus, and Claire Monteleoni. Climalign: Unsupervised statistical down-380 scaling of climate variables via normalizing flows. CoRR, 2020.381
Hoshin Vijai Gupta, Soroosh Sorooshian, and Patrice Ogou Yapo. Status of automatic calibration382 for hydrologic models: Comparison with multilevel expert calibration. Journal of hydrologic383 engineering, 4(2):135–143, 1999.384
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Deep back-projection networks385 for super-resolution. In 2018 IEEE Conference on Computer Vision and Pattern Recognition,386 CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018.387
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Recurrent back-projection net-388 work for video super-resolution. In IEEE Conference on Computer Vision and Pattern Recogni-389 tion, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE,390 2019.391
Xiaogang He, Nathaniel W Chaney, Marc Schleiss, and Justin Sheffield. Spatial downscaling of392 precipitation using adaptable random forests. Water resources research, 2016.393
Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-394 frame super-resolution. In Advances in Neural Information Processing Systems, 2015.395
Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution396 network using dynamic upsampling filters without explicit motion compensation. In 2018 IEEE397 Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA,398 June 18-22, 2018. IEEE Computer Society, 2018.399
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and400 super-resolution. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The401 Netherlands, October 11-14, 2016, Proceedings, Part II, 2016.402
Robert Keys. Cubic convolution interpolation for digital image processing. IEEE transactions on403 acoustics, speech, and signal processing, 1981.404
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro405 Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-406 realistic single image super-resolution using a generative adversarial network. In 2017 IEEE407 Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July408 21-26, 2017. IEEE Computer Society, 2017.409
Renjie Liao, Xin Tao, Ruiyu Li, Ziyang Ma, and Jiaya Jia. Video super-resolution via deep draft-410 ensemble learning. In 2015 IEEE International Conference on Computer Vision, ICCV 2015,411 Santiago, Chile, December 7-13, 2015. IEEE Computer Society, 2015.412
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep resid-413 ual networks for single image super-resolution. In 2017 IEEE Conference on Computer Vision414 and Pattern Recognition Workshops, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26,415 2017. IEEE Computer Society, 2017.416
Douglas Maraun, Martin Widmann, José M Gutiérrez, Sven Kotlarski, Richard E Chandler, Elke417 Hertig, Joanna Wibig, Radan Huth, and Renate AI Wilcke. Value: A framework to validate418 downscaling approaches for climate change studies. Earth’s Future, 3(1):1–14, 2015.419
Brian R. Nelson, Olivier P. Prat, D.-J. Seo, and Emad Habib. Assessment and Implications of420 NCEP Stage IV Quantitative Precipitation Estimates for Product Intercomparisons. Weather and421 Forecasting, 2016.422
NSF. Nsf geosciences directorate funding by institution type. AGI Report, 2020.423
Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts.424 Distill, 2016.425
SC Pryor and JT Schoof. Differential credibility assessment for statistical downscaling. Journal of426 Applied Meteorology and Climatology, 59(8):1333–1349, 2020.427
Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan428 Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skillful precipitation429 nowcasting using deep generative models of radar. Nature, 2021.430
Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Car-431 valhais, et al. Deep learning and process understanding for data-driven earth system science.432 Nature, 2019.433
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image434 recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning435 Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceed-436 ings, 2015.437
Michael L Stein. Interpolation of spatial data: some theory for kriging. Springer Science & Business438 Media, 2012.439
Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-440 resolution. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy,441 October 22-29, 2017. IEEE Computer Society, 2017.442
Matias Tassano, Julie Delon, and Thomas Veit. Fastdvdnet: Towards real-time deep video denois-443 ing without flow estimation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern444 Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.445
Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. TDAN: temporally-deformable alignment446 network for video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and447 Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.448
Mark S. Veillette, Siddharth Samsi, and Christopher J. Mattioli. SEVIR : A storm event imagery449 dataset for deep learning applications in radar and satellite meteorology. In Advances in Neu-450 ral Information Processing Systems 33: Annual Conference on Neural Information Processing451 Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.452
Daniel Walton, Neil Berg, David Pierce, Ed Maurer, Alex Hall, Yen-Heng Lin, Stefan Rahimi, and453 Dan Cayan. Understanding differences in california climate projections produced by dynamical454 and statistical downscaling. Journal of Geophysical Research: Atmospheres, 2020.455
Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen456 Change Loy. Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings457 of the European Conference on Computer Vision (ECCV), pp. 0–0, 2018.458
Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
BL White, A Singh, and A Albert. Downscaling numerical weather models with GANs. AGUFM, 2019.
Adrienne M Wootten, Elias C Massoud, Agniv Sengupta, Duane E Waliser, and Huikyo Lee. The effect of statistical downscaling on the weighting of multi-model ensembles of precipitation. Climate, 8(12):138, 2020.
Youlong Xia, Kenneth Mitchell, Michael Ek, Justin Sheffield, Brian Cosgrove, Eric Wood, Lifeng Luo, Charles Alonge, Helin Wei, Jesse Meng, Ben Livneh, Dennis Lettenmaier, Victor Koren, Qingyun Duan, Kingtse Mo, Yun Fan, and David Mocko. Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. Journal of Geophysical Research: Atmospheres, 2012.
Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, and Chenliang Xu. Zooming Slow-Mo: Fast and accurate one-stage space-time video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T. Freeman. Video enhancement with task-oriented flow. Int. J. Comput. Vis., 2019.
Xuebin Zhang and Feng Yang. RClimDex (1.0) user manual. Climate Research Branch Environment Canada, 22, 2004.
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, Lecture Notes in Computer Science. Springer, 2018. | 1. What are the strengths and weaknesses of the proposed RainNet dataset?
2. How does the author justify the choice of datasets used to create RainNet?
3. What are the limitations of the NLDAS and Stage IV datasets used in creating RainNet?
4. Why did the authors choose to use a 17-fold cross-validation procedure, and how did they determine that it was sufficient?
5. How do the results reported in Table 1 compare to downsampling the high-resolution data?
6. What other approaches could have been explored for precipitation downscaling, and how do they compare to the proposed method?
7. How does the author support the claim that the error of weather forecast would increase exponentially over time from the initial error of the input dataset?
8. What minor errors or inconsistencies can be found throughout the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes a dataset called RainNet for studying precipitation downscaling. RainNet consists of ~62k pairs of low-res/high-res precipitation maps from various meteorological events over the east coast of the US over 17 years. This dataset was created from two existing products: NLDAS and the NCEP's Stage IV 4km gridded precipitation data. Further, the paper proposes two metrics, Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). Finally, the paper compares several super-resolution methods from the computer vision literature, traditional statistical approaches to downscaling, as well as the proposed architecture/method on the proposed dataset to form a set of benchmark results.
Review
The topic of the paper is relevant -- a large-scale dataset of gridded, paired, high-resolution and low-resolution precipitation measurements would be both interesting and highly useful for exploring deep learning approaches for earth science applications. However, this paper is lacking in several key dimensions:
First, the paper does not adequately describe the proposed dataset itself:
From Section 3.1 it seems that the data comes from the NLDAS dataset and the NCEP Stage IV 4km gridded precipitation data. More details are needed here, for example, the NLDAS dataset has two versions (001 having been discontinued in April 2020). Which products exactly were used? What are the limitations of these datasets (e.g. NLDAS is a reanalysis dataset)?
The resolution of the high-resolution data is 0.04 degrees and the resolution of the low-resolution data is 0.125 degrees; however, the high-resolution and low-resolution images differ by exactly 3x, which implies that a resampling step was used at some point and should be detailed. Why were image sizes of 624x999 and 208x333 used? Practically, in most deep learning cases these will need to be cropped/resized anyway to accommodate downsampling.
Figure 1 seems to show that the low resolution data is clipped to the coastline? If this is the case, then it would present a serious limitation of the dataset as the high-resolution data looks like it is not clipped to the coast.
The paper argues that this is the first real dataset for studying the downscaling task, but it does not argue why it is insufficient to simply downsample the high-resolution data to get low-resolution data. If it is the case that there is a large difference between the downsampled high-resolution data and the low-resolution data, then this should be discussed.
Second, the reported results (Table 1) are from a 17-fold cross-validation procedure over time, but are shown without standard deviations or confidence intervals. The difference between some methods is very small, and could easily be insignificant. Deep learning approaches can show large differences in performance with everything except the random seed held the same, so it is important to at least report the standard deviation over runs. As is, it is difficult to draw conclusions about the quality of the different approaches.
Finally, there are many claims and details throughout the paper that need to be cited/addressed. For example:
The first sentence of Section 2 is incorrect. Bauer et al. (2015) explain that the beginning of the 20th century (not the beginning of the 19th century as written in line 89) is when "geoscientists recognized that predicting the state of the atmosphere could be treated as an initial value problem...". The Navier-Stokes equations were only discovered in the mid-19th century.
(Line 252) "It is worth mentioning that our proposed model and the above four frameworks together form a relatively complete candidate set of dynamic estimation solutions." -- this is a strong, unnecessary, claim without any support -- how do the proposed model and four frameworks form a relatively complete set of dynamic estimation solutions?
Lines 131 and 133 include "GNU Free Documentation License 1.2" completely out of context?
"The precision of weather and climate prediction is highly dependent on the resolution and reliability of the initial environmental input variables, and spatial precipitation downscaling is the most promising solution." -- this is a bold claim and needs a citation.
"The error of weather forecast would increase exponentially over time from this initial error of input dataset." (lines 105-106) the error does increase exponentially over time.
Minor points:
"e.t.c." --> "etc." throughout
"Unfortunately, there are looming issues hinders the" --> "Unfortunately, there are looming issues that hinder the"
"Rainnet" --> "RainNet" throughout
"quantile-quantile maps to analyzing the maps." (lines 56-57) doesn't make sense
"Hurricane, Squall" --> "hurricane, squall" (line 59)
"alleviate these obstacles" consider rewording "alleviate these challenges", "overcome these obstacles"
"REAL" --> "real" (line 82)
Sentence incomplete (line 104-105)
Caption of Figure 1 says, "To learn more about the dataset, please visit our project website (coming soon) and supplementary material." The main contribution of the paper, however, is this dataset, therefore the caption should reference the paper.
Line 219 says more details about the metrics are given in the supplementary materials; however, the supplementary materials first copy the text from the main paper word-for-word (including the sentence about referencing the supplementary material...)
The list of dates in line 279-282 should be described differently.
In the Proposed Framework subsection (line 255) H, L, and T are not described.
Consider using font formatting instead of coloring in Table 1 to make the table more easily interpreted by readers with colorblindness. The red highlighting, in particular, will be difficult for readers with the most common form of colorblindness.
Figure 4, caption is potentially unfinished. |
ICLR | Title
RainNet: A Large-Scale Imagery Dataset for Spatial Precipitation Downscaling
Abstract
Contemporary deep learning frameworks have been applied to solve meteorological problems (e.g., front detection, synthetic radar generation, precipitation nowcasting, etc.) and have achieved highly promising results. Spatial precipitation downscaling is one of the most important meteorological problems. However, the lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and advanced deep-learning models for precipitation downscaling. To overcome these obstacles, we present the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps for over 17 years, ready to help the evolution of deep models in precipitation downscaling. Specifically, the precipitation maps carefully collected in RainNet cover various meteorological phenomena (e.g., hurricanes, squalls, etc.), which is of great help to improve model generalization ability. In addition, the map pairs in RainNet are organized in the form of image sequences (720 maps per month, or 1 map per hour), showing complex physical properties, e.g., temporal misalignment, temporal sparsity, and fluid properties. Two machine-learning-oriented metrics are specifically introduced to evaluate or verify the comprehensive performance of the trained model (e.g., prediction map reconstruction accuracy). To illustrate the applications of RainNet, 14 state-of-the-art models, including deep models and traditional approaches, are evaluated. To fully explore potential downscaling solutions, we propose an implicit physical estimation framework to learn the above characteristics. Extensive experiments demonstrate the value of RainNet in training and evaluating downscaling models.
1 INTRODUCTION
Deep learning has made enormous breakthroughs in the field of computer vision and is extremely good at extracting valuable knowledge from large amounts of data. In recent years, with the development of computer science, a deluge of Earth system data is continuously being obtained, coming from sensors all over the earth and even in space. These ever-increasing massive amounts of data with different sources and structures challenge the geoscience community, which lacks practical approaches to understand and further utilize the raw data (Reichstein et al. (2019)). Specifically, several preliminary works (Groenke et al. (2020); White et al. (2019); He et al. (2016); Ravuri et al. (2021); Angell & Sheldon (2018); Veillette et al. (2020)) try to introduce machine learning and deep learning frameworks to solve meteorological problems, e.g., spatial precipitation downscaling.
In this paper, we focus on the spatial precipitation downscaling task. Spatial precipitation downscaling is a procedure to infer high-resolution meteorological information from low-resolution variables, and it is one of the most important upstream components for meteorological tasks (Bauer et al. (2015)). The precision of weather and climate prediction is highly dependent on the resolution and reliability of the initial environmental input variables, and spatial precipitation downscaling is one of the most promising solutions. Improving weather/climate forecasts and Geo-data quality saves tremendous money and lives; with a fiscal year 2020 budget of over $1 billion, NSF funds thousands of colleges in the U.S. to research these topics (NSF (2020)).
Unfortunately, there are looming issues that hinder the research of spatial precipitation downscaling in the machine learning community: 1) Lack of "machine-learning ready" datasets. The existing
machine-learning-based downscaling methods are only applied to ideal retrospective problems and verified on simulated datasets (e.g., mapping a bicubic downsampling of precipitation generated by a weather forecast model to the original data (Berrisford et al. (2011))), which significantly weakens the credibility of the feasibility, practicability, and effectiveness of the methods. It is worth mentioning that data obtained by simulated degradation methods (e.g., bicubic) is completely different from real data, which is usually collected by two measurement systems (e.g., satellite and radar) with different precision. The lack of a well-organized and annotated large-scale dataset hinders the training and verification of more effective and complex deep-learning models for precipitation downscaling. 2) Lack of tailored metrics to evaluate machine-learning-based frameworks. Unlike the deep learning (DL) and machine learning (ML) communities, scientists in meteorology usually employ maps/charts to assess downscaling models case by case based on domain knowledge (He et al. (2016); Walton et al. (2020)), which hinders the application of RainNet in the DL/ML communities. For example, He et al. (2016) use log-semivariance (a spatial metric for local precipitation) and quantile-quantile maps to analyze the downscaled maps. 3) An efficient downscaling deep-learning framework should be established. Contrary to image data, this real precipitation dataset covers various types of real meteorological phenomena (e.g., hurricanes, squalls, etc.) and shows physical characteristics (e.g., temporal misalignment, temporal sparsity, and fluid properties) that challenge downscaling algorithms. Traditional, computationally dense, physics-driven downscaling methods are powerless to handle the increasing meteorological data size and are not flexible to multiple data sources.
To overcome these obstacles, we propose the first large-scale spatial precipitation downscaling dataset, named RainNet, which contains more than 62,400 pairs of high-quality low/high-resolution precipitation maps for over 17 years, ready to help the evolution of deep models in spatial precipitation downscaling. The proposed dataset covers more than 9 million square kilometers of land area, contains both wet and dry seasons, and includes diverse meteorological phenomena. To facilitate DL/ML and other researchers in using RainNet, we introduce the 6 most relevant indices to evaluate downscaling models: mesoscale peak precipitation error (MPPE), heavy rain region error (HRRE), cumulative precipitation mean square error (CPMSE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). In order to further simplify the application of the indices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). Unlike video super-resolution, the motion of the precipitation region is non-rigid (i.e., fluid), while video super-resolution mainly concerns rigid-body motion estimation. To fully explore how to alleviate the mentioned predicament, we propose an implicit dynamics estimation driven downscaling deep learning model. Our model hierarchically aligns adjacent precipitation maps, that is, implicit motion estimation, which is very simple but exhibits highly competitive performance. Based on meteorological science, we also show that the dataset we constructed contains the full information needed to recover higher-resolution observations from lower-resolution ones.
The main contributions of this paper are:
• To the best of our knowledge, we present the first real (non-simulated) large-scale spatial precipitation downscaling dataset for deep learning;
• We introduce 2 simple metrics to evaluate downscaling models;
• We propose a downscaling model with strong competitiveness. We evaluate 14 competitive potential solutions on the proposed dataset, and analyze the feasibility and effectiveness of these solutions.
2 BACKGROUND
At the beginning of the 20th century, geoscientists recognized that predicting the state of the atmosphere could be treated as an initial value problem of mathematical physics, wherein future weather is determined by integrating the governing partial differential equations, starting from the observed current weather. Today, this paradigm translates into solving a system of nonlinear differential equations at about half a billion points per time step and accounting for dynamic, thermodynamic, radiative, and chemical processes working on scales from hundreds of meters to thousands of kilometers and from seconds to weeks (Bauer et al. (2015)). The Navier–Stokes and mass continuity equations (including the effect of the Earth's rotation), together with the first law of thermodynamics
and the ideal gas law, represent the full set of prognostic equations in the atmosphere, describing the change in space and time of wind, pressure, density, and temperature (formulas given in the supplementary material) (Bauer et al. (2015)). These equations have to be solved numerically using spatial and temporal discretization because of the mathematical intractability of obtaining analytical solutions, and this approximation creates a distinction between so-called resolved and unresolved scales of motion.
2.1 SPATIAL DOWNSCALING OF PRECIPITATION
The global weather forecast model, treated as a computational problem, relies on high-quality initial data input. The error of a weather forecast would increase exponentially over time from this initial error in the input dataset. Downscaling is one of the most important approaches to improve the initial input quality. Precipitation is one of the essential atmospheric variables related to daily life. It can easily be observed by many means, e.g., gauge stations, radar, and satellites. Applying downscaling methods to precipitation and creating high-resolution rainfall is far more meaningful than deriving other variables, and it is a natural initial task for testing deep learning's power in geoscience. The traditional downscaling methods can be separated into dynamic and statistical downscaling.
Dynamic downscaling treats downscaling as an optimization problem constrained by physical laws. Dynamic downscaling methods find the most likely precipitation over space and time under pre-defined physical laws. It usually takes over 6 hours to downscale a 6-hour precipitation scenario globally on supercomputers (Courtier et al. (1994)). As dynamic downscaling relies on pre-defined, known macroscopic physics, a more flexible weather downscaling framework that
could easily blend different sources of observations and describe more complex physical phenomena on different scales is desperately needed.
Statistical downscaling tries to speed up the dynamic downscaling process. The input of statistical downscaling is usually dynamic model results or two different observation datasets on different scales. However, due to the quality of statistical downscaling results, people rarely apply statistical downscaling to weather forecasts. These methods are currently applied in tasks that do not require high data quality but rather a more qualitative understanding, e.g., climate projection, which forecasts the weather for hundreds of years on coarse grids and uses statistical downscaling to get detailed knowledge of the medium-scale future climate system.
3 RAINNET: SPATIAL PRECIPITATION DOWNSCALING IMAGERY DATASET
3.1 DATA COLLECTION AND PROCESSING
To build up a standard realistic (non-simulated) downscaling dataset for computer vision, we selected the eastern coast of the United States, which covers a large region (7 million km2; 105°∼65°W, 25°∼50°N; GNU Free Documentation License 1.2) and has 20 years of high-quality precipitation observations. We collected two precipitation data sources: the National Stage IV QPE Product (StageIV (Nelson et al. (2016)); high resolution at 0.04° (approximately 4 km); GNU Free Documentation License 1.2) and the North American Land Data Assimilation System (NLDAS (Xia et al. (2012)); low resolution at 0.125° (approximately 13 km)). StageIV is mosaicked into a national product at the National Centers for Environmental Prediction (NCEP) from the regional hourly/6-hourly multi-sensor (radar+gauges) precipitation analyses (MPEs) produced by the 12 River Forecast Centers over the continental United States, with some manual quality control done at the River Forecast Centers (RFCs). NLDAS is constructed as a quality-controlled, spatially and temporally consistent dataset from gauges and remote sensors to support modeling activities. Both products are updated hourly and are available from 2002 to the present.
In our dataset, we further selected the eastern coast region for the rainy season (July∼November, covering the hurricane season; hurricanes pour over 10% of annual rainfall in less than 10 days). We matched the coordinate system to the lat-lon system for both products and further labeled all the hurricane periods happening in the last 17 years. These heavy rain events are the largest challenge for weather forecasting and downscaling products, as heavy rain can stimulate a wide-spreading flood, threatening local lives and triggering public evacuation. If people underestimate the rainfall, a potential flood would be underrated, while over-estimating the rainfall would lead to unnecessary evacuation orders and flood protection, which is also costly.
3.2 DATASET STATISTICS
At the time of this work, we have collected and processed precipitation data for the rainy season for 17 years, from 2002 to 2018: one precipitation map pair per hour, 24 precipitation map pairs per day. In detail, we have collected 85 months, or 62,424 hours, totaling 62,424 pairs of high-resolution and low-resolution precipitation maps. The size of the high-resolution precipitation maps is 624 × 999, and the size of the low-resolution maps is 208 × 333. Various meteorological phenomena and precipitation conditions (e.g., hurricanes, squall lines, etc.) are covered in these data. The precipitation map pairs in RainNet are stored in HDF5 files that make up 360 GB of disk space. We select 2 typical meteorological phenomena and visualize them in Fig. 1. Our data is collected from satellites, radars, gauge stations, etc., which covers the inherent working characteristics of different meteorological measurement systems. Compared with traditional methods that generate data at different resolutions through physical model simulation, our dataset is of great help for deep models to learn real meteorological laws.
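For concreteness, the sketch below shows one way to expose such HDF5-stored map pairs to a deep learning framework. The key names ("lr", "hr") and the one-array-per-file layout are illustrative assumptions; the paper only states that the pairs are stored in HDF5.

```python
import h5py
import torch
from torch.utils.data import Dataset

class RainNetPairs(Dataset):
    """Minimal reader for paired low/high-resolution precipitation maps.
    Assumed file layout: datasets "lr" and "hr" indexed by hour."""
    def __init__(self, h5_path):
        self.h5_path = h5_path
        with h5py.File(h5_path, "r") as f:
            self.length = f["lr"].shape[0]  # number of hourly pairs

    def __len__(self):
        return self.length

    def __getitem__(self, idx):
        # Open lazily per item so multi-worker data loading stays safe.
        with h5py.File(self.h5_path, "r") as f:
            lr = torch.from_numpy(f["lr"][idx]).float()  # e.g. 208 x 333
            hr = torch.from_numpy(f["hr"][idx]).float()  # e.g. 624 x 999
        return lr.unsqueeze(0), hr.unsqueeze(0)          # add channel axis
```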
3.3 DATASET ANALYSIS
In order to help design a more appropriate and effective precipitation downscaling model, we have explored the properties of the dataset in depth. As mentioned above, our dataset is collected from multiple sensor sources (e.g., satellite, weather radar, etc.), which makes the data show a certain extent of misalignment. Our efforts here cannot eliminate the misalignment; it is an intrinsic
problem brought by the fusion of multi-sensor meteorological data. Limited by observation methods (e.g., satellites can only collect data when they fly over the observation area), meteorological data is usually temporally sparse; e.g., in our dataset, the sampling interval between two precipitation maps is one hour. This temporal sparsity leads to serious difficulties in the utilization of precipitation sequences. Additionally, the movement of the precipitation position is directly related to the cloud. It is a fluid movement process that is completely different from the rigid-body movement addressed in super-resolution. At the same time, the cloud will grow or dissipate in the process of flowing and even form new clouds, which further complicates the process. In a nutshell, although existing SR is a potential solution for downscaling, there is a big difference between the two, especially regarding the three characteristics of downscaling mentioned above: temporal misalignment, temporal sparsity, and fluid properties, which make the dynamic estimation of precipitation more challenging.
4 EVALUATION METRICS
Due to the differences between downscaling and traditional image super-resolution, metrics that work well for SR tasks may not be sufficient for precipitation downscaling. By gathering metrics from the meteorological literature (including Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020)), we select and rename the 6 most common metrics (a metric may have multiple names in different literature) to reflect downscaling quality: mesoscale peak precipitation error (MPPE), cumulative precipitation mean square error (CPMSE), heavy rain region error (HRRE), cluster mean distance (CMD), heavy rain transition speed (HRTS), and average miss moving degree (AMMD). These 6 metrics can be separated into reconstruction metrics (MPPE, HRRE, CPMSE, AMMD) and dynamic metrics (HRTS and CMD).
The MPPE (mm/hour) is calculated as the difference of the top quantile between the generated and real rainfall datasets, considering both spatial and temporal properties of mesoscale meteorological systems, e.g., hurricanes and squalls. This metric is used in most of these papers (for example, Zhang & Yang (2004); Maraun et al. (2015); Ekström (2016); He et al. (2016); Pryor & Schoof (2020); Wootten et al. (2020) suggest quantile analysis to evaluate downscaling quality).
The CPMSE (mm²/hour²) measures the cumulative rainfall difference on each pixel over the time axis of the test set, which reflects the spatial reconstruction property. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020), calculated as the pixel-level difference of monthly rainfall, and in He et al. (2016) as a pixel-level difference of cumulative rainfall with different lengths of record.
The HRRE (km²) measures the difference of heavy rain coverage on each time slice between the generated and labeled test sets, which reflects the temporal reconstruction ability of the models. The AMMD (radian) measures the average angle difference between main rainfall clusters. Similar metrics are used in Zhang & Yang (2004); Maraun et al. (2015); Wootten et al. (2020) as rainfall coverage at a number of precipitation levels, and in He et al. (2016); Pryor & Schoof (2020) as a continuous spatial analysis.
As a single-variable dataset, it is hard to evaluate the ability of different models to capture precipitation dynamics when temporal information is not included (a multi-variable dataset may include wind speed, a typical variable representing dynamics). So here we introduce first-order temporal and spatial variables to evaluate the dynamical property of the downscaling results. Similar approaches are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020). The CMD (km) physically compares the location difference of the main rainfall systems between the generated and labeled test sets, which can also be understood as the RMSE of the first-order derivative of the precipitation data in the spatial directions. The HRTS (km/hour) measures the difference in the moving speed of the main rainfall system between the generated and labeled test sets, which reflects the ability of models to capture the dynamic property and can also be understood as the RMSE of the first-order derivative of the precipitation data in the temporal direction. Similar metrics are suggested in Maraun et al. (2015); Ekström (2016); Pryor & Schoof (2020) as auto-regression analysis and differential analysis.
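To make the derivative reading above concrete, the following sketch computes CMD-like and HRTS-like scores as RMSEs of spatial and temporal first differences. This is only the simplified first-order-derivative interpretation stated above, not the official cluster-based implementation, whose equations are in the supplementary materials.

```python
import numpy as np

def cmd_like(pred, gt):
    # pred, gt: (T, H, W) precipitation sequences; spatial first derivatives.
    dy_p, dx_p = np.gradient(pred, axis=(1, 2))
    dy_g, dx_g = np.gradient(gt, axis=(1, 2))
    return np.sqrt(np.mean((dy_p - dy_g) ** 2 + (dx_p - dx_g) ** 2))

def hrts_like(pred, gt):
    # Hour-to-hour change along the time axis (temporal first derivative).
    dt_p = np.diff(pred, axis=0)
    dt_g = np.diff(gt, axis=0)
    return np.sqrt(np.mean((dt_p - dt_g) ** 2))
```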
More details about the metrics and their equations are given in the supplementary materials. One metrics group (MPPE, HRRE, CPMSE, AMMD) mainly measures the rainfall deviation between the generated precipitation maps and the ground truth. The other group (HRTS and CMD) mainly measures the dynamic deviation of the generated precipitation maps. In order to further simplify the application of the indices, we abstract them into two weighted and summed metrics: Precipitation Error Measure (PEM) and Precipitation Dynamics Error Measure (PDEM). We first align the dimensions of these two groups of metrics respectively. The first group of metrics (MPPE, HRRE, CPMSE, AMMD) is normalized, weighted, and summed to get the Precipitation Error Measure (PEM). Following Gupta et al. (1999), all the metrics are converted to Percent Bias (PBIAS) to be suitable for metric weighting. The original definition of PBIAS is the bias divided by the observation, $\mathrm{PBIAS} = |Q_{\mathrm{model}} - Q_{\mathrm{obs}}| / |Q_{\mathrm{obs}}|$. Here we rewrite the original metrics as PBIAS by dividing each metric by the annual mean observation of the original variable (AMO), $\mathrm{PBIAS}^{\mathrm{PEM}}_i = |\mathrm{Metrics}^{\mathrm{PEM}}_i| / |\mathrm{AMO}^{\mathrm{PEM}}_i|$, with $\mathrm{Metrics}^{\mathrm{PEM}}_i \in \{\mathrm{MPPE}, \mathrm{HRRE}, \mathrm{CPMSE}, \mathrm{AMMD}\}$. In our dataset, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{MPPE}} = 64$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{HRRE}} = 533$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{CPMSE}} = 0.64$, $\mathrm{AMO}^{\mathrm{PEM}}_{\mathrm{AMMD}} = 332$, $\mathrm{AMO}^{\mathrm{PDEM}}_{\mathrm{HRTS}} = 15$, and $\mathrm{AMO}^{\mathrm{PDEM}}_{\mathrm{CMD}} = 26$. The metrics are then ensembled into a single metric (PEM) with equal weights, $\mathrm{PEM} = \sum_i 0.25 \cdot \mathrm{PBIAS}^{\mathrm{PEM}}_i$. Following the same procedure, we ensemble the second group of dynamic metrics (HRTS and CMD) into a single metric, $\mathrm{PDEM} = \sum_i 0.5 \cdot \mathrm{PBIAS}^{\mathrm{PDEM}}_i$.
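Assuming the six raw metrics have already been computed, the ensembling above reduces to a few lines; the AMO constants are the values reported for this dataset.

```python
# PBIAS normalization and equal-weight ensembling into PEM / PDEM.
AMO = {"MPPE": 64, "HRRE": 533, "CPMSE": 0.64, "AMMD": 332,
       "HRTS": 15, "CMD": 26}

def pbias(metrics):
    # metrics: dict mapping metric name -> computed value
    return {k: abs(v) / abs(AMO[k]) for k, v in metrics.items()}

def pem(metrics):
    # metrics holds MPPE, HRRE, CPMSE, AMMD
    return sum(0.25 * v for v in pbias(metrics).values())

def pdem(metrics):
    # metrics holds HRTS, CMD
    return sum(0.5 * v for v in pbias(metrics).values())
```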
We also include the most commonly used metric, RMSE, as a single metric in our metrics list. RMSE can evaluate both the reconstruction and the dynamic properties of the downscaling results.
5 APPLICATIONS OF RAINNET IN SPATIAL PRECIPITATION DOWNSCALING
As a potential solution, Super-Resolution (SR) frameworks are generally divided into Single-Image Super-Resolution (SISR) and Video Super-Resolution (VSR). Video Super-Resolution is able to leverage multi-frame information to restore images, which better matches the nature of downscaling; we demonstrate this judgment in Sec. 6.1. The VSR pipeline usually contains three components: deblurring, inter-frame alignment, and super-resolution. Deblurring and inter-frame alignment are implemented by the motion estimation module. There are four motion estimation frameworks: 1) RNN based (Keys (1981); Tao et al. (2017); Huang et al. (2015); Haris et al. (2019)); 2) optical flow (Xue et al. (2019)); 3) deformable-convolution based (Tian et al. (2020); Xiang et al. (2020); Wang et al. (2019)); 4) temporal concatenation (Jo et al. (2018); Caballero et al. (2017); Liao et al. (2015)). In fact, there is another motion estimation scheme, first proposed for the noise reduction task (Tassano et al. (2020)), which achieves excellent video noise reduction performance. Inspired by Tassano et al. (2020), we design an implicit dynamics estimation model for spatial precipitation downscaling. Together, our proposed model and the above four frameworks cover the main families of dynamic estimation solutions.
Proposed Framework. As shown in Fig. 2, our framework consists of two components: an implicit dynamic estimation module and a downscaling backbone. These two parts are trained jointly. Suppose there are $N$ adjacent low-resolution precipitation maps $\{I^L_{T-(N-1)/2}, \ldots, I^L_T, \ldots, I^L_{T+(N-1)/2}\}$. The task is to reconstruct the high-resolution precipitation map $I^H_T$ corresponding to $I^L_T$. The implicit dynamic estimation module is composed of multiple vanilla networks $\mathcal{A} = \{\mathcal{A}_1, \ldots, \mathcal{A}_{N-2}\}$ ($N = 5$ in this paper) sharing weights. Each vanilla network receives three adjacent frames as input and outputs an intermediate result. The intermediate result can be considered a frame with implicit dynamic alignment. We concatenate all the intermediate frames as the input of the next module. The specific structure of the vanilla network can be found in the supplementary materials. The main task of the downscaling backbone is to restore the high-resolution precipitation map $I^H_T$ based on the aligned intermediate frames. In order to make full use of multi-scale information, we use multiple Residual-in-Residual Dense Blocks (Wang et al. (2018)) in the network. We employ interpolation+convolution (Odena et al. (2016)) as the up-sampling operator to reduce checkerboard artifacts. After processing by the downscaling backbone, we obtain the final estimated HR map $\hat{I}^H_T$.
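A shape-level sketch of this two-stage design is given below. The vanilla alignment network and the RRDB backbone are placeholders passed in by the caller; their actual structures follow the paper and its supplementary materials, and the channel counts here are assumptions for illustration. The 3x scale factor matches the dataset's LR-to-HR ratio (208 × 333 to 624 × 999).

```python
import torch
import torch.nn as nn

class ImplicitDynamics(nn.Module):
    """Weight-shared vanilla networks applied to sliding triplets of frames."""
    def __init__(self, vanilla: nn.Module, n_frames: int = 5):
        super().__init__()
        self.vanilla = vanilla      # one module reused for every triplet
        self.n_frames = n_frames

    def forward(self, frames):     # frames: (B, N, 1, H, W)
        aligned = [self.vanilla(frames[:, i:i + 3].flatten(1, 2))
                   for i in range(self.n_frames - 2)]  # N - 2 triplets
        return torch.cat(aligned, dim=1)  # concatenated aligned frames

class Downscaler(nn.Module):
    def __init__(self, vanilla, backbone, scale: int = 3):
        super().__init__()
        self.align = ImplicitDynamics(vanilla)
        self.backbone = backbone   # e.g. stacked RRDBs producing 64 channels
        # Interpolation followed by convolution to limit checkerboard artifacts.
        self.up = nn.Upsample(scale_factor=scale, mode="bilinear",
                              align_corners=False)
        self.head = nn.Conv2d(64, 1, 3, padding=1)

    def forward(self, frames):
        feats = self.backbone(self.align(frames))
        return self.head(self.up(feats))  # estimated HR map
```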
Model objective. The downscaling task is essentially to restore high-resolution precipitation maps. We learn from the super-resolution task and also apply $L_1$ and perceptual losses (Johnson et al. (2016)) as the training loss of our model. The model objective is shown below:
$$\mathcal{L}(\hat{I}^H_T, I^H_T) = \| \hat{I}^H_T - I^H_T \|_1 + \lambda \, \| \phi(\hat{I}^H_T) - \phi(I^H_T) \|_2, \quad (1)$$
where $\phi$ denotes the pre-trained VGG19 network (Simonyan & Zisserman (2015)); we select Relu5_4 (without the activator (Wang et al. (2018))) as the output layer. $\lambda$ is the coefficient balancing the loss terms; $\lambda = 20$ in our framework.
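A PyTorch sketch of Eq. (1) follows. Taking the VGG19 features just before relu5_4 (slice index 35, i.e., the conv5_4 output) mirrors the ESRGAN "without the activator" convention, but the exact extractor used by the authors may differ; the single-channel-to-RGB repetition is likewise an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class DownscalingLoss(nn.Module):
    def __init__(self, lam: float = 20.0):
        super().__init__()
        # Frozen VGG19 up to conv5_4 (pre-activation of relu5_4).
        self.vgg = vgg19(pretrained=True).features[:35].eval()
        for p in self.vgg.parameters():
            p.requires_grad = False
        self.lam = lam

    def forward(self, pred, target):
        # Precipitation maps are single-channel; repeat to 3 channels for VGG.
        p3, t3 = pred.repeat(1, 3, 1, 1), target.repeat(1, 3, 1, 1)
        l1 = torch.mean(torch.abs(pred - target))          # L1 term
        perc = torch.norm(self.vgg(p3) - self.vgg(t3), p=2)  # perceptual term
        return l1 + self.lam * perc
```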
6 EXPERIMENTAL EVALUATION
We conduct spatial precipitation downscaling experiments to illustrate the application of our proposed RainNet and evaluate the effectiveness of the benchmark downscaling frameworks. Following the mainstream evaluation protocol of the DL/ML communities, cross-validation is employed. In detail, we divide the dataset into 17 parts by year, one part per rainy season (July∼November) from 2002 to 2018, and sequentially employ each year as the test set and the remaining 16 years as the training set, i.e., 17-fold cross-validation. All models maintain the same training settings and hyperparameters during the training phase. These data cover various complicated precipitation situations such as hurricanes, squall lines, different levels of rain, and sunny days. It is sufficient to select the rainy season of each year as the test set from the perspective of meteorology, as the climate of one area is normally stable.
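The year-based cross-validation amounts to a simple hold-one-year-out loop, sketched below for clarity.

```python
# 17-fold cross-validation: each rainy season (July-November) of one year is
# held out as the test set while the other 16 years form the training set.
years = list(range(2002, 2019))

def folds():
    for test_year in years:
        train_years = [y for y in years if y != test_year]
        yield train_years, test_year

for train_years, test_year in folds():
    pass  # train on train_years, evaluate on test_year
```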
6.1 BASELINES
The SISR/VSR tasks and spatial precipitation downscaling are similar to some extent, so we argue that SR models can be applied to the task as benchmark models. The input of SISR is a single image, and the model infers a high-resolution image from it; its main focus is to generate high-quality texture details to achieve pleasing visual effects. In contrast, VSR models input multiple frames of images (e.g., 3 frames, 5 frames, etc.); in our experiments, we employ 5 frames. The core idea of VSR models is to increase the resolution by complementing texture information between different frames. It is worth mentioning that VSR models are generally equipped with a motion estimation module to alleviate the challenge that object motion poses to inter-frame information registration.

We evaluated 7 state-of-the-art SISR frameworks, i.e., Bicubic (Keys (1981)), SRCNN (Dong et al. (2016)), SRGAN (Ledig et al. (2017)), EDSR (Lim et al. (2017)), ESRGAN (Wang et al. (2018)), DBPN (Haris et al. (2018)), and RCAN (Zhang et al. (2018)), and 5 VSR frameworks, i.e., SRGAN-V, EDSR-V, ESRGAN-V, RBPN (Haris et al. (2019)), and EDVR (Wang et al. (2019)), of which 3 VSR methods (SRGAN-V, EDSR-V, ESRGAN-V) are modified from SISR. Public implementations were used where available: SRCNN (https://github.com/yjn870/SRCNN-pytorch), SRGAN (https://github.com/leftthomas/SRGAN), EDSR (https://github.com/sanghyun-son/EDSR-PyTorch), ESRGAN (https://github.com/xinntao/ESRGAN), DBPN (https://github.com/alterzero/DBPN-Pytorch), RCAN (https://github.com/yulunzhang/RCAN), RBPN (https://github.com/alterzero/RBPN-PyTorch), and EDVR (https://github.com/xinntao/EDVR). In particular, we build SRGAN-V, EDSR-V, and ESRGAN-V by concatenating multiple frames of precipitation maps as the input of the model. In addition, we also evaluated the traditional statistical method Kriging (Stein (2012)), which is widely applied in weather forecasting. The 8 metrics mentioned above are used to quantitatively evaluate the performance of these SR models and our method. Further, we select some disastrous weather events as samples for qualitative analysis to test the models' ability to learn the dynamic properties of the weather system. We employ the PyTorch implementation of Bicubic. We use 4 NVIDIA 2080 Ti GPUs for training. We train all models with the following settings: the batch size is set to 24; precipitation maps are randomly cropped to 64 × 64; we employ the Adam optimizer with beta1 = 0.9 and beta2 = 0.99; the initial learning rate is 0.001, reduced to 1/10 every 50 epochs; and a total of 200 epochs are trained. We evaluate the benchmark frameworks with 17-fold cross-validation. The downscaling performances are shown in Tab. 1. We divide the indicators mentioned above into two groups: PDEM measures a model's ability to learn the dynamics of precipitation, while PEM illustrates a model's ability to reconstruct precipitation.
From Tab. 1, we can learn that the overall performance of the VSR methods is better than that of the SISR models, which shows that the dynamic properties mentioned above are extremely important for the downscaling model. Furthermore, it can be seen from Fig. 3 that the SISR methods are clustered in the upper-right corner of the scatter plot and the VSR methods are concentrated in the lower-left corner, which further shows that the dynamic properties of the VSR methods are overall better than those of the SISR methods. In addition, our method achieves the best performance in RMSE and PDEM, and the second-best performance in PEM. The scores show that the implicit dynamic estimation framework used is feasible and effective. It is worth mentioning that the traditional downscaling method Kriging performs better than many deep learning models (e.g., SRGAN, ESRGAN).
6.1.1 QUALITATIVE ANALYSIS
We visualize the tropical cyclone precipitation map of the 166th hour (the 6th) in September 2010 and the high-resolution precipitation maps generated by the different methods. As shown in Fig. 4, the best perceptual effects are generated by EDVR and our framework. Zooming in on the result images, the precipitation maps generated by SRGAN and EDSR present obvious checkerboard artifacts; the reason should be the relatively simple and sparse texture patterns in precipitation maps. The results generated by Bicubic, RCAN, Kriging, and SRCNN are over-smoothed. DBPN even fails to reconstruct the eye of the hurricane. In particular, the result generated by Kriging is as fuzzy as the input LR precipitation map. In conclusion, the visual effects generated by the VSR methods are generally better than those of the SISR methods and the traditional method. From the perspective of both quantitative and qualitative analysis, the dynamics estimation framework is critical for downscaling.
7 CONCLUSION
In this paper, we built the first large-scale real precipitation downscaling dataset for the deep learning community. This dataset has 62,424 pairs of HR and LR precipitation maps in total. We believe this dataset will further accelerate research on precipitation downscaling. Furthermore, we analyze the problem in depth and put forward three key challenges: temporal misalignment, temporal sparsity, and fluid properties. In addition, we propose an implicit dynamic estimation model to alleviate the above challenges. At the same time, we evaluated the mainstream SISR and VSR models and found that none of these models can solve RainNet's problems well. Therefore, the downscaling task on this dataset is still very challenging.
This work still leaves several open problems. Currently, the data domain of this research is limited to the eastern U.S.; in future research, we will enlarge the dataset to a larger domain. The dataset currently contains only a single variable; in future research, we may include more variables, e.g., temperature and wind speed.
REFERENCES
Rico Angell and Daniel R. Sheldon. Inferring latent velocities from weather radar data using gaussian processes. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 8998–9007, 2018.
Peter Bauer, Alan Thorpe, and Gilbert Brunet. The quiet revolution of numerical weather prediction. Nature, 2015.
P. Berrisford, D.P. Dee, P. Poli, R. Brugge, Mark Fielding, Manuel Fuentes, P.W. Kållberg, S. Kobayashi, S. Uppala, and Adrian Simmons. The ERA-Interim archive version 2.0. 2011.
Jose Caballero, Christian Ledig, Andrew P. Aitken, Alejandro Acosta, Johannes Totz, Zehan Wang, and Wenzhe Shi. Real-time video super-resolution with spatio-temporal networks and motion compensation. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Philippe Courtier, J-N Thépaut, and Anthony Hollingsworth. A strategy for operational implementation of 4D-Var, using an incremental approach. Quarterly Journal of the Royal Meteorological Society, 1994.
Chao Dong, Chen Change Loy, Kaiming He, and Xiaoou Tang. Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell., 2016.
Marie Ekström. Metrics to identify meaningful downscaling skill in WRF simulations of intense rainfall events. Environmental Modelling & Software, 79:267–284, 2016.
Brian Groenke, Luke Madaus, and Claire Monteleoni. ClimAlign: Unsupervised statistical downscaling of climate variables via normalizing flows. CoRR, 2020.
Hoshin Vijai Gupta, Soroosh Sorooshian, and Patrice Ogou Yapo. Status of automatic calibration for hydrologic models: Comparison with multilevel expert calibration. Journal of Hydrologic Engineering, 4(2):135–143, 1999.
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Deep back-projection networks for super-resolution. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018.
Muhammad Haris, Gregory Shakhnarovich, and Norimichi Ukita. Recurrent back-projection network for video super-resolution. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Long Beach, CA, USA, June 16-20, 2019. Computer Vision Foundation / IEEE, 2019.
Xiaogang He, Nathaniel W Chaney, Marc Schleiss, and Justin Sheffield. Spatial downscaling of precipitation using adaptable random forests. Water Resources Research, 2016.
Yan Huang, Wei Wang, and Liang Wang. Bidirectional recurrent convolutional networks for multi-frame super-resolution. In Advances in Neural Information Processing Systems, 2015.
Younghyun Jo, Seoung Wug Oh, Jaeyeon Kang, and Seon Joo Kim. Deep video super-resolution network using dynamic upsampling filters without explicit motion compensation. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018. IEEE Computer Society, 2018.
Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, 2016.
Robert Keys. Cubic convolution interpolation for digital image processing. IEEE Transactions on Acoustics, Speech, and Signal Processing, 1981.
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Renjie Liao, Xin Tao, Ruiyu Li, Ziyang Ma, and Jiaya Jia. Video super-resolution via deep draft-ensemble learning. In 2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015. IEEE Computer Society, 2015.
Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2017, Honolulu, HI, USA, July 21-26, 2017. IEEE Computer Society, 2017.
Douglas Maraun, Martin Widmann, José M Gutiérrez, Sven Kotlarski, Richard E Chandler, Elke Hertig, Joanna Wibig, Radan Huth, and Renate AI Wilcke. VALUE: A framework to validate downscaling approaches for climate change studies. Earth's Future, 3(1):1–14, 2015.
Brian R. Nelson, Olivier P. Prat, D.-J. Seo, and Emad Habib. Assessment and implications of NCEP Stage IV quantitative precipitation estimates for product intercomparisons. Weather and Forecasting, 2016.
NSF. NSF geosciences directorate funding by institution type. AGI Report, 2020.
Augustus Odena, Vincent Dumoulin, and Chris Olah. Deconvolution and checkerboard artifacts. Distill, 2016.
SC Pryor and JT Schoof. Differential credibility assessment for statistical downscaling. Journal of Applied Meteorology and Climatology, 59(8):1333–1349, 2020.
Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skillful precipitation nowcasting using deep generative models of radar. Nature, 2021.
Markus Reichstein, Gustau Camps-Valls, Bjorn Stevens, Martin Jung, Joachim Denzler, Nuno Carvalhais, et al. Deep learning and process understanding for data-driven earth system science. Nature, 2019.
Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
Michael L Stein. Interpolation of spatial data: Some theory for kriging. Springer Science & Business Media, 2012.
Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In IEEE International Conference on Computer Vision, ICCV 2017, Venice, Italy, October 22-29, 2017. IEEE Computer Society, 2017.
Matias Tassano, Julie Delon, and Thomas Veit. FastDVDnet: Towards real-time deep video denoising without flow estimation. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Yapeng Tian, Yulun Zhang, Yun Fu, and Chenliang Xu. TDAN: Temporally-deformable alignment network for video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Mark S. Veillette, Siddharth Samsi, and Christopher J. Mattioli. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
Daniel Walton, Neil Berg, David Pierce, Ed Maurer, Alex Hall, Yen-Heng Lin, Stefan Rahimi, and Dan Cayan. Understanding differences in California climate projections produced by dynamical and statistical downscaling. Journal of Geophysical Research: Atmospheres, 2020.
Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Yu Qiao, and Chen Change Loy. ESRGAN: Enhanced super-resolution generative adversarial networks. In Proceedings of the European Conference on Computer Vision (ECCV), 2018.
Xintao Wang, Kelvin CK Chan, Ke Yu, Chao Dong, and Chen Change Loy. EDVR: Video restoration with enhanced deformable convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, 2019.
BL White, A Singh, and A Albert. Downscaling numerical weather models with GANs. AGUFM, 2019.
Adrienne M Wootten, Elias C Massoud, Agniv Sengupta, Duane E Waliser, and Huikyo Lee. The effect of statistical downscaling on the weighting of multi-model ensembles of precipitation. Climate, 8(12):138, 2020.
Youlong Xia, Kenneth Mitchell, Michael Ek, Justin Sheffield, Brian Cosgrove, Eric Wood, Lifeng Luo, Charles Alonge, Helin Wei, Jesse Meng, Ben Livneh, Dennis Lettenmaier, Victor Koren, Qingyun Duan, Kingtse Mo, Yun Fan, and David Mocko. Continental-scale water and energy flux analysis and validation for the North American Land Data Assimilation System project phase 2 (NLDAS-2): 1. Intercomparison and application of model products. Journal of Geophysical Research: Atmospheres, 2012.
Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, and Chenliang Xu. Zooming Slow-Mo: Fast and accurate one-stage space-time video super-resolution. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020. IEEE, 2020.
Tianfan Xue, Baian Chen, Jiajun Wu, Donglai Wei, and William T. Freeman. Video enhancement with task-oriented flow. Int. J. Comput. Vis., 2019.
Xuebin Zhang and Feng Yang. RClimDex (1.0) user manual. Climate Research Branch Environment Canada, 22, 2004.
Yulun Zhang, Kunpeng Li, Kai Li, Lichen Wang, Bineng Zhong, and Yun Fu. Image super-resolution using very deep residual channel attention networks. In Computer Vision - ECCV 2018 - 15th European Conference, Munich, Germany, September 8-14, 2018, Proceedings, Part VII, Lecture Notes in Computer Science. Springer, 2018. | 1. What is the focus and contribution of the paper regarding weather event predictions?
2. What are the strengths of the proposed approach, particularly in terms of creating a high/low res pair dataset?
3. What are the weaknesses of the paper, especially regarding its applications and uncertainty estimation?
4. How can the paper improve its evaluation and analysis, both quantitatively and qualitatively? | Summary Of The Paper
Review | Summary Of The Paper
The paper provides a baseline method and generates a dataset of paired high/low-res precipitation maps. The paper argues that the precision of weather event predictions depends on the imagery resolution, and hence that having high-res maps will lead to better weather prediction. The eventual dataset contains a variety of diverse climate events (hurricanes, etc.), and the high/low-res pairs form the necessary annotation.
Review
Strengths:
Creates and releases a high/low-res paired dataset of real images which is ML-ready and instrumental in kicking off climate research.
Provides well-thought-out experiments (reproducing other methods, etc.) along with baselines on the proposed metrics.
Weaknesses:
One way to evaluate how much better the high-res images are is to use them in potential downstream tasks, which is not provided.
Any method that converts low-res to high-res generates some information, so uncertainty maps would help instill confidence in the predictions. The paper does not provide any uncertainty estimation.
Qualitative analysis of the methods, such as visually inspecting their utility, could help a domain scientist use this method, but such qualitative analysis is lacking.
ICLR | Title
Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks
Abstract
Active matter consists of active agents which transform energy extracted from their surroundings into momentum, producing a variety of collective phenomena. A model synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase. Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules, creating motile topological defects and turbulent fluid flows. Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified. Measuring defect dynamics can yield fundamental insights into active nematics, a class of materials that includes bacterial films and animal cells. Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate. It is expected to significantly increase the amount of video data that can be processed.
1 INTRODUCTION
1.1 ACTIVE MATTER AND ACTIVE NEMATICS
Active materials encompass a broad range of systems that convert chemical energy into mechanical work. These systems self-organize and exhibit structure on time and length scales that exceed those of the constituent particles. One important class of active materials is active nematics, composed of anisotropic particles that exert extensile or contractile stresses on neighboring particles. Natural examples of active nematics include cultures of dividing E. coli and animal cells (Doostmohammadi et al. (2016); Segerer et al. (2015); Duclos et al. (2014; 2017)). In active nematics, a single particle does not impart a net force into the system and therefore does not self-propel; however, the collective motion of many particles creates non-trivial material flows (Marchetti et al. (2013)). Understanding the fundamental physics of the collective motion of nematics can yield insight into wound healing, tumor growth, and bacterial film dynamics (Doostmohammadi et al. (2018)).
1.2 2D CONFINED ACTIVE NEMATICS AND TOPOLOGICAL DEFECTS
In this work we study a model quasi-2D active nematic system composed of microtubules (MT) and motor proteins with tunable mechanical and dynamical properties, developed by the Dogic group Henkin et al. (2014); DeCamp et al. (2015). In this system, a suspension of microtubules and motor proteins is sedimented to a surfactant-stabilized oil-water interface, creating a dense, liquid-crystalline nematic phase characterized by local orientational order. Motor proteins drive neighboring MTs to slide anti-parallel to one another, creating extensile stresses. The extensile stress makes the MT nematic inherently unstable to bend fluctuations. Undulations in the alignment of the MTs therefore grow in time, eventually saturating into the localized topological disclinations shown in figure 1. The defects are characterized by their winding number Kamien (2002). Unlike defects in passive liquid crystals, these defects create fluid flows. The comet-shaped +1/2 defects are self-propelling, while the three-fold symmetry of the -1/2 defects creates active flows that result in no net translation of the structure; active flows are shown by the yellow arrows in figure 1.
The hydrodynamic interactions between multiple defects create chaotic, turbulent-like flows. By further confining the 2D nematic to microfluidic wells, these flows can be tamed into circulating vortices Woodhouse & Goldstein (2012); Norton et al. (2018). Defect dynamics play a key role in moderating the transition from bulk-like turbulence to regular circulation. Quantifying their dynamics will help develop a better understanding of the fluid dynamics governing the behavior of this complex fluid and perhaps give insights into generic features of motile defects in biological systems. This requires researchers to analyze a large volume of videos of the 2D confined active nematics.
1.3 TRADITIONAL DEFECT DETECTING ALGORITHM AND LIMITATIONS
By utilizing the defining feature of a topological defect, its winding number, researchers developed an image processing algorithm which first generates a direction field from each image and then calculates the winding number in regions across the whole image to search for singularities. This algorithm performs reasonably well in detecting both +1/2 and -1/2 defects while being limited by two factors: varying visual quality and noise caused by imperfect experimental settings. Parameters in the algorithm need to be fine-tuned for datasets with different visual qualities, which requires a huge amount of human labor as well as knowledge of image processing. The traditional algorithm also suffers from noise in the data, including overexposure in some regions, slightly unfocused images, materials extending above the 2D confinement plane, etc., which tend to ruin the direction fields extracted from the images, resulting in poor detection results. Aside from precision, this algorithm involves intensive calculation when processing one image. The fine-tuning of parameters and slow processing speed greatly limit the efficiency of data analysis in the experiments, while the algorithm's lack of robustness compromises the detection results.
1.4 PROPOSAL OF SOLUTION
To address the problems of the traditional defect detecting algorithm, we propose to apply deep learning, specifically a deep convolutional neural network, to replace the above defect detecting algorithm based on traditional image processing techniques. Our tasks include:
• Label the positions of defects in representative experimental data.
• Train a YOLO network with bounding boxes generated according to the positions of the labelled defects.
• Test the performance of the trained defect detection model on experimental data with various visual qualities and sizes.
The varying visual quality, which greatly challenges the traditional algorithm, improves the performance of our method: adding more defects in various lighting conditions and image resolutions improves both the robustness and the precision of our algorithm.
Apart from the visual quality, the diversity of defect configurations causes trouble for the traditional detection algorithm. However, with defects of different shapes, sizes, and configurations, and sometimes under occlusion, our algorithm is able to generalize its model better to fit these varied samples, mitigating the effect of overfitting.
Another important advantage is that our detection algorithm utilizes a unified, end-to-end neural network, which enables an efficient training and testing process.
2 PATTERNS IN DATASET
The labeled dataset consists of 9 typical videos from different experimental settings. In each experiment, the active nematic is confined by a circular plate. Due to the constant input of energy, the active nematic continuously flows in a turbulent-like manner. Figures 2 and 3 show a snapshot from each video. Specifically, the first testing dataset is from the same video as the first training dataset, but the two are separated by 500 frames, so that the first image of the testing dataset has a completely different appearance from the last image of the training dataset.
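A minimal sketch of such a gap split (the function and variable names are our own, not the authors'):

```python
def split_with_gap(frames, train_end, gap=500):
    """Split a list of video frames into train/test sets, discarding
    `gap` frames in between so the two sets look visually distinct."""
    return frames[:train_end], frames[train_end + gap:]
```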
Figure 4 shows the moving trajectory of a typical +1/2 defect from nucleation to annihilation. In the first frame, a pair of +1/2 and -1/2 defects nucleates, with the +1/2 defect moving away from the -1/2 defect. In the next six frames, the +1/2 defect rotates around the center while being pushed outwards by inner materials, encountering a -1/2 defect. In the last frame, the +1/2 and -1/2 defects annihilate concurrently, balancing the topological charge of the region.
3 METHODS
When dealing with defect detection, the most intuitive approach is to find the characteristics of a defect that distinguish it from its neighborhood and make use of these characteristics to locate the regions containing defects. The defining difference between a defect region and a non-defect one is the topological winding number. A region containing a defect has a winding number of +1/2 or -1/2 around its boundary, while the winding number associated with a non-defect region is strictly 0. Previously, researchers made use of this definition and developed an effective algorithm by finding the direction field, the "grains" of an image, and calculating the winding number of different regions. As a result, this algorithm requires preprocessing to extract the direction field from the original image. Meanwhile, the performance of the detection results relies heavily on the quality of the direction field extracted from the original image. Due to imperfections and noise in the images, preprocessing is challenging. In some cases, the defects are occluded by overexposed areas or are unfocused, leading to a problematic direction field near the defect. In addition, in the experimental videos, materials such as hair and dirt are observed, which sometimes cause false detections.
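As a concrete illustration of this winding-number test, the sketch below (our own minimal version, not the authors' code) accumulates wrapped director-angle differences around a closed loop; `theta` is assumed to be a 2D array of nematic director angles, defined modulo pi, extracted from an image.

```python
import numpy as np

def winding_number(theta, loop):
    """Winding number of a nematic director field around a closed loop.

    `loop` is a sequence of (row, col) pixel coordinates tracing the
    boundary of a candidate region. Returns ~ +0.5 or -0.5 at a defect
    and ~ 0 for a defect-free region.
    """
    angles = [theta[r, c] for r, c in loop]
    total = 0.0
    for a0, a1 in zip(angles, angles[1:] + angles[:1]):
        d = a1 - a0
        # Directors are headless (defined modulo pi), so wrap each
        # angular step into the interval (-pi/2, pi/2].
        while d > np.pi / 2:
            d -= np.pi
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)
```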
To resolve these problems, we propose to treat this task as an object detection task, where we consider defects as the objects we are trying to detect in an image. We train a convolutional neural network with images containing defects whose locations are provided. During detection, our model generates bounding boxes around defects as detection results. In this method, the challenges discussed in the previous paragraph, namely the varied visual quality and the diversity of defect configurations and sizes, enlarge the effective training data and help our model generalize a defect's features better. Compared to the traditional algorithm, which requires fine-tuning for different datasets, our method can efficiently provide reliable detection results on data across various visual qualities.
3.1 DATA COLLECTION
To create a training dataset, we manually labeled the positions of positive defects and negative defects in 8800 images. In an image, we labeled a comet-like (+1/2) defect at the point where the material changes orientation most sharply. We labeled a trefoil-like (-1/2) defect at the center of the triangular material-devoid region. Figure 5a is an example of a labeled image, with +1/2 defects labeled by red stars and -1/2 defects labeled by green circles.
3.2 SUB-CLASSIFICATION FOR +1/2 DEFECTS
The experimental datasets we work on are challenging for object detection. One main challenge is the varying sizes of +1/2 defects. In Figure 5b, there are 4 +1/2 defects by definition, but defect 1 looks drastically different from defects 2, 3, and 4. We call +1/2 defects similar to defect 1 "hollow" +1/2 defects. Assigning bounding boxes of the same size to all four defects in this image is problematic. A small bounding box cannot include enough local information to recognize defect 1 as a +1/2 defect, while big bounding boxes might include features of other defects, which confuses the model. For defects 3 and 4, big bounding boxes will include the -1/2 defects next to them. To resolve this issue, since hollow +1/2 defects are commonly seen in the videos, we created a separate class for these defects when we labeled the data and assigned slightly bigger bounding boxes to this class.
After labeling all the data, we assigned a bounding box to each labeled position. Since more contextual information is needed to detect hollow +1/2 defects than to detect normal +1/2 or -1/2 defects, we assigned the sizes of the bounding boxes in a ratio of 3:3:4 for +1/2, -1/2, and hollow +1/2, respectively.
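A hedged sketch of turning labeled defect centers into YOLO-format annotations with this 3:3:4 size ratio; the class names, class ids, and base box side are our assumptions, not values from the paper:

```python
BASE = 36  # assumed base box side in pixels

CLASS_IDS = {"pos_half": 0, "neg_half": 1, "hollow_pos_half": 2}
BOX_SIDES = {"pos_half": BASE, "neg_half": BASE,
             "hollow_pos_half": BASE * 4 // 3}  # 3 : 3 : 4 ratio

def to_yolo_label(cls, cx, cy, img_w, img_h):
    """YOLO labels are: class_id x_center y_center width height,
    with all four geometric values normalized by the image size."""
    side = BOX_SIDES[cls]
    return (CLASS_IDS[cls], cx / img_w, cy / img_h,
            side / img_w, side / img_h)
```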
3.3 IMAGE AUGMENTATION
Aside from standard image augmentations such as jitter and scaling, we added random flipping and random rotation to the training images. Since the region of a defect is anisotropic, rotating the image effectively increases the robustness of the model.
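A minimal sketch of the rotation augmentation, rotating both the image and the labeled (x, y) defect centers; the names are ours, and the sign convention of the rotation should be checked against the image library in use:

```python
import numpy as np
from scipy.ndimage import rotate

def rotate_sample(image, points, rng):
    """Rotate `image` by a random angle and transform the (x, y) defect
    centers in `points` about the image center accordingly."""
    angle = rng.uniform(0.0, 360.0)
    rotated = rotate(image, angle, reshape=False, order=1)
    a = np.deg2rad(angle)
    h, w = image.shape[:2]
    center = np.array([(w - 1) / 2.0, (h - 1) / 2.0])
    # 2D rotation matrix; verify the sign matches the library's convention.
    rot = np.array([[np.cos(a), np.sin(a)],
                    [-np.sin(a), np.cos(a)]])
    new_points = (np.asarray(points, dtype=float) - center) @ rot.T + center
    return rotated, new_points
```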
3.4 COMPUTER VISION PIPELINE
In recent years, after the wide application of convolutional neural networks in computer vision, state-of-the-art object detection algorithms can be divided into two main categories. In the first category, input images are first processed by a method named selective search Uijlings et al. (2013), which generates thousands of region proposals that are likely to contain an object. These region proposals, along with the input images, are passed to a CNN which performs feature extraction and image re-sampling. The results are eventually generated by a classifier. This category of algorithms was the first to combine a selective search method with a deep CNN, where selective search performs the task of object localization and the CNN performs the task of classification. The R-CNN series (including R-CNN Girshick et al. (2014), Fast R-CNN Girshick (2015), and Faster R-CNN Ren et al. (2015)) belongs to this category. The second category is YOLO Redmon et al. (2016), which implements an end-to-end deep neural network that performs classification and localization all at once. The input images are first divided into 13 by 13 grid cells, where each grid cell is responsible for detecting the objects falling into it. The output of the network is forced to be a tensor which includes the coordinates of bounding boxes and a confidence value. YOLO outperforms its counterparts in speed but lags in average precision.
In this project, we chose to implement YOLO because it is fast and easy to optimize due to its end-to-end design. While implementing the YOLO algorithm, we mainly used the original settings from its paper, except for some changes in the pipeline:
• Changed the number of filters in the last convolutional layer to fit our dataset.
• Decreased the jitter value from 0.3 to 0.05 to avoid occlusion or loss of defects located near the boundary.
• Since our objects (defects) are relatively smaller than the objects in the PASCAL VOC dataset, we increased the non-object coefficient from 0.5 to 0.8 to increase the loss from confidence predictions for boxes that do not contain objects (see the sketch after this list).
• Took out the image augmentations regarding hue, exposure, and saturation, since our images are binary.
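The non-object coefficient above is the weight on the confidence loss for boxes that contain no defect. A simplified sketch of that part of the YOLO loss (with the confidence target fixed to 1 rather than the IoU used in the original paper, and with illustrative names) is:

```python
import numpy as np

LAMBDA_NOOBJ = 0.8  # raised from YOLO's default of 0.5, as described above

def confidence_loss(pred_conf, has_object):
    """Squared-error confidence loss over all predicted boxes.

    `pred_conf`: predicted objectness per box; `has_object`: a 0/1 mask
    of boxes responsible for a ground-truth defect.
    """
    obj_term = np.sum(has_object * (pred_conf - 1.0) ** 2)
    noobj_term = LAMBDA_NOOBJ * np.sum((1 - has_object) * pred_conf ** 2)
    return obj_term + noobj_term
```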
4 RESULTS
4.1 DETECTION RESULTS COMPARISON
Tables 1 and 2 display the evaluation results for +1/2 defects, -1/2 defects, and the combination of the two classes, for both the traditional method and YOLO, on the three testing datasets separately and altogether. The evaluation results include precision, recall, and F1-score values. The total testing dataset contains 1700 images, with 1500 from dataset 1, 100 from dataset 2, and 100 from dataset 3.
When evaluating the results, we consider a prediction to be correct if there is a defect located inside the generated bounding box. Based on this criterion, we evaluated our model with precision, recall, and F1-score. Overall, we achieved an overall F1-score of 0.6568, with 0.7902 for +1/2 defects and 0.3960 for -1/2 defects. In comparison, the traditional image-processing defect detection algorithm obtains an overall F1-score of 0.6505, with 0.7323 and 0.4956 for +1/2 and -1/2 defects, respectively. This shows that our method currently performs as well as the traditional method overall. As observed, the detection results for -1/2 defects are particularly poor. We believe two factors explain the weak detection results for -1/2 defects:
• -1/2 defects are more complicated in geometric configuration. The fact that the traditional method also performs worse for -1/2 defects than for +1/2 ones supports the view that -1/2 defects are harder to detect.
• Unbalanced training dataset. Due to the topological constraint, each image is required to have 2 more +1/2 defects than -1/2 defects. Therefore, within the 8800 images that we labeled, there are 58536 +1/2 defects and 24518 -1/2 defects.
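As an illustration of the matching criterion described above, the sketch below counts a predicted box as correct if any ground-truth defect center falls inside it; the authors' exact matching protocol (e.g. one-to-one assignment) may differ:

```python
def inside(box, point):
    x0, y0, x1, y1 = box
    px, py = point
    return x0 <= px <= x1 and y0 <= py <= y1

def precision_recall_f1(pred_boxes, gt_points):
    """Precision/recall over predicted boxes and ground-truth centers; a
    prediction is correct if some ground-truth point lies inside it."""
    matched_preds = sum(any(inside(b, p) for p in gt_points) for b in pred_boxes)
    matched_gts = sum(any(inside(b, p) for b in pred_boxes) for p in gt_points)
    precision = matched_preds / max(len(pred_boxes), 1)
    recall = matched_gts / max(len(gt_points), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-9)
    return precision, recall, f1
```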
4.2 DETECTION RESULTS ACROSS DIFFERENT REGIONS
As shown in Figure 6, for both methods, the detection performance becomes worse as defects approach the boundary of the system. We believe three factors cause the imbalance of detection results in our method:
• Defects appear more frequently in the central regions than near the boundary. In the testing dataset, there are 1734, 3383, and 2750 +1/2 and -1/2 defects located in the outer, middle, and inner regions, respectively. Since the outer region is the largest in area but contains the fewest defects, the lower detection performance is to be expected.
• Nucleation and annihilation happen most often near the boundary. A nucleation or annihilation process is gradual, usually taking a few frames to transition from defect to non-defect or the other way around. In these frames, it is hard even for researchers to define the boundary between defect and non-defect.
• Defects are smaller in size near the boundary and have more varied configurations. Defects in the central region are usually well-defined and clear, while the defects near the boundary are usually small in size and vague in definition. Figure 7 shows examples of +1/2 defects located near the boundary.
4.3 DETECTION RATE
We evaluated the processing rates of YOLO and the traditional method with 1500 testing images. Table 3 shows the detection rates for the two methods. The traditional method is executed on a desktop equipped with RAM: 16GB; CPU: Intel Core i5-2400 @ 3.1 GHz; GPU: NVIDIA GeForce GTS 450. We perform our data processing, model training, and detection on a machine equipped with RAM: 128GB; CPU: 2×Intel(R) Xeon(R) CPU E5-2637 v3 @ 3.50GHz; GPU: NVIDIA TITAN X (GP102). Although our method runs on a machine with much stronger computational ability, our method's significant advantage in speed is still obvious.
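A minimal timing sketch for such a throughput comparison; `detect` is a placeholder for either detection method:

```python
import time

def frames_per_second(detect, images):
    """Average processing rate, in images per second, of `detect`."""
    start = time.perf_counter()
    for img in images:
        detect(img)
    return len(images) / (time.perf_counter() - start)
```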
5 ALGORITHM DISCUSSION
While YOLO is a state-of-the-art object detection algorithm for images, it is not designed specifically for video. In other words, YOLO is not able to make detection decisions based on information from previous and next frames. One set of data in the active nematics experiments is usually a video containing up to twenty thousand frames. Typically, defect positions do not change drastically between consecutive frames, and the active system as a whole moves relatively smoothly. Therefore, improving the design of YOLO to utilize the connection between frames can be expected to improve the detection results. This intuition is supported by research carried out by Kang et al. (2017), who developed "tubelets" with R-CNN to incorporate temporal and contextual information into the decision-making process. The method, named T-CNN, is a successful design which has boosted the detection results generated under the R-CNN framework. Therefore, as our next step, we will incorporate temporal and contextual information into YOLO's pipeline in a similar way in order to improve the detection results on our dataset.
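As one simple illustration of exploiting inter-frame continuity (this is our own sketch, not the T-CNN tubelet mechanism), detections could be filtered by requiring a same-class neighbor in the previous frame; a real implementation would also need to retain newly nucleated defects, e.g. by keeping high-confidence unmatched detections:

```python
import numpy as np

def temporally_consistent(dets_prev, dets_curr, max_dist=15.0):
    """Keep current-frame detections (cls, x, y, score) that have a
    same-class neighbor within `max_dist` pixels in the previous frame."""
    kept = []
    for cls, x, y, score in dets_curr:
        if any(d[0] == cls and np.hypot(d[1] - x, d[2] - y) <= max_dist
               for d in dets_prev):
            kept.append((cls, x, y, score))
    return kept
```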
6 CONCLUSION
Based on experimental results, our method has been shown to be state-of-the-art among defect detection algorithms. Our method's advantages include a significantly faster processing rate, a higher F1-score on detection results, and straightforward execution that can be operated by any researcher with basic Python skills.
We have shown the viability of deep learning's application in soft matter. Most experiments in the field of soft matter involve experimental data in the form of images or videos, and object detection is one of the most common tasks physicists face. We hope this paper can provide an effective alternative for physicists when they tackle similar tasks.
ACKNOWLEDGMENTS
Development of this defect detector was supported in part by the Brandeis MRSEC through grant NSF-MRSEC-1420382. | 1. What is the focus of the paper regarding the application of deep learning models?
2. What are the strengths of the proposed approach, particularly in its effectiveness and accuracy?
3. What are the weaknesses of the paper, especially regarding its technical contribution and experimental comparisons?
4. Do you have any concerns about the application of YOLO for defect detection tasks?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Summary:
This paper applies deep learning model YOLO to detect topological defects in 2D active nematics. Experimental results show that YOLO is robust and accurate, which outperforms traditional state-of-the-art defect detection methods significantly.
Pros:
+ Detecting defects in 2D active nematics is an important task to study.
+ YOLO is effective in object detection and shows good results for defect detection.
+ The experiment shows that YOLO appears to outperform traditional state-of-the-art defect detection methods.
Cons:
- The technical contribution seems insufficient. YOLO is a state-of-the-art object detection method and has been widely used. However, this paper directly applies YOLO to this task, and no variants have been specifically designed or modified for the defect detection task.
- The experiments may miss some details. For example, what is the traditional method used for comparison? What is the difference between traditional method and YOLO? The paper should provide some explanations and introductions.
- Since the training data set is imbalanced, does the proposed model utilize some strategy to overcome this problem?
- The detection rate comparison is not convincing. As shown in the experiments, the traditional method and YOLO are run on different machines; therefore, the detection rate comparison is not convincing.
- The paper contains some minor errors. For example, in Table 1 and Table 2, +1/2 defects should be -1/2. |
ICLR | Title
Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks
| 1. What is the focus of the paper, and how does it apply computer vision methods to identify topological defects in nematic liquid crystals?
2. What are the strengths and weaknesses of the proposed deep learning approach compared to traditional methods?
3. How does the paper investigate the limitations of the model, and what are the findings?
4. Is the venue of the paper appropriate, and what could be done to make it more relevant to machine learning researchers?
5. Are there any specific aspects of the paper that could be improved or expanded upon, such as analyzing the features learned by the architecture or discussing the physical insights gained from the bounding box size choice? | Review | Review
In this paper the authors apply methods developed in computer vision towards the identification of topological defects in nematic liquid crystals. Typically, defects are identified using a costly algorithm that is based on numerically computing the winding number at different locations in the image to identify defects. The authors demonstrate that a deep learning approach offers improvement to both the identification accuracy and rate at which defects can be identified. Finally, the authors do some work investigating the limitations of the model and show that breakdown occurs near the edge of the field of view of the microscope. They show that this also happens with a conventional approach.
Overall, this seemed like a nice application of machine learning to a subject that has received significant attention from soft matter community. The results appear to be carefully presented and the analysis seems solid. However, it does not seem to me that ICLR is a particularly appropriate venue for this work and it is unclear exactly what this paper adds to a discussion on machine learning. While there is nothing wrong with taking an existing architecture (YOLO) and showing that it can successfully be applied to another domain, it does limit the machine learning novelty. It also does not seem as though the authors pushed particularly hard in this direction. I would have been interested, for example, in seeing some analysis of the features learned by the architecture trained to classify defects appropriately.
I would encourage the authors to either submit this work to a journal closer to soft matter or to do some work to determine what insights and lessons might help machine learning researchers working on other applied projects. The closest I got from the paper was the discussion of bounding box sizes and subclassification in section 3. It would have been nice to see some work discussing the dependence on this choice and what physical insights one might be able to glean from it. |
ICLR | Title
Detecting Topological Defects in 2D Active Nematics Using Convolutional Neural Networks
Abstract
Active matter consists of active agents which transform energy extracted from surroundings into momentum, producing a variety of collective phenomena. A model, synthetic active system composed of microtubule polymers driven by protein motors spontaneously forms a liquid-crystalline nematic phase. Extensile stress created by the protein motors precipitates continuous buckling and folding of the microtubules creating motile topological defects and turbulent fluid flows. Defect motion is determined by the rheological properties of the material; however, these remain largely unquantified. Measuring defects dynamics can yield fundamental insights into active nematics, a class of materials that include bacterial films and animal cells. Current methods for defect detection lack robustness and precision, and require fine-tuning for datasets with different visual quality. In this study, we applied Deep Learning to train a defect detector to automatically analyze microscopy videos of the microtubule active nematic. Experimental results indicate that our method is robust and accurate. It is expected to significantly increase the amount of video data that can be processed.
1 INTRODUCTION
1.1 ACTIVE MATTER AND ACTIVE NEMATICS
Active materials encompass a broad range of systems that convert chemical energy into mechanical work. These systems self-organize and exhibit structure on time and length scales that exceed that of the constituent particles. One important class of active materials is active-nematics composed of anisotropic particles that exert extensile or contractile stresses on neighboring particles. Natural examples of active nematics include cultures of dividing E. coli and animal cells Doostmohammadi et al. (2016); Segerer et al. (2015); Duclos et al. (2014; 2017). In active nematics, a single particle does not impart a net force into the system and therefore does self-propel; however, the collective motion of many particles create non-trivial material flows, Marchetti et al. (2013). Understanding the fundamental physics of collective motion of nematics can yield insight wound healing, tumor growth, and bacterial film dynamics, Doostmohammadi et al. (2018).
1.2 2D CONFINED ACTIVE NEMATICS AND TOPOLOGICAL DEFECTS
In this work we study a model quasi-2D active nematic system composed of microtubules (MT) and motor proteins with tunable mechanical and dynamical properties developed by the Dogic group Henkin et al. (2014); DeCamp et al. (2015). In this system, a suspension of microtubules and motor proteins is sedimented to a surfactant-stabilized oil-water interface creating a dense, liquid-crystalline nematic phase characterized by local orientational order. Motor proteins drive neighboring MTs to slide anti-parallel to one another, creating extensile stresses.The extensile stress makes the MT nematic inherently unstable to bend fluctuations. Undulations in the alignment of the MTs therefore grow in time, eventually saturating into localized topological disclinations shown in figure 1. The defects are characterized by their winding number Kamien (2002). Unlike defects in passive liquid crystals, these defects create fluid flows. The comet-shaped +1/2 defects are selfpropelling while the three-fold symmetry of the -1/2 defects create active flows that result in to net translation of the structure; active flows are shown by the yellow arrows in figure 1.
The hydrodynamic interactions between multiple defects create chaotic, turbulent-like flows. By further confining the 2D nematic to microfluidic wells, these flows can be tamed into circulating vortices Woodhouse & Goldstein (2012); Norton et al. (2018). Defect dynamics play a key role in moderating the transition from bulk-like turbulence to regular circulation. Quantifying their dynamics will help develop a better understanding of the fluid dynamics governing the behavior of this complex fluid and perhaps give insights into generic features of motile defects in biological systems. This requires researchers to analyze a large volume of videos of the 2D confined active nematics.
1.3 TRADITIONAL DEFECT DETECTING ALGORITHM AND LIMITATIONS
Remarkably utilizing the defining feature of a topological defect – its winding number, researchers developed an image processing algorithm which first generates a direction field from each image and then calculate the winding number at regions across the whole image to search for singularities. This algorithm performs reasonably well in detecting both +1/2 and -1/2 defects while being limited by two factors: various visual quality and noises caused by imperfect experimental settings. Parameters in the algorithm need to be fine-tuned for datasets with different visual qualities which would require huge amount of human labor as well as knowledge of image processing. The traditional algorithm also suffers from noises in the data including overexposure in some region, slightly unfocused image, materials exceeding above the 2D confinement plane, etc., which tend to ruin the direction fields extracted from the images resulting poor detecting results. Aside from precision, this algorithm involves intensive calculation when processing one image. Fine-tuning of parameters and slow processing speed greatly limits the efficiency of data analysis in the experiments, while the algorithm’s lack of robustness compromising the detecting results.
1.4 PROPOSAL OF SOLUTION
To address the problems existing in the traditional defect detecting algorithm, we propose to apply deep learning, specifically deep convolutional neural network to replace the above defect detecting algorithm that is based on traditional image processing techniques. Our tasks include:
• Label the positions of defects in representative experimental data. • Train a YOLO network with bounding boxes generated according to the positions of the
labelled defects.
• Test the performance of the trained defect detection model on experimental data with various visual qualities and sizes
The changing visual quality, which greatly challenges the the traditional algorithm, improves the performance of our method. Adding more defects in various lighting conditions and image resolutions improves both robustness and precision of our algorithm.
Apart from the visual quality, the diversity of the defects configuration causes troubles in traditional detecting algorithm. However, with defects in different shapes, sizes, configuration, and sometimes
occlusion, our algorithm is able to generalize its model better to fit in these various samples, compromising the effect of overfitting.
Another important advantage is our detection algorithms utilizes a unified, end-to-end neural network which enables efficient training and testing process.
2 PATTERNS IN DATASET
The labeled dataset consists of 9 typical videos from different experimental settings. In each experiment, active nematics are confined by a circular plate. Due to the constant input of energy, the active nematics continuously flows in a turbulent-like manner. Figure 2 and 3 shows a snapshot from each video. Specifically, the first testing dataset is from the same video as the first training dataset but are separated by 500 frames so that the first image of testing dataset has completely different appearance than the last image of the training dataset.
Figure 4 shows the moving trajectory of a typical +1/2 defect from nucleation to annihilation. In the first frame, a pair of +1/2 and -1/2 defects nucleates with +1/2 defect moving away from -1/2 defects. In the next six frames, the +1/2 defect rotates around the center while being pushed outwards by inner materials, encountering a -1/2 defects. In the last frame, the +1/2 and -1/2 defects annihilate concurrently, balancing the topological charge of the region.
3 METHODS
When dealing with defects detection, the most intuitive way is to find the characteristics of a defect which differ it from its neighborhood area and make use of these characteristics to locate the regions containing defects. The defining difference between a defect region and a non-defect one is the topological winding number. A region containing a defect has a winding number of +1/2 or -1/2 around its boundary while the winding number associated to a non-defect region is strictly 0. Previously, researchers made use of this definition and developed an effective algorithm by finding the direction field – the “grains” of an image, and calculating the winding number of different regions. As a result, this algorithm requires preprocessing to extract the direction field from the original image. Meanwhile, the performance of the detecting results relies heavily on the quality of the direction field extracted from the original image. Due to the imperfections and noises in the images, preprocessing is challenging. In some cases, the defects are occluded by overexposure area or are unfocused, leading to a problematic direction field near the defect. In addition, in the experimental video, materials such as hairs and dirts are observed which sometimes causes false detection.
To resolve these problems, we propose to treat this task as an object detection task where we consider defects as the objects we are trying to detect in an image. We train a convolutional neural network with images containing defects whose locations are provided. During detection, our model generates a bounding box around defects as detecting results. In this method, the problems in the previous paragraph, namely increasing the size of dataset from various visual quality, diversity of defects configuration and size, help our model generalize a defect’s features better. Compared to the traditional algorithm which requires fine-tuning for different datasets, our method could efficiently provide reliable detecting results on data across various visual quality.
3.1 DATA COLLECTION
To create a training dataset, we manually labeled the positions of positive defects and negative defects in 8800 images. In an image, we labeled a comet-like (+1/2) defect at the point where materials change orientation most sharply. For a trefoil-like (-1/2) defect, we labeled them at the center of the triangular material-devoid region. Figure 5a is an example of labeled image with +1/2 defects labeled by red stars and -1/2 defects labeled by green circles.
3.2 SUB-CLASSIFICATION FOR +1/2 DEFECTS
The experimental datasets we work on are challenging to perform object detection. One main challenge is the varying sizes of +1/2 defects. In Figure 5b, there are 4 +1/2 defects by definition but defect 1 looks drastically different from defect 2, 3, and 4. We call +1/2 defects similar to defect 1 “hollow” +1/2 defects. Assigning bounding boxes with the same size for all four defects in this image is problematic. A small bounding box cannot include enough local information to recognize defect 1 as +1/2 defect, while big bounding boxes might include features of other defects which confuse the model. For defect 3 and 4, big bounding boxes will include the -1/2 defects next to them. To resolve this issue, since hollow +1/2 defects are commonly seen in the video, we created a separate class for these defects when we labeled the data and assigned slightly bigger bounding boxes for this class.
After labeling all the data, we assigned a bounding box to each labeled position. Since more contextual information is needed to detect hollow +1/2 defect than to detect normal +1/2 or -1/2 defects, we assigned the sizes of the bounding boxes with a ratio of 3:3:4 for +1/2, -1/2, hollow +1/2, respectively.
3.3 IMAGE AUGMENTATION
Aside from normal image augmentation such as jitter and scaling, we added a random flipping and random rotation to the training images. Since the region of a defect is anisotropic, rotating the image effectively increase the robustness of the model.
3.4 COMPUTER VISION PIPELINE
In recent years, after the wide application of convolutional neural network in computer vision, the state-of-the-art object detection algorithm can be divided into two main categories. In the first category, input images are first processed by a method named selective search Uijlings et al. (2013) which generates thousands of region proposals that are likely to contain an object. These region proposals along with the input images will be passed to a CNN which performs feature extractions and image re-sampling. The results will eventually be generated by a classifier. This category of algorithms is the first to combine a selective search method with deep CNN where selective search perform the task of object localization and CNN performs the task of classification. RCNN series (including R-CNN Girshick et al. (2014), Fast R-CNN Girshick (2015), and Faster R-CNN Ren et al. (2015)) belongs to this catogary. The second category is YOLO Redmon et al. (2016), which implements an end-to-end deep neural network that performs classification and localization all at once. The input images are first separated by 13 by 13 grid cells where each grid cell is responsible for detecting the object falling into it. The output of the network is forced to be a tensor which includes the coordinates of bounding boxes and a confidence value. YOLO outperforms its counterpart in speed but lack in average precision.
In this project, we chose to implement YOLO because it is fast and easy to optimize due to its endto-end design. While implementing the YOLO algorithm, we mainly used the original setting in its paper except for some changes in the pipeline:
• Changed the number of filter in the last convolutional layer to fit in our dataset. • Decreased the jitter value from 0.3 to 0.05 to avoid occlusion or missing of defects located
near the boundary
• Since our objects-defects are relatively smaller than objects in the PASCAL VOC dataset, we increased the non-object coefficient from 0.5 to 0.8 to increase the loss from confidence predictions for boxes that do not contain objects
• Took out the image augmentation regarding hue, exposure, and saturation since our images are binary
4 RESULTS
4.1 DETECTING RESULTS COMPARISON
Table 1 and 2 displays the evaluation results for +1/2 defects, -1/2 defects, and the combination of two classes for both traditional method and YOLO on three testing datasets separately and altogether. The evaluation results include precision, recall, and F1-score values. Total testing dataset contains 1700 images, with 1500 from dataset 1, 100 from dataset 2, and 100 from dataset 3.
When evaluating the results, we consider a prediction to be correct if there is a defect located inside the bounding box generated. Based on this criterion, we evaluated our model with precision, recall, and F1-score. Overall, we achieved an overall F1-score of 0.6568, 0.7902 for +1/2 defects, and 0.3960 for -1/2 defects. In comparison, the traditional image processing defect detecting algorithm obtains 0.6505 for overall F1-score and 0.7323, 0.4956 for +1/2 and -1/2 defects, respectively. This shows that our method is currently performing as good as the traditional method overall. As observed, the detecting results for -1/2 defects are particularly poor. For this matter, we believe two factors explain the week detecting results of -1/2 defects:
• -1/2 defects are more complicated in geometric configuration. The fact that the traditional method also performs worse on -1/2 defects than on +1/2 ones further suggests that -1/2 defects are harder to detect.
• Unbalanced training dataset. Due to the topological constraint, each image is required to have two more +1/2 defects than -1/2 defects. Therefore, within the 8800 images we labeled, there are 58536 +1/2 defects and 24518 -1/2 defects.
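The following sketch illustrates the matching criterion and metric computation described above; the box and point formats are our assumptions for illustration.

# A predicted box counts as a true positive if a ground-truth defect center
# lies inside it; each ground-truth defect can be matched at most once.
def evaluate(pred_boxes, gt_points):
    # pred_boxes: list of (x1, y1, x2, y2); gt_points: list of (x, y) defect centers
    matched, tp = set(), 0
    for (x1, y1, x2, y2) in pred_boxes:
        for i, (x, y) in enumerate(gt_points):
            if i not in matched and x1 <= x <= x2 and y1 <= y <= y2:
                matched.add(i)
                tp += 1
                break
    fp = len(pred_boxes) - tp
    fn = len(gt_points) - len(matched)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1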
4.2 DETECTION RESULTS ACROSS DIFFERENT REGIONS
As shown in Figure 6, for both methods the detection performance degrades as the defects approach the boundary of the system. We believe three factors cause the imbalance of detection results in our method:
• Defects appear more frequently in the central regions than near the boundary. In the testing dataset, there are 1734, 3383, and 2750 +1/2 and -1/2 defects located in the outer, middle, and inner regions, respectively. Since the outer region is the largest in area but contains the fewest defects, the lower detection performance there is to be expected.
• Nucleation and annihilation happen most often near the boundary. A nucleation or annihilation process is gradual, usually taking a few frames to transition from defect to non-defect or the other way around. In these frames, it is hard even for researchers to draw the line between defect and non-defect.
• Defects near the boundary are smaller in size and have more varied configurations. Defects in the central region are usually well defined and clear, while defects near the boundary are usually small in size and vague in definition. Figure 7 shows examples of +1/2 defects located near the boundary.
4.3 DETECTION RATE
We evaluated the processing rate of YOLO and the traditional method on 1500 testing images. Table 3 shows the detection rates for the two methods. The traditional method is executed on a desktop with 16 GB of RAM, an Intel Core i5-2400 CPU @ 3.1 GHz, and an NVIDIA GeForce GTS 450 GPU. We perform our data processing, model training, and detection on a machine with 128 GB of RAM, 2× Intel Xeon E5-2637 v3 CPUs @ 3.50 GHz, and an NVIDIA TITAN X (GP102) GPU. Although our method runs on a machine with much greater computational power, its significant advantage in speed is still obvious.
5 ALGORITHM DISCUSSION
While YOLO is a state-of-the-art object detection algorithm for images, it is not designed specifically for video. In other words, YOLO cannot make detection decisions based on information from previous and subsequent frames. One set of data in the active nematics experiments is usually a video containing up to twenty thousand frames. Typically, defect positions do not change drastically between consecutive frames, and the active system as a whole moves relatively smoothly. Therefore, improving the design of YOLO to exploit the connection between frames can be expected to improve the detection results. This intuition is supported by research carried out by Kang et al. (2017), who developed "tubelets" with R-CNN to incorporate temporal and contextual information into the decision-making process. The resulting method, named T-CNN, is a successful design that has boosted the detection results generated under the R-CNN framework. Therefore, as our next step, we will incorporate temporal and contextual information into YOLO's pipeline in a similar way in order to improve the detection results on our dataset.
6 CONCLUSION
Based on experimental results, our method has been shown to be state-of-the-art among defect detection algorithms. Our method's advantages include a significantly faster processing rate, a higher F1-score on detection results, and a straightforward execution that can be operated by any researcher with basic Python skills.
We have shown the viability of deep learning's application in soft matter. Most experiments in the field of soft matter involve experimental data in the form of images or videos, and object detection is one of the most common tasks physicists face. We hope this paper can provide an effective alternative for physicists when they tackle similar tasks.
ACKNOWLEDGMENTS
Development of this defect detector was supported in part by the Brandeis MRSEC through grant NSF-MRSEC-1420382.
1. What is the focus of the paper in terms of its scientific problem and contribution?
2. Is the paper's contribution primarily related to machine learning or image processing?
3. What is the novelty of the paper regarding its application and use of a well-known neural model?
4. How convincing are the experiments and validation methods used in the paper?
5. Does the paper provide sufficient information about the traditional method used for comparison?
6. How significant is the performance gain achieved by YOLO compared to the traditional method?
7. Is the paper's contribution considered minor or insignificant regarding its impact on the field?
8. Are there any concerns regarding the double-blind policy in the paper's acknowledgment section?
Review
This review will unfortunately be very short because I am afraid there is not much to say about this well written paper, which seems to have been sent to the wrong conference. The scientific problem is interesting, namely the detection of topological artifacts in images showing biological phenomena (which I don’t know much about). The relevant literature here is basically literature from this field, which is not machine learning and not even image processing. The contribution of the paper, in terms of machine learning, is to apply a well known neural model (YOLO) to detect bounding boxes of objects in images, which are very specific. The contribution here does not lie in machine learning, I am afraid.
This is thus a purely experimental paper on a single application, namely object detection in specific images. Unfortunately the experiments are not convincing. The results are validated against a “traditional method”, which has never been cited, so we do not know what it is.
The performance gain obtained with YOLO seems to be minor, although the difference in time complexity is quite enormous (to the advantage of YOLO).
The contribution is thus minor and for me does not justify publication at ICLR.
The grant number is mentioned in the acknowledgments, which seems to violate the double-blind policy.
ICLR
Title
Learning to Write by Learning the Objective
Abstract
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
1 INTRODUCTION
Recurrent Neural Network (RNN) based language models such as Long Short-Term Memory Networks (LSTMs) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRUs) (Cho et al., 2014) have achieved enormous success across a variety of language tasks due to their ability to learn fluency patterns in natural language (Jozefowicz et al., 2016; Kim et al., 2016; Mikolov et al., 2010). When used as a generator, however, the quality of language generated from RNNs deviates drastically from that of human language. While fluent, RNN-produced language displays several degenerate characteristics, favoring generic and contentless output that tends to be repetitive and self-contradictory. These issues are especially prominent when RNNs are used for open-ended, long-form text generation, as illustrated in Figure 1.
RNNs model the conditional probability P (xt|x1, ..., xt−1) of generating the next word xt given all previous words observed or generated. In theory, this conditional model should be able to learn all crucial aspects of human language production, for example, that we don’t normally repeat the same content over and over. In practice, however, the learned conditional probability model often assigns higher probability to a repetitive, overly generic sentence than to higher quality sentences, as shown in Figure 1. We postulate that this is in part because the network architectures of RNN variants do not provide a strong enough inductive bias for the model to learn the complex communication goals pursued in human writing. In addition, long-term context easily gets lost as it is explained away in the presence of more immediately relevant short-term context (Yu et al., 2017), and as gradients diminish over a long sequence (Pascanu et al., 2013). Consequently, RNNs acquire relatively shallow and myopic patterns, which tend to only take advantage of a small fraction of the training set vocabulary (Kiddon et al., 2016). RNNs are thus unable to generate language that matches the complexity and coherence of human generated text.
Several methods in the literature attempt to mitigate these issues. Overly simple and generic generation can be improved by using a diversity-boosting objective function (Shao et al., 2017; Vijayakumar et al., 2016). Repetitive generation can be reduced by prohibiting recurrence of the same trigrams as a hard rule (Paulus et al., 2017). Although such constraints form a partial solution, they
All in all, I would highly recommend this hotel to anyone who wants to be in the heart of the action, and want to be in the heart of the action. If you want to be in the heart of the action, this is not the place for you. However, If you want to be in the middle of the action, this is the place to be.
Figure 1: A Trip Advisor review generated by an RNN based LM trained on over a million reviews.
are generally too coarse and both penalize good behavior (e.g. reuse of an idiom) and fail to capture more complex bad behavior (e.g. paraphrasing of the same content again and again).
Hand tailoring rules is both time consuming and unstable across different generative scenarios, so we instead propose a general learning framework to construct a better decoding objective. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address limitations of the base RNN generator. Our learning framework therefore generalizes over various existing modifications to the decoding objective. Our approach learns to overcome the particular limitations of the RNN generator directly by incorporating language generated from RNNs as negative samples to discriminatively train several companion models, each specializing in a different aspect of Grice’s Maxims of communication (Grice et al. (1975)).
Empirical results demonstrate that our learning framework is highly effective in converting a generic RNN language model into a substantially stronger generator. Human evaluation confirms that language generated by our model is preferred over that of competitive baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
2 BACKGROUND: GRICE’S MAXIMS
We motivate our learning framework using Grice’s Maxims of communication (Grice et al., 1975):
1. Quantity: Make your contribution as informative as required, and no more than necessary. RNN generations tend to violate this maxim as they are often overly short and generic, and therefore less informative than desired. When encouraged to generate longer text, RNNs easily repeat themselves, which also works against conveying the right amount of information. This observation motivates the length and repetition models in §3.2.1 and §3.2.2.
2. Quality: Do not say what you believe to be false. When used for generation with long term context, RNNs often generate text that is self-contradictory. We propose an entailment model to address this problem in §3.2.3.
3. Relation: Be relevant. RNNs used for long generation can start digressing as the long-term context gets easily washed out. We address this problem by proposing two different relevance models in §3.2.4 and §3.2.5.
4. Manner: Avoid obscurity of expression. Avoid ambiguity. Be brief. Be orderly. As RNN generation favors highly probable but generic words and phrases, it lacks adequate style and specificity. We address this issue with the lexical style model in §3.2.6.
3 THE LEARNING FRAMEWORK
We propose a general learning framework for conditional language generation of sequence y given context x. The decoding objective for generation takes the general form:
$f(x, y) = \log P_{\text{lm}}(y \mid x) + \sum_k \lambda_k s_k(x, y)$.  (1)
This objective combines the RNN language model probability Plm (§3.1) with a set of additional scores sk(x,y) produced by discriminatively trained communication models (§3.2) and weighted
with learned mixture coefficients λk (§3.3). This corresponds to a Product of Experts (PoE) model (Hinton, 2006) when the scores sk are log probabilities. Further model and hyperparameter details are given in appendix B.
Generation is performed using beam search (§3.4), scoring partially generated candidate generations y1:i at each time step i. The RNN language model decomposes into per-word probabilities using the chain rule. However, in order to allow for more expressivity over long range context we do not require the discriminative model scores to factorize over the elements of y, thereby addressing a key limitation of RNNs. More specifically, we use an estimated score s′k(x,y1:i) that can be computed for any prefix of y = y1:n to approximate the objective during beam search, such that s′k(x,y1:n) = sk(x,y). The scoring functions are trained on prefixes of y (except where stated otherwise) to simulate their application to rank partial continuations during beam search.
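As a minimal sketch of Equation 1 (the function names and example numbers are illustrative assumptions, not the authors' implementation), the combined objective scores a (partial) hypothesis as follows:

import math

def combined_score(log_p_lm, expert_scores, lambdas):
    # log_p_lm: log-probability of the hypothesis under the base language model
    # expert_scores: dict of scores s_k from the communication models
    # lambdas: dict of learned mixture weights lambda_k
    return log_p_lm + sum(lambdas[k] * expert_scores[k] for k in expert_scores)

# Illustrative usage with made-up numbers:
score = combined_score(
    log_p_lm=-12.3,
    expert_scores={"rep": math.log(0.9), "rel": math.log(0.7)},
    lambdas={"rep": 1.5, "rel": 0.8},
)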
3.1 BASE LANGUAGE MODEL
We train an RNN language model to estimate
$\log P_{\text{lm}}(s) = \sum_i \log P_{\text{lm}}(s_i \mid s_{1:i-1})$.  (2)
The language model treats the context x and the continuation y as a single sequence s, in contrast to sequence to sequence models which distinguish between them.
3.2 COMPOSITE COMMUNICATION MODELS
Next we introduce a set of models motivated by Grice’s Maxims of communication. Each model is trained to discriminate between good and bad generation using a ranking objective, so that its model scores have the effect of re-ranking in the decoder. We vary the model parameterization and training examples to guide each model to focus on different aspects of Grice’s Maxims. The classifier scores are interpreted as classification probabilities and added to the objective function as log probabilities.
Let $D = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ be the set of training examples for conditional generation, and let $D_x$ denote all contexts and $D_y$ all continuations.
In all models the first layer embeds words into 300-dimensional vectors initialized with GloVe (Pennington et al., 2014) embeddings, which are fine-tuned during training unless stated otherwise. The dimensions of hidden layers are also 300. Let e(w) be the word embedding of word w.
3.2.1 LENGTH MODEL
RNNs tend to bias toward shorter generation even with a highly expressive network structure with attention, so length normalization remains a common practice (Wu et al., 2016). We use a geometrically decaying length reward. We tune the initial value and decay rate on the final systems and use the same values for all systems, with the initial value and decay rate being 1 and 0.9, respectively. The score at time step $i$ (for a partially generated completion) is thus
$s_{\text{len}}(y_{1:i}) = 0.9^i$.  (3)
3.2.2 REPETITION MODEL
The goal of this model is to learn to distinguish between RNN-generated and gold continuations by exploiting our empirical observation that repetitions are more common in completions generated by the RNN language model. We quantify the claim that samples from the base RNN are an appropriate source of negative examples for repetition detection on our datasets in §4.1. However, we do not want to completely discourage repetition, as words sometimes do recur in English. Thus we model natural levels of repetition as follows.
First a score di is computed for each token in the continuation y based on pairwise cosine similarity of word embeddings within a fixed window of the previous k words:
$d_i = \max_{j = i-k, \ldots, i-1} \text{CosSim}(e(y_i), e(y_j))$.  (4)
This score can be interpreted as the repetition “strength” of the sequence at position i, where di = 1 for all positions at which a word is repeated within k + 1 words of its previous use.
The score of the entire continuation is then defined as
$s_{\text{rep}}(y) = \sigma(w_r^\top \text{RNN}(d))$,  (5)
where RNN(d) is the final state of a unidirectional RNN run over the similarity scores and $w_r$ is a learned vector. The model is trained with a binary cross-entropy criterion, with gold continuations as positive examples and samples from the base RNN as negative examples:
$L_{\text{rep}} = \sum_{y \in D_y} \log s_{\text{rep}}(y) + \sum_{y' \sim P_{\text{lm}}} \log(1 - s_{\text{rep}}(y'))$.  (6)
Word embeddings are kept fixed during training for this model.
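A sketch of the repetition signal in Equation 4 is given below; the default window size is an assumption, as the paper does not specify k.

import numpy as np

def repetition_signal(emb, k=4):
    # emb: array of shape (T, d), one row per token embedding
    norms = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-8)
    d = np.zeros(len(emb))
    for i in range(1, len(emb)):
        window = norms[max(0, i - k):i]
        d[i] = float(np.max(window @ norms[i]))  # d_i = 1 at an exact repetition
    return d  # fed to a small RNN + sigmoid to produce s_rep (Equation 5)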
3.2.3 ENTAILMENT MODEL
The goal of the entailment model is to guide the generator to neither contradict its own past generation (the maxim of Quality) nor state something that readily follows from the context (the maxim of Quantity). The latter case is driven by the RNN's habit of paraphrasing itself (alongside its tendency for more direct repetitions, which is captured by the repetition model). We train a classifier using the MultiSNLI dataset (Williams et al., 2017) that takes two sentences a and b as input and predicts the relation between them as either contradiction, entailment, or neutral. We use the neutral class probability of the sentence pair in order to discourage both contradiction and entailment.
We use a continuous bag-of-words model similar to previously reported NLI baselines (Mou et al., 2016). Sentence representations are obtained by summing word embeddings (which are fine-tuned during training), such that $r(a) = \sum_i e(a_i)$ and $r(b) = \sum_i e(b_i)$. An MLP classifier with a single tanh hidden layer takes as input the concatenation of the two sentence embeddings, along with their element-wise difference and multiplication; the output is a three-way softmax. The log probability of the neutral class is denoted t(a, b). The classifier obtains 63% validation accuracy.
In contrast to our other communication models, this classifier cannot be applied directly to the full context and continuation sequences it is scoring. Instead every completed sentence in the continuation should be scored against all preceding sentences in both the context and continuation. Let S(y1:i) be the set of complete sentences in y1:i, and S−1(y1:i) the last complete sentence. We compute the entailment score of S−1(y1:i) against all preceding sentences in x and y, and use the score of the sentence-pair for which we have the least confidence that entailment is neutral:
$s_{\text{entail}}(x, y) = \min_{a \in S(x) \cup S_{:-1}(y)} t(a, S_{-1}(y))$,  (7)
where $S_{:-1}(y)$ denotes all complete sentences of y except the last.
In contrast to our other models, the score this model returns for any given continuation only corresponds to a subsequence of the continuation, but as we will see below (§3.4) due to the way generation is performed with beam search this score will not be accumulated across sentences.
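The scoring logic of Equation 7 can be sketched as follows; neutral_log_prob stands in for the trained NLI classifier and is an assumption of this illustration.

def entailment_score(context_sents, continuation_sents, neutral_log_prob):
    last = continuation_sents[-1]
    preceding = context_sents + continuation_sents[:-1]
    if not preceding:  # first sentence: the paper uses an initial constant value
        return 0.0
    # min over pairs: the pair we are least confident is "neutral" dominates
    return min(neutral_log_prob(a, last) for a in preceding)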
3.2.4 RELEVANCE MODEL
The purpose of the relevance model is to predict whether the content of a candidate continuation is relevant to the given context. We train the model to distinguish between true continuations and random continuations sampled from other (human-written) endings in the corpus, conditioned on the given context. A single convolutional layer is applied to both the context and candidate embedding sequences. The scoring function is defined as
$s_{\text{rel}} = w_l^\top (\text{maxpool}(\text{conv}(e(x))) \circ \text{maxpool}(\text{conv}(e(y))))$,  (8)
where 1D maxpooling is performed over each sequence to obtain a vector representing its most important semantic dimensions. Element-wise multiplication of the context and continuation vectors will amplify similarities between the two.
We optimize the following ranking log likelihood:
$L_{\text{rel}} = \sum_{j=1}^{k} \sum_{(x, y_t) \in D,\, y_r \sim D_y} \log \sigma(s_{\text{rel}}(x, y_t) - s_{\text{rel}}(x, y_r))$,  (9)
i.e., the probability of the true ending yt receiving a higher score than the random ending yr. For each context and true continuation we extract k = 5 randomly selected endings as negative training examples. The model ranks true endings higher than randomly selected ones with 85% accuracy on the validation set. The trained scores do not correspond to probabilities, so we scale them with the logistic (sigmoid) function for compatibility with the other scoring modules.
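An illustrative PyTorch sketch of the scorer in Equation 8 follows; the 300-dimensional embeddings and filter size 3 match the paper, while the remaining details are assumptions.

import torch.nn as nn

class RelevanceScorer(nn.Module):
    def __init__(self, emb_dim=300):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, emb_dim, kernel_size=3, stride=1, padding=1)
        self.w = nn.Linear(emb_dim, 1, bias=False)

    def encode(self, emb):                  # emb: (batch, seq, emb_dim)
        h = self.conv(emb.transpose(1, 2))  # (batch, emb_dim, seq)
        return h.max(dim=2).values          # 1D max-pooling over time

    def forward(self, ctx_emb, cont_emb):
        # element-wise product amplifies shared semantic dimensions
        return self.w(self.encode(ctx_emb) * self.encode(cont_emb)).squeeze(-1)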
3.2.5 WORKING VOCABULARY MODEL
Given even a fragment of context (e.g. “We were excited about our beach getaway...”) certain topics become more relevant (e.g. “swimming”, “relax”, “romantic”). To capture this intuition, we predict a centroid in the embedding space from the context, which describes a neighborhood of the embedding space we expect the continuation to intersect. The score is computed as
$s_{\text{voc}}(x, y_{1:n}) = \frac{\text{RNN}(x)}{\|\text{RNN}(x)\|_2} \cdot \frac{1}{n} \sum_{i=1}^{n} \frac{e(y_i)}{\|e(y_i)\|_2}$  (10)
where RNN(x) is the final state of an RNN trained end-to-end with svoc using a discriminative loss:
$L_{\text{voc}} = \sum_{(x, y_t) \in D,\, y_r \sim D_y} \log(2 + s_{\text{voc}}(x, y_t) - s_{\text{voc}}(x, y_r))$.  (11)
During decoding, this score is combined with the language model before sampling next word candidates, while the other models are used to rescore candidates (see §3.4). The reason for this is that this model improves diversity in the next word distribution, while encouraging continuation of topics in the context.
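The sketch below follows our reading of Equation 10 as a dot product between the normalized context encoding and the mean normalized continuation embedding; since the extracted equation is ambiguous, this form is an assumption.

import numpy as np

def vocab_score(ctx_state, cont_emb):
    # ctx_state: (d,) final RNN state over x; cont_emb: (n, d) embeddings of y
    c = ctx_state / (np.linalg.norm(ctx_state) + 1e-8)
    e = cont_emb / (np.linalg.norm(cont_emb, axis=1, keepdims=True) + 1e-8)
    return float(c @ e.mean(axis=0))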
3.2.6 LEXICAL STYLE MODEL
The goal of the lexical style model is to learn stylistic aspects of desirable writing based on observed lexical distributions. The scoring function is defined as
$s_{\text{bow}}(y) = w_s^\top \text{maxpool}(e(y))$.  (12)
The model is trained with a similar ranking criteria as the relevance model, except that the negative examples are sampled from the language model:
$L_{\text{bow}} = \sum_{y_t \in D_y,\, y_s \sim P_{\text{lm}}} \log \sigma(s_{\text{bow}}(y_t) - s_{\text{bow}}(y_s))$.  (13)
3.3 MIXTURE WEIGHT LEARNING
Once all the communication models have been trained, we learn the combined objective. In particular we learn the weight coefficients λk in equation 1 to linearly combine the scoring functions, using a discriminative objective
$L_{\text{mix}} = \sum_{(x, y) \in D} \log \sigma(f(x, y) - f(x, A(x)))$,  (14)
where A is the inference algorithm for beam search decoding. The objective learns to rank the gold continuation above the continuation predicted by the current model. The full model is therefore trained discriminatively to classify sequences, and the same model is used to generate the sequences.
For each training iteration inference is performed based on the current values of λ. This has the effect that the objective function changes dynamically during training: As the current samples from the model are used to update the mixture weights, it creates its own learning signal by applying the generative model discriminatively.
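A hedged sketch of one mixture-weight update under Equation 14 is shown below; decode and scores are stand-ins for the beam-search decoder and the trained scoring models, and the optimization details are assumptions.

import torch
import torch.nn.functional as F

def mixture_step(lambdas, optimizer, x, y_gold, decode, scores):
    y_model = decode(x, lambdas)  # A(x): best continuation under current weights
    def f(y):
        return scores["lm"](x, y) + sum(l * scores[k](x, y) for k, l in lambdas.items())
    loss = -F.logsigmoid(f(y_gold) - f(y_model))  # rank gold above model output
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()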
Data: context x = x1 · · · xn, beam size k, mixture weights λ
Result: best continuation
best = None
beam = [x]
for step = 0; step < max_steps; step = step + 1 do
    next_beam = []
    for candidate in beam do
        expand(next_beam, next_k(candidate, λ))
        if lookahead_score(candidate.append(termination_token)) > best.score then
            best = candidate.append(termination_token)
        end
    end
    for candidate in next_beam do
        candidate.score += f_λ(candidate)  . score with models
    end
    beam = sort(next_beam, key=score)[:k]  . select top k candidates
end
if learning then
    update λ with gradient descent by comparing the beam against the gold continuation
end
return best
Algorithm 1: Inference/Learning in the Learning to Write framework.
3.4 BEAM SEARCH
Due to the limitations of greedy decoding and the fact that our scoring functions do not decompose across time steps, we perform generation with a beam search procedure, shown in Algorithm 1. The naive approach would be to perform beam search based only on the language model, and then rescore the k best candidate completions with our full model. We found that this approach leads to limited diversity in the beam and therefore cannot exploit the strengths of the full model.
Therefore we rescore the current hypotheses in the beam with the full (partial) objective function and keep the k best candidate sequences after expanding to the next word. To further increase diversity we sample k candidate next words from the distribution when expanding a hypothesis, instead of obtaining the k highest scoring next words. In our experiments we generate text using a beam size of 8.
Two modules are handled differently during beam search: First, the working vocabulary score is integrated before sampling next word candidates, i.e. inside the next k call. Second, the entailment score of a candidate is recomputed only when sentence-terminating punctuation is generated. Otherwise the current entailment score is re-used. Due to the nature of beam-search the entailment score is not accumulated across sentences; rather we assume that the effect of the last sentence score will already be reflected in the content of the beam when the next sentence is scored. While the first sentence is generated, s′entail takes an initial constant value to avoid bias towards incomplete sentences.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Corpora We use two corpora for evaluation. The first is TripAdvisor reviews.1 We use only reviews that have at least 5 sentences, using the first 4 as the context and the remainder as the ending to be generated. There are on average 11 sentences per review. We use the first 1M reviews for training. The second is the ROCStory corpus (Mostafazadeh et al., 2016), which is a collection of crowdsourced commonsense short stories, consisting of five sentences each (98k stories in the training set). We use this corpus to train our model to predict a coherent final sentence given the first four.
For the (larger) TripAdvisor corpus, we train the language model on that corpus alone. For the ROCStory corpus, we pretrain the model for 420M tokens on the Toronto Books corpus2 before training on the ROCStory text.
1 TripAdvisor corpus: http://times.cs.uiuc.edu/~wang296/Data/
2 Toronto Books corpus: http://yknzhu.wixsite.com/mbweb
Models We use the language model from Section 3.1 (a 2-layer RNN with 1024 GRU cells per layer) as our reference baseline. We include three variants of our model: L2W (NO META-LEARN) uses a uniform mixture of scores for the learned sub-objectives (no meta-weight learning); L2W (NO WK. VOCAB) omits the working vocabulary model (Section 3.2.5) but uses mixture weight learning for the other components; and L2W (FULL) is our full model.
To analyze the suitability of the reference endings and base language model samples to train the repetition model, we define a repetition ratio as the ratio of total words to unique words. We found that ROCStory endings generated by the language model have a higher repetition ratio than the reference endings (1.12 vs. 1.02). On TripAdvisor the difference is more pronounced, with a ratio of 1.48 for the language model against 1.17 for the references.
4.2 EVALUATION SETUP
Previous work has reported that automatic measures such as BLEU, ROUGE, and Meteor, do not lead to meaningful evaluation when used for long or creative text generation where there can be high variance among correct generation output (Wiseman et al., 2017b; Vedantam et al., 2015). For open-ended generation tasks such as our own, human evaluation is the only reliable measure (Li et al., 2016b; Wiseman et al., 2017b). However, we also report the automatic measures to echo the previous observation regarding the mismatch between automatic and human evaluation. Additionally we provide multiple generation samples that give insights into the characteristics of different models.
We pose the evaluation of our model as the task of generating an appropriate ending given an initial context of n sentences. For the purposes of automatic evaluation, the ending is compared against a gold reference ending that was written by a human (drawn from the original corpus). For human evaluation, the ending is presented to a human, who assesses the text according to several criteria, which are closely inspired by Grice’s Maxims:
1. Quantity: Does the generation avoid repetition?
2. Quality: Does the generation contradict itself (or the initial context)?
3. Relation: Does the generation appropriately continue the initial context?
4. Manner: Does the generation use the same style as the initial context?
Finally, a Turing test question is presented to each judge: was the ending written by a human?
For both automatic and human evaluation, we select 1000 passages from the test set of each corpus and generate endings using all models, using the initial n = 4 sentences as context. Automatic
evaluation scores each generated ending against its reference ending. To conduct human evaluation, we present the initial context with the generation from each of the models individually to workers on Mechanical Turk, who score them using the above criteria. As a control, we also ask workers to score the original (human-written) reference endings for each selected passage.
5 RESULTS AND ANALYSIS
5.1 QUANTITATIVE ANALYSIS
Results for both automatic and human evaluation metrics are presented for both corpora in Table 2.
For the TripAdvisor corpus, the language model baseline does best on all of the automated metrics. However, on all human metrics the L2W (NO WK. VOCAB) model scores higher, and achieves nearly twice the percentage of passed Turing tests (75% compared to 39%). It scores even better than the L2W (FULL) model. Manual inspection reveals that for the TripAdvisor task, the topical focus and lexical specificity given by vocabulary scoring actually lead to slightly worse model behavior. In TripAdvisor reviews, many possible sentences could logically follow a review's context. A well-behaved but generic language generator is better able to leave judges unsurprised, if also uninformed. The full model brings additional focus to the generations, but at the cost of a reduction in overall coherence.
In the ROCStory generation task the language model is more competitive with L2W (FULL). The raw language model is able to take advantage of the simpler data inherent in this task: ROCStories are shorter (the endings are always a single sentence) and have a smaller vocabulary. However, the full L2W model still performs best on all human metrics. The ROCStory corpus presents specific contexts that require a narrower set of responses to maintain coherence. The L2W (FULL) model takes advantage of reranking by matching the topic of the context, yielding a higher ‘Relation’ score and more Turing test passes overall.
The very low BLEU scores observed in our results on the TripAdvisor domain are an artifact of the BLEU metric's length penalty. The average length of reference completions is 12 sentences, which is much longer than the average length of endings generated by our Learning to Write models. The length penalty therefore drives down the scores, despite our observation that there is still a significant amount of word and phrase overlap. The completions generated by the base language model are longer on average (as it tends to repeat itself over and over) and therefore do not suffer from this problem.
5.2 QUALITATIVE ANALYSIS
Learning to Write (L2W) generations are more topical and cohere better with the context. Table 1 shows that both the L2W system and the classic RNN start similarly, commenting on the hotel
staff and the room. L2W is able to condense the same content into fewer words, following the context’s style. Beginning with the third sentence, L2W and the LM diverge more significantly. The LM makes a generic comment: “The location is perfect,” while L2W goes on to list specific items it enjoyed at breakfast, which was mentioned in the context. Furthermore, the LM becomes “stuck” repeating itself—a common problem for RNNs in general—whereas the L2W system finds a natural ending.
The L2W models do not fix every degenerate characteristic of RNNs. The TripAdvisor L2W generation in Table 1, while coherent and diverse, leaves room for improvement. The need to relate to topical phrases can override fluency, as with the long list of “food, fresh fruit, juices, coffee, tea, hot chocolate.” Furthermore phrases such as “friendly and helpful” and “clean and comfortable” occur in the majority of generations. These phrases, while common, tend to have semantics that are expressed in many different surface forms in real writing (e.g., “the staff were kind and quick to respond”). Appendix A presents sample generations from all models on both corpora for further inspection.
6 RELATED WORK
Generation with Long-term Context While relatively less studied, there have been several attempts at RNN-based paragraph generation for multiple domains including image captions (Krause et al., 2017), product reviews (Lipton et al., 2015; Dong et al., 2017), sports reports (Wiseman et al., 2017a), and recipes (Kiddon et al., 2016). Most of these approaches are based on sequence-to-sequence (seq-to-seq) type architectures. One effective way to avoid the degenerate characteristics of RNNs under seq-to-seq is to augment the input with sufficient details and structure to guide the output generation through attention and a copying mechanism. However, for many NLP generation tasks, it is not easy to obtain the large-scale training data required to support a robust sequence-to-sequence model. For example, a recent dataset for image-to-paragraph generation contains 14,575 training pairs (Krause et al., 2017), and state-of-the-art hierarchical models, while improving over strong baselines, still lead to generation that repeats, contradicts, and is generic.
Alternative Decoding Objectives The tendency of RNN generation to be short, bland, repetitive, and contradictory has been noted multiple times in the prior literature. A number of papers propose approaches that can be categorized as alternative decoding objectives (Shao et al., 2017). For example, Li et al. (2016a) propose a diversity-promoting objective that interpolates the conditional probability score with negative marginal or reverse conditional probabilities. Unlike the negative marginal term, which blindly penalizes generic generation, all our communication models are contextual in that they compare the context with continuation candidates. Incorporating the reverse conditional probability has also been proposed through the noisy channel models of Yu et al. (2017). The clear benefit of the reverse conditional probability model is that it can prevent the explaining-away problem of the long-term context. However, decoding with the reverse conditional probability is significantly more expensive, making it impractical for paragraph generation. In addition, neither of the above approaches allows for learning more expressive models than our work presents. The communication models presented in our work can be easily integrated into the existing beam search procedure, and each model is lightweight to compute.
Modified decoding objectives have long been common practice in the statistical machine translation literature (Koehn et al., 2003; Och, 2003; Watanabe et al., 2007; Chiang et al., 2009). Notably, this remains common practice with neural machine translation, even with highly expressive network structures trained on extremely large amounts of data (Wu et al., 2016). Inspired by all of the above approaches, our work presents a general learning framework together with a more comprehensive set of composite communication models. Unlike previous approaches, our composite models are trained to directly address the particular limitations of the base RNN models, by integrating RNN generation into the discriminative learning. This allows for customization of the decoding objective to better cope with the undesired behavior of the base language models.
Meta Learning There is a broad range of literature on meta learning, where the learning goal broadens the search space by aiming to learn model architectures, hyperparameters, and learning algorithms that are typically hand-designed or selected. Andrychowicz et al. (2016), for example, learn the learning rate; Zoph & Le (2016) train a neural network to generate neural network architectures; and Snoek et al. (2012) propose Bayesian optimization to learn hyperparameters that cannot be optimized with gradient descent. The learning framework we present here can be interpreted as a type of meta learning in a broad sense, in that there are multiple layers of learning: learning the base RNN language model, then learning a collection of composite communication models that are customized to the particular behavior of the base model, and finally the generator that learns to combine all sub-components.
7 CONCLUSION
Our work presents a unified learning framework that can learn to generate long, coherent text, overcoming the limitations of RNNs as text generation models. Our framework learns a decoding objective suitable for generation through a combination of sub-models that capture linguistically motivated qualities of good writing. Our work makes a unique contribution that complements existing literature on long text generation, which is predominantly based on seq-to-seq models with a large amount of in-domain training data; we demonstrate that the fluency of general RNN language models can be successfully guided to generate longer and more sensible text, which can be useful for domains where in-domain data is not sufficient to support seq-to-seq training.
We propose a general framework for learning a decoding objective in two parts: learning component models that rank candidate generations, motivated by different aspects of Grice's Maxims, and learning a weighting scheme that balances the influence of each of these scoring functions. This framework is amenable to any number and kind of sub-objectives, allowing multiple qualities of good writing to be modeled across diverse domains. Human evaluation shows that the quality of the text produced by our model exceeds that of RNN baselines by a large margin and that the generations score significantly higher on a Turing test evaluation.
A EXAMPLE GENERATIONS
ROCStory Context Gina was bored in detention. She decided to text her friend mary across the room.
She knew the mean teacher wouldn’t approve. She kept her phone in her purse to hide it from the teacher.
Language Model She was so excited that she couldn’t find her phone.
No Meta-Learning Mary asked her mother if she could asked her if she was students. Teacher
Learn2Write Mary was relieved when her friend came over.
Human Mary texted back.
Context Jack has a big garden. He weeds it every day. Once a week he takes veggies to the market. Jack waters his garden in the morning.
Language Model He was so excited that he couldnt wait to go back to the store.
No Meta-Learning Trees grow trees trees and fruit trees. Vegetables
Learn2Write He loves his garden more than anything else.
Human His veggies grow large and plentiful.
Context Samantha wanted to watch a scary movie, but her mother said no. Her mother left for work, and samantha decided to watch it anyway. The movie was so scary it gave samantha nightmares for weeks! She wished shed never watched it!
Language Model She was so excited that she couldnt sleep that night.
No Meta-Learning She was so excited that she couldnt sleep that night.
Learn2Write She vowed never to watch scary movies again.
Human Samantha learned her lesson.
Context My friend was moving away to college. We decided to throw her a party. We all met up with cake and snacks. She was so surprised to see us!
Language Model She was so excited that she couldn’t wait to go back.
No Meta-Learning Birthday party dinner was celebration! Welcome
Learn2Write We all hugged her goodbye and went home.
Human She hugged everyone and cried in joy.
TripAdvisor Context The hotel staff were very kind and helpful. Our room was large but a little sparse.
The garden is really nice in the courtyard. The location is perfect, next door to the station.
Language Model If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. But if you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, you
No Meta-Learning We didn’t eat at the hotel so can’t comment
Learn2Write stay there again in a heartbeat. Catermeran is a great place to stay.
Human There are no stairs to the hotel but quite a few in the hotel. Restaurants right outside the door. The only problem would be for people who are not early risers, the church bells go early ( about 7am ) are loud and go on for a while ( a few minutes ). We used it as an alarm, which was fine for us but some may not like it. The breakfast was a little adverage but no un-eatable.
Context We spent 2 nights at hotel abbazia in late march with our teenage children. The location couldn’t be better. . . . So near to the train station and the bus station. We also loved the fact that we were away from the crowds around st. marks square ( nice place to visit, but i wouldn’t want to live there : ) the hotel provided a good breakfast and there were some great restaurants in the area. ” La zucca ” was our favorite but there was another that the hotel suggested ( right across and down from the hotel towards the canal. . . Can’t remember the name ) that was also very good.
Language Model The staff were very friendly and helpful. Breakfast was included in the price and was very good. Overall, i would recommend this hotel and would
No Meta-Learning The staff was very friendly and helpful. The room was very clean and the bed was very comfortable. I would definitely stay here again and would recommend it to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart
Learn2Write The room was clean and the bed was comfortable.
Human We also loved the pizza at ” antico forno ” near the rialto bridge. The staff at
abbazia are fantastic. . . Very friendly and very helpful. The rooms are more than adequate and very clean. We had a family room that didn’t have a stand-up shower and that was probably the only minor negative i can think of ( a little tough for my 6’ 4 ” son to cram into the tub ) ps loved waking up to the church bells. . . . No alarm needed!
Context After reading multiple reviews for liki tiki village i was a bit nervous about staying there. We loved every minute. I wish that we had more time to enjoy the pools, but we were visiting the parks most of the time. The rooms were clean and the kitchen well stocked.
Language Model The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in the heart of the city. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. If you
No Meta-Learning Our 6euros was multi-stop, vegatarian vegatarian vegatarian 6euros
Learn2Write The rooms were clean and the kitchen well stocked. The staff were very friendly
and helpfu! I would definately recommend this hotel to anyone who is looking for a place to stay in the heart of san diego. Thank you to all of the staff for making our stay so enjoyable!
Human We did go to the time share presentation but we left almost immediately since we are owners elsewhere. I guess they figured out that they would be wasting their time. Great to have a washer and dryer. Also great to have a seperate room for the kids. The grounds were clean and we felt safe. Checkin was a little confusing since they gave us a map of the property and then wrote all over it in black pen so that you could not see the map. Overall it was a great experience. Close to the grocery store and to target. A little difficult to find but the gps got us there. Great for the price and amenities.
Context The location is amazingly convenient. Just 3min walk from the bus station or 1min walk from the train station and less than a 100m from the canal grande. I liked the fact that it’s at the start of the city so that you begin your venice experience slowly. I recommend walking to the san marco piazza on your first time ( make sure you wear comfortable shoes ).
Language Model The hotel is in a great location. The room was clean and comfortable. The staff was very friendly and helpful. The breakfast was very good. The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay while in
No Meta-Learning The staff were very friendly and helpful and the breakfast buffet
Learn2Write The rooms are spacious and clean. Breakfast is very good.
Human Then you exit the alley and face the magnificent san marco piazza and trust me, you will be pleasantly shocked. What an experience!!! : ) ) the abbazia rooms are decent and very clean. Breakfast is poor but adequate and wi-fi is free. The garden is very peaceful and offers some very relaxing moments. I was worried about noises from the train station next door but you can’t hear a thing so no problem there. The guys at the reception are amazing. Very friendly and very helpful : ) ) what you want from a hotel in venice is a decent place to sleep, have a relaxing bath and some breakfast in the morning. From then on you will be spending all your time in town anyway so fo me the abbazia hotel was an excellent choice and i will go back for sure. Price is not cheap, but nothing is cheap in venice anyway.
B MODEL DETAILS AND HYPERPARAMETERS
Base language model We use a 2-layer RNN language model with 1024 GRU cells per layer. Embedding vectors are of length 1024. Following Inan et al. (2017) we tie the parameters of the input and output embedding layers. To regularize, we apply dropout (Srivastava et al., 2014) to cells in the output layer of the first layer with probability 0.2. We use mini-batch stochastic gradient descent (SGD) and anneal the learning rate regularly when the validation set performance fails to improve. Learning rate, annealing rate, and batch size were tuned on the validation set for each dataset, with details in section ??. Gradients are backpropagated 35 time steps and clipped to a maximum value of 0.25.
Entailment model Dropout is performed on both the input word embeddings and the MLP hidden layer with rate 0.5. Training is performed with Adam (Kingma & Ba, 2015) with learning rate 0.0005, batch size 128.
Relevance model The convolutional layer is a 1D convolution with filter size 3 and stride 1; the sequences are padded so that the sequence output is the same length as the input. The model is trained with Adam, learning rate 0.001 and dropout is applied before the final linear layer with rate 0.5.
Meta-weight learning Training is performed with SGD with a learning rate of 1.
1. How does the paper propose to correct inconsistencies in sequence decoding?
2. What are the strengths and weaknesses of the proposed approach in generating human-like sentences?
3. How does the paper evaluate relevance in grounded generation tasks like machine translation or image-captioning?
4. What are the concerns regarding the mismatch between the goals and the actual approach?
5. How does the length penalty encourage longer sentences violating quantity, manner, and potential relevance?
6. How does the repetition model address the issue of RNNs failing to capture long-term contextual dependencies?
7. What are the limitations of the current task evaluation, and how could it be improved?
8. How do other works like Wu et al. (and other MT works) use a coverage term as an indicator of relevance?
9. What are the concerns regarding the update regarding "learning the objective"?
10. How does the paper manually encode the qualities of good language using a wide variety of additional objectives, and what are their limitations?
Review
This paper proposes to bring together multiple inductive biases that aim to correct for inconsistencies in sequence decoding. Building on previous works that utilize modified objectives to generate sequences, this work proposes to optimize the parameters of a pre-defined combination of various sub-objectives. The human evaluation is straightforward and meaningful, compensating for the well-known inaccuracies of automatic evaluation.
While the paper points out that they introduce multiple inductive biases that are useful to produce human-like sentences, it is not entirely correct that the objective is being learnt as claimed in portions of the paper. I would like this point to be clarified better in the paper.
I think showing results on grounded generation tasks like machine translation or image-captioning would make a stronger case for evaluating relevance. I would like to see comparisons on these tasks.
----
After reading the paper in detail again and the replies, I am downgrading my rating for this paper. While I really like the motivation and the evaluation proposed by this work, I believe that fixing the mismatch between the goals and the actual approach will make for a stronger work.
As pointed out by other reviewers, while the goals and evaluation seem to be aligned with Gricean maxims, some components of the objective are confusing. For instance, the length penalty encourages longer sentences, violating quantity, manner (be brief), and potentially relevance. Further, the repetition model addresses the issue of RNNs failing to capture long-term contextual dependencies, but it is not clear from the formulation how much such a modified objective affects models with attention or hierarchical models.
As pointed out in my initial review, evaluation of relevance on the current task is not entirely convincing. A very wide variety of topics are feasible for a given context sentence. Grounded generation like MT or captioning would have been a more convincing evaluation. For example, Wu et al. (and other MT works) use a coverage term, and this might be one of the indicators of relevance.
Finally, I am not entirely convinced by the update regarding "learning the objective". While I agree with the authors that the objective function is being dynamically updated, the qualities of good language are encoded manually using a wide variety of additional objectives, and only the relative importance of each of them is learned.
ICLR | Title
Learning to Write by Learning the Objective
Abstract
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
N/A
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
1 INTRODUCTION
Recurrent Neural Network (RNN) based language models such as Long Short-Term Memory Networks (LSTMs) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRUs) (Cho et al., 2014) have achieved enormous success across a variety of language tasks due to their ability to learn fluency patterns in natural language (Jozefowicz et al., 2016; Kim et al., 2016; Mikolov et al., 2010). When used as a generator, however, the quality of language generated from RNNs deviates drastically from that of human language. While fluent, RNN-produced language displays several degenerate characteristics, favoring generic and contentless output that tends to be repetitive and self-contradictory. These issues are especially prominent when RNNs are used for open-ended, long-form text generation, as illustrated in Figure 1.
RNNs model the conditional probability P (xt|x1, ..., xt−1) of generating the next word xt given all previous words observed or generated. In theory, this conditional model should be able to learn all crucial aspects of human language production, for example, that we don’t normally repeat the same content over and over. In practice, however, the learned conditional probability model often assigns higher probability to a repetitive, overly generic sentence than to higher quality sentences, as shown in Figure 1. We postulate that this is in part because the network architectures of RNN variants do not provide a strong enough inductive bias for the model to learn the complex communication goals pursued in human writing. In addition, long-term context easily gets lost as it is explained away in the presence of more immediately relevant short-term context (Yu et al., 2017), and as gradients diminish over a long sequence (Pascanu et al., 2013). Consequently, RNNs acquire relatively shallow and myopic patterns, which tend to only take advantage of a small fraction of the training set vocabulary Kiddon et al. (2016). RNNs are thus unable to generate language that matches the complexity and coherence of human generated text.
Several methods in the literature attempt to mitigate these issues. Overly simple and generic generation can be improved by using a diversity-boosting objective function (Shao et al., 2017; Vijayakumar et al., 2016). Repetitive generation can be reduced by prohibiting recurrence of the same trigrams as a hard rule (Paulus et al., 2017). Although such constraints form a partial solution, they are generally too coarse: they both penalize good behavior (e.g. reuse of an idiom) and fail to capture more complex bad behavior (e.g. paraphrasing of the same content again and again).

All in all, I would highly recommend this hotel to anyone who wants to be in the heart of the action, and want to be in the heart of the action. If you want to be in the heart of the action, this is not the place for you. However, If you want to be in the middle of the action, this is the place to be.

Figure 1: A TripAdvisor review generated by an RNN-based LM trained on over a million reviews.
Hand-tailoring rules is both time-consuming and unstable across different generative scenarios, so we instead propose a general learning framework to construct a better decoding objective. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address limitations of the base RNN generator. Our learning framework therefore generalizes over various existing modifications to the decoding objective. Our approach learns to overcome the particular limitations of the RNN generator directly, by incorporating language generated from RNNs as negative samples to discriminatively train several companion models, each specializing in a different aspect of Grice’s Maxims of communication (Grice et al., 1975).
Empirical results demonstrate that our learning framework is highly effective in converting a generic RNN language model into a substantially stronger generator. Human evaluation confirms that language generated by our model is preferred over that of competitive baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
2 BACKGROUND: GRICE’S MAXIMS
We motivate our learning framework using Grice’s Maxims of communication (Grice et al., 1975):
1. Quantity: Make your contribution as informative as required, and no more than necessary. RNN generations tend to violate this maxim as they are often overly short and generic, and therefore less informative than desired. When encouraged to generate longer text, RNNs easily repeat themselves, which also works against conveying the right amount of information. This observation motivates the length and repetition models in §3.2.1 and §3.2.2.
2. Quality: Do not say what you believe to be false. When used for generation with long term context, RNNs often generate text that is self-contradictory. We propose an entailment model to address this problem in §3.2.3.
3. Relation: Be relevant. RNNs used for long generation can start digressing as the long-term context gets easily washed out. We address this problem by proposing two different relevance models in §3.2.4 and §3.2.5.
4. Manner: Avoid obscurity of expression. Avoid ambiguity. Be brief. Be orderly. As RNN generation favors highly probable but generic words and phrases, it lacks adequate style and specificity. We address this issue with the lexical style model in §3.2.6.
3 THE LEARNING FRAMEWORK
We propose a general learning framework for conditional language generation of sequence y given context x. The decoding objective for generation takes the general form:
$$f(\mathbf{x},\mathbf{y}) = \log P_{\mathrm{lm}}(\mathbf{y} \mid \mathbf{x}) + \sum_k \lambda_k\, s_k(\mathbf{x},\mathbf{y}). \tag{1}$$
This objective combines the RNN language model probability $P_{\mathrm{lm}}$ (§3.1) with a set of additional scores $s_k(\mathbf{x},\mathbf{y})$ produced by discriminatively trained communication models (§3.2), weighted with learned mixture coefficients $\lambda_k$ (§3.3). This corresponds to a Product of Experts (PoE) model (Hinton, 2006) when the scores $s_k$ are log probabilities. Further model and hyperparameter details are given in Appendix B.
Generation is performed using beam search (§3.4), scoring partially generated candidate generations $\mathbf{y}_{1:i}$ at each time step $i$. The RNN language model decomposes into per-word probabilities using the chain rule. However, in order to allow for more expressivity over long-range context, we do not require the discriminative model scores to factorize over the elements of $\mathbf{y}$, thereby addressing a key limitation of RNNs. More specifically, we use an estimated score $s'_k(\mathbf{x},\mathbf{y}_{1:i})$ that can be computed for any prefix of $\mathbf{y} = \mathbf{y}_{1:n}$ to approximate the objective during beam search, such that $s'_k(\mathbf{x},\mathbf{y}_{1:n}) = s_k(\mathbf{x},\mathbf{y})$. The scoring functions are trained on prefixes of $\mathbf{y}$ (except where stated otherwise) to simulate their application to ranking partial continuations during beam search.
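To make the decoding objective concrete, the following is a minimal sketch of how the prefix objective $f(\mathbf{x},\mathbf{y}_{1:i})$ could be evaluated during search; `lm_logprob` and the entries of `scorers` are placeholder callables for the components defined in the following subsections, not names from a released implementation.

```python
def objective_prefix(lm_logprob, scorers, lambdas, x, y_prefix):
    """f(x, y_1:i) = log P_lm(y_1:i | x) + sum_k lambda_k * s'_k(x, y_1:i)."""
    score = lm_logprob(x, y_prefix)             # base language model term
    for lam, s_k in zip(lambdas, scorers):      # weighted communication models
        score += lam * s_k(x, y_prefix)
    return score
```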
3.1 BASE LANGUAGE MODEL
We train an RNN language model to estimate
$$\log P_{\mathrm{lm}}(\mathbf{s}) = \sum_i \log P_{\mathrm{lm}}(s_i \mid \mathbf{s}_{1:i-1}). \tag{2}$$
The language model treats the context x and the continuation y as a single sequence s, in contrast to sequence to sequence models which distinguish between them.
3.2 COMPOSITE COMMUNICATION MODELS
Next we introduce a set of models motivated by Grice’s Maxims of communication. Each model is trained to discriminate between good and bad generation using a ranking objective, so that its model scores have the effect of re-ranking in the decoder. We vary the model parameterization and training examples to guide each model to focus on different aspects of Grice’s Maxims. The classifier scores are interpreted as classification probabilities and added to the objective function as log probabilities.
Let $D = \{(\mathbf{x}_1,\mathbf{y}_1), \dots, (\mathbf{x}_n,\mathbf{y}_n)\}$ be the set of training examples for conditional generation; $D_{\mathbf{x}}$ denotes all contexts and $D_{\mathbf{y}}$ all continuations.
In all models the first layer embeds words into 300-dimensional vectors initialized with GloVe (Pennington et al., 2014) embeddings, which are fine-tuned during training unless stated otherwise. The dimensions of hidden layers are also 300. Let e(w) be the word embedding of word w.
3.2.1 LENGTH MODEL
RNNs tend to bias toward shorter generation even with a highly expressive network structure with attention, so length normalization is still a common practice (Wu et al., 2016). We use a geometrically decaying length reward. We tune the initial value and decay rate on the final systems, and use the same values for all systems: an initial value of 1 and a decay rate of 0.9. The score per time step (for a partially generated completion) is thus
$$s_{\mathrm{len}}(\mathbf{y}_{1:i}) = 0.9^{\,i}. \tag{3}$$
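For completeness, the reward reduces to a one-liner (the initial value of 1 and the decay rate of 0.9 are the tuned values quoted above):

```python
def length_score(i, init=1.0, decay=0.9):
    # Eq. 3: a geometrically decaying reward on the number of generated tokens
    return init * decay ** i
```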
3.2.2 REPETITION MODEL
The goal of this model is to learn to distinguish between RNN-generated and gold continuations by exploiting our empirical observation that repetitions are more common in completions generated by the RNN language model. We quantify the claim that samples from the base RNN are an appropriate source of negative examples for repetition detection on our datasets in §4.1. However, we do not want to completely discourage repetition, as words sometimes do recur in English. Thus we model natural levels of repetition as follows.
First a score di is computed for each token in the continuation y based on pairwise cosine similarity of word embeddings within a fixed window of the previous k words:
$$d_i = \max_{j=i-k,\dots,i-1} \mathrm{CosSim}\big(e(y_i), e(y_j)\big). \tag{4}$$
This score can be interpreted as the repetition “strength” of the sequence at position $i$, where $d_i = 1$ for all positions at which a word is repeated within $k+1$ words of its previous use.
The score of the entire continuation is then defined as
$$s_{\mathrm{rep}}(\mathbf{y}) = \sigma\big(\mathbf{w}_r^\top \mathrm{RNN}(\mathbf{d})\big), \tag{5}$$
where $\mathrm{RNN}(\mathbf{d})$ is the final state of a unidirectional RNN run over the similarity scores and $\mathbf{w}_r$ is a learned vector. The model is trained with a binary cross-entropy objective (equivalently, maximizing the log-likelihood below), with gold continuations as positive examples and samples from the base RNN as negative examples:
$$\mathcal{L}_{\mathrm{rep}} = \sum_{\mathbf{y} \in D_{\mathbf{y}}} \log s_{\mathrm{rep}}(\mathbf{y}) + \sum_{\mathbf{y}' \sim P_{\mathrm{lm}}(\mathbf{y}')} \log\big(1 - s_{\mathrm{rep}}(\mathbf{y}')\big). \tag{6}$$
Intuitively, rather than penalizing all recurrence, the RNN over the similarity scores learns which amounts and patterns of repetition are natural and which are symptomatic of degenerate generation.
Word embeddings are kept fixed during training for this model.
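A minimal PyTorch sketch of this scorer is given below; the choice of a GRU for the RNN over the similarity scores, and the window size, are our assumptions rather than stated details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RepetitionModel(nn.Module):
    """Sketch of Eqs. 4-5: windowed cosine similarities of word embeddings,
    fed to a unidirectional RNN whose final state is projected and squashed."""
    def __init__(self, hidden_size=300):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.w_r = nn.Linear(hidden_size, 1, bias=False)

    def forward(self, emb, k=4):
        # emb: (seq_len, emb_dim) fixed word embeddings of y; seq_len > 1 assumed
        d = []
        for i in range(1, emb.size(0)):
            window = emb[max(0, i - k):i]                        # previous k words
            sims = F.cosine_similarity(
                emb[i].unsqueeze(0).expand_as(window), window, dim=1)
            d.append(sims.max())                                 # d_i (Eq. 4)
        d = torch.stack(d).view(1, -1, 1)                        # (1, seq_len-1, 1)
        _, h_n = self.rnn(d)                                     # final RNN state
        return torch.sigmoid(self.w_r(h_n[-1])).squeeze()        # s_rep(y) (Eq. 5)
```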
3.2.3 ENTAILMENT MODEL
The goal of the entailment model is to guide the generator to neither contradict its own past generation (the maxim of Quality) nor state something that readily follows from the context (the maxim of Quantity). The latter case is driven by the RNN’s habit of paraphrasing itself (alongside its tendency for more direct repetitions, which is captured by the repetition model). We train a classifier using the MultiNLI dataset (Williams et al., 2017) that takes two sentences $\mathbf{a}$ and $\mathbf{b}$ as input and predicts the relation between them as either contradiction, entailment, or neutral. We use the neutral class probability of the sentence pair as the score, in order to discourage both contradiction and entailment.
We use a continuous bag-of-words model similar to previously reported NLI baselines (Mou et al., 2016). Sentence representations are obtained by summing word embeddings (which are fine-tuned during training), such that $r(\mathbf{a}) = \sum_i e(a_i)$ and $r(\mathbf{b}) = \sum_i e(b_i)$. An MLP classifier with a single tanh hidden layer takes as input the concatenation of the two sentence embeddings, along with their element-wise difference and product; the output is a three-way softmax. The log probability of the neutral class is $t(\mathbf{a},\mathbf{b})$. The classifier obtains 63% validation accuracy.
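A sketch of this classifier follows; the dimensions match the 300-dimensional setup above, and the exact feature concatenation and the class indexing are our reading of the description.

```python
import torch
import torch.nn as nn

class EntailmentClassifier(nn.Module):
    """Bag-of-words NLI classifier: sum word embeddings per sentence, then an
    MLP over [r(a); r(b); r(a)-r(b); r(a)*r(b)] with a three-way softmax."""
    def __init__(self, emb_dim=300, hidden=300):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4 * emb_dim, hidden), nn.Tanh(), nn.Linear(hidden, 3))

    def forward(self, emb_a, emb_b):
        # emb_a: (len_a, emb_dim), emb_b: (len_b, emb_dim)
        r_a, r_b = emb_a.sum(dim=0), emb_b.sum(dim=0)
        feats = torch.cat([r_a, r_b, r_a - r_b, r_a * r_b], dim=-1)
        # one of the three outputs is the neutral class; t(a, b) is its
        # log-probability under this softmax
        return torch.log_softmax(self.mlp(feats), dim=-1)
```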
In contrast to our other communication models, this classifier cannot be applied directly to the full context and continuation sequences it is scoring. Instead, every completed sentence in the continuation should be scored against all preceding sentences in both the context and continuation. Let $S(\mathbf{y}_{1:i})$ be the set of complete sentences in $\mathbf{y}_{1:i}$, and $S_{-1}(\mathbf{y}_{1:i})$ the last complete sentence. We compute the entailment score of $S_{-1}(\mathbf{y}_{1:i})$ against all preceding sentences in $\mathbf{x}$ and $\mathbf{y}$, and use the score of the sentence pair for which we have the least confidence that the relation is neutral:
$$s_{\mathrm{entail}}(\mathbf{x},\mathbf{y}) = \min_{\mathbf{a} \in S(\mathbf{x}) \cup S_{:-1}(\mathbf{y})} t\big(\mathbf{a}, S_{-1}(\mathbf{y})\big). \tag{7}$$
In contrast to our other models, the score this model returns for any given continuation only corresponds to a subsequence of the continuation, but as we will see below (§3.4) due to the way generation is performed with beam search this score will not be accumulated across sentences.
3.2.4 RELEVANCE MODEL
The purpose of the relevance model is to predict whether the content of a candidate continuation is relevant to the given context. We train the model to distinguish between true continuations and random continuations sampled from other (human-written) endings in the corpus, conditioned on the given context. A single convolutional layer is applied to both the context and candidate embedding sequences. The scoring function is defined as
$$s_{\mathrm{rel}}(\mathbf{x},\mathbf{y}) = \mathbf{w}_l^\top \big(\mathrm{maxpool}(\mathrm{conv}(e(\mathbf{x}))) \circ \mathrm{maxpool}(\mathrm{conv}(e(\mathbf{y})))\big), \tag{8}$$
where 1D maxpooling is performed over each sequence to obtain a vector representing its most important semantic dimensions. Element-wise multiplication of the context and continuation vectors will amplify similarities between the two.
We optimize the following ranking log likelihood:
$$\mathcal{L}_{\mathrm{rel}} = \sum_{j=1}^{k} \sum_{(\mathbf{x},\mathbf{y}_t)\in D,\, \mathbf{y}_r \sim D_{\mathbf{y}}} \log \sigma\big(s_{\mathrm{rel}}(\mathbf{x},\mathbf{y}_t) - s_{\mathrm{rel}}(\mathbf{x},\mathbf{y}_r)\big), \tag{9}$$
i.e., the probability of the true ending $\mathbf{y}_t$ receiving a higher score than the random ending $\mathbf{y}_r$. For each context and true continuation we extract $k = 5$ randomly selected endings as negative training examples. The model ranks true endings higher than randomly selected ones with 85% accuracy on the validation set. The trained scores do not correspond to probabilities, so we scale them with the logistic (sigmoid) function for compatibility with the other scoring modules.
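A sketch of the scorer and its ranking loss, with the filter size and padding taken from Appendix B; whether the convolution weights are shared between context and continuation is our assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelevanceModel(nn.Module):
    """Sketch of Eq. 8: a 1D convolution over each embedding sequence,
    max-pooled over time, combined elementwise and projected to a score."""
    def __init__(self, emb_dim=300, channels=300):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, channels, kernel_size=3, padding=1)
        self.w_l = nn.Linear(channels, 1, bias=False)

    def encode(self, emb):                        # emb: (seq_len, emb_dim)
        h = self.conv(emb.t().unsqueeze(0))       # (1, channels, seq_len)
        return h.max(dim=2).values.squeeze(0)     # maxpool over the sequence

    def forward(self, emb_x, emb_y):
        return self.w_l(self.encode(emb_x) * self.encode(emb_y)).squeeze()

def relevance_loss(model, emb_x, emb_true, emb_rands):
    # Eq. 9: log-likelihood that the true ending outranks each random ending
    s_t = model(emb_x, emb_true)
    return sum(F.logsigmoid(s_t - model(emb_x, emb_r)) for emb_r in emb_rands)
```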
3.2.5 WORKING VOCABULARY MODEL
Given even a fragment of context (e.g. “We were excited about our beach getaway...”) certain topics become more relevant (e.g. “swimming”, “relax”, “romantic”). To capture this intuition, we predict a centroid in the embedding space from the context, which describes a neighborhood of the embedding space we expect the continuation to intersect. The score is computed as
$$s_{\mathrm{voc}}(\mathbf{x},\mathbf{y}_{1:n}) = \frac{\mathrm{RNN}(\mathbf{x})}{\|\mathrm{RNN}(\mathbf{x})\|_2} \cdot \frac{1}{n}\sum_{i=1}^{n} \frac{e(y_i)}{\|e(y_i)\|_2}, \tag{10}$$
where $\mathrm{RNN}(\mathbf{x})$ is the final state of an RNN trained end-to-end with $s_{\mathrm{voc}}$ using a discriminative loss:
$$\mathcal{L}_{\mathrm{voc}} = \sum_{(\mathbf{x},\mathbf{y}_t)\in D,\, \mathbf{y}_r \sim D_{\mathbf{y}}} \log\big(2 + s_{\mathrm{voc}}(\mathbf{x},\mathbf{y}_t) - s_{\mathrm{voc}}(\mathbf{x},\mathbf{y}_r)\big). \tag{11}$$
During decoding, this score is combined with the language model before sampling next-word candidates, while the other models are used to rescore candidates (see §3.4). The reason is that this model improves diversity in the next-word distribution while encouraging continuation of topics from the context.
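A sketch of the score and loss under our reading of Equation 10 (a normalized dot product between the context representation and the mean of normalized continuation embeddings); that reading, and the function names, are assumptions.

```python
import torch
import torch.nn.functional as F

def working_vocab_score(rnn_x_state, emb_y):
    # Eq. 10: match the predicted centroid RNN(x)/||RNN(x)|| against the
    # average of L2-normalized continuation word embeddings
    centroid = F.normalize(rnn_x_state, dim=-1)   # (hidden,) final RNN state
    words = F.normalize(emb_y, dim=-1).mean(dim=0)
    return torch.dot(centroid, words)

def vocab_loss(s_true, s_rand):
    # Eq. 11: maximize log(2 + s_voc(x, y_t) - s_voc(x, y_r))
    return torch.log(2 + s_true - s_rand)
```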
3.2.6 LEXICAL STYLE MODEL
The goal of the lexical style model is to learn stylistic aspects of desirable writing based on observed lexical distributions. The scoring function is defined as
$$s_{\mathrm{bow}}(\mathbf{y}) = \mathbf{w}_s^\top \mathrm{maxpool}(e(\mathbf{y})). \tag{12}$$
The model is trained with a ranking criterion similar to that of the relevance model, except that the negative examples are sampled from the language model:
$$\mathcal{L}_{\mathrm{bow}} = \sum_{\mathbf{y}_t \in D_{\mathbf{y}},\, \mathbf{y}_s \sim P_{\mathrm{lm}}(\mathbf{y}_s)} \log \sigma\big(s_{\mathrm{bow}}(\mathbf{y}_t) - s_{\mathrm{bow}}(\mathbf{y}_s)\big). \tag{13}$$
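A sketch of this scorer and its loss; the shapes follow the 300-dimensional embeddings used throughout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LexicalStyleModel(nn.Module):
    """Sketch of Eq. 12: a learned projection of the max-pooled embeddings."""
    def __init__(self, emb_dim=300):
        super().__init__()
        self.w_s = nn.Linear(emb_dim, 1, bias=False)

    def forward(self, emb_y):                     # emb_y: (seq_len, emb_dim)
        return self.w_s(emb_y.max(dim=0).values).squeeze()

def style_loss(model, emb_gold, emb_sample):
    # Eq. 13: rank gold continuations above samples from the language model
    return F.logsigmoid(model(emb_gold) - model(emb_sample))
```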
3.3 MIXTURE WEIGHT LEARNING
Once all the communication models have been trained, we learn the combined objective. In particular, we learn the weight coefficients $\lambda_k$ in Equation 1 to linearly combine the scoring functions, using the discriminative objective
$$\mathcal{L}_{\mathrm{mix}} = \sum_{(\mathbf{x},\mathbf{y})\in D} \log \sigma\big(f(\mathbf{x},\mathbf{y}) - f(\mathbf{x}, \mathcal{A}(\mathbf{x}))\big), \tag{14}$$
where $\mathcal{A}$ is the inference algorithm for beam search decoding. The objective learns to rank the gold continuation above the continuation predicted by the current model. The full model is therefore trained discriminatively to classify sequences, and the same model is used to generate the sequences.
For each training iteration inference is performed based on the current values of λ. This has the effect that the objective function changes dynamically during training: As the current samples from the model are used to update the mixture weights, it creates its own learning signal by applying the generative model discriminatively.
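One training iteration might look as follows; `full_objective` (Equation 1 with the current weights) and `beam_search` (Algorithm 1 below) are placeholders for the components already described.

```python
import torch
import torch.nn.functional as F

def mixture_weight_step(lambdas, optimizer, full_objective, beam_search, x, y_gold):
    """One step of Eq. 14: decode with the current weights, then update the
    weights so the gold continuation outranks the model's own prediction."""
    y_pred = beam_search(x, lambdas)                   # A(x)
    margin = (full_objective(x, y_gold, lambdas)
              - full_objective(x, y_pred, lambdas))
    loss = -F.logsigmoid(margin)                       # maximize Eq. 14
    optimizer.zero_grad()
    loss.backward()                                    # gradients w.r.t. lambdas
    optimizer.step()
    return loss.item()
```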
Data: context x = x1 · · · xn, beam size k, mixture weights λ
Result: best continuation
best = None
beam = [x]
for step = 0; step < max_steps; step = step + 1 do
    next_beam = []
    for candidate in beam do
        expand(next_beam, next_k(candidate, λ))
        if lookahead_score(candidate.append(termination_token)) > best.score then
            best = candidate.append(termination_token)
        end
    end
    for candidate in next_beam do
        candidate.score += f_λ(candidate)            . score with models
    end
    beam = sort(next_beam, key=score)[:k]            . select top k candidates
end
if learning then
    update λ with gradient descent by comparing beam against the gold
end
return best
Algorithm 1: Inference/Learning in the Learning to Write Framework.
3.4 BEAM SEARCH
Due to the limitations of greedy decoding and the fact that our scoring functions do not decompose across time steps, we perform generation with a beam search procedure, shown in Algorithm 1. The naive approach would be to perform beam search based only on the language model, and then rescore the k best candidate completions with our full model. We found that this approach leads to limited diversity in the beam and therefore cannot exploit the strengths of the full model.
Therefore we rescore the current hypotheses in the beam with the full (partial) objective function and keep the k best candidate sequences after expanding to the next word. To further increase diversity we sample k candidate next words from the distribution when expanding a hypothesis, instead of obtaining the k highest scoring next words. In our experiments we generate text using a beam size of 8.
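A sketch of the diversity-seeking expansion step: rather than taking the k most probable next words, we sample k distinct words from the LM distribution. `lm.next_word_logits` is a placeholder for the base model's interface, not an actual API.

```python
import torch

def expand_candidates(candidate, lm, k=8):
    """Sample k distinct next-word candidates from the LM distribution."""
    probs = torch.softmax(lm.next_word_logits(candidate), dim=-1)
    next_words = torch.multinomial(probs, num_samples=k, replacement=False)
    return [candidate + [w.item()] for w in next_words]
```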
Two modules are handled differently during beam search. First, the working vocabulary score is integrated before sampling next-word candidates, i.e. inside the next_k call. Second, the entailment score of a candidate is recomputed only when sentence-terminating punctuation is generated; otherwise the current entailment score is re-used. Due to the nature of beam search the entailment score is not accumulated across sentences; rather, we assume that the effect of the last sentence score will already be reflected in the content of the beam when the next sentence is scored. While the first sentence is generated, $s'_{\mathrm{entail}}$ takes an initial constant value to avoid bias towards incomplete sentences.
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Corpora We use two corpora for evaluation. The first is TripAdvisor reviews.1 We use only reviews that have at least 5 sentences, using the first 4 as the context and the remainder as the ending to be generated. There are on average 11 sentences per review. We use the first 1M reviews for training. The second is the ROCStory corpus (Mostafazadeh et al., 2016), which is a collection of crowdsourced commonsense short stories, consisting of five sentences each (98k stories in the training set). We use this corpus to train our model to predict a coherent final sentence given the first four.
For the (larger) TripAdvisor corpus, we train the language model on that corpus alone. For the ROCStory corpus, we pretrain the model for 420M tokens on the Toronto Books corpus2 before training on the ROCStory text.
1 TripAdvisor corpus: http://times.cs.uiuc.edu/~wang296/Data/
2 Toronto Books corpus: http://yknzhu.wixsite.com/mbweb
Models We use the language model from Section 3.1 (a 2-layer RNN with 1024 GRU cells per layer) as our reference baseline. We include three variants of our model: L2W (NO META-LEARN) uses a uniform mixture of scores for the learned sub-objectives (no meta-weight learning); L2W (NO WK. VOCAB) omits the working vocabulary model (Section 3.2.5) but uses mixture weight learning for the other components; and L2W (FULL) is our full model.
To analyze the suitability of the reference endings and base language model samples to train the repetition model, we define a repetition ratio as the ratio of total words to unique words. We found that ROCStory endings generated by the language model have a higher repetition ratio than the reference endings (1.12 vs. 1.02). On TripAdvisor the difference is more pronounced, with a ratio of 1.48 for the language model against 1.17 for the references.
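The repetition ratio itself is a one-line computation; the whitespace tokenization below is a stand-in for the corpus tokenizer.

```python
def repetition_ratio(text):
    # ratio of total words to unique words; 1.0 means no word repeats
    tokens = text.split()                 # stand-in for the corpus tokenizer
    return len(tokens) / len(set(tokens))
```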
4.2 EVALUATION SETUP
Previous work has reported that automatic measures such as BLEU, ROUGE, and Meteor, do not lead to meaningful evaluation when used for long or creative text generation where there can be high variance among correct generation output (Wiseman et al., 2017b; Vedantam et al., 2015). For open-ended generation tasks such as our own, human evaluation is the only reliable measure (Li et al., 2016b; Wiseman et al., 2017b). However, we also report the automatic measures to echo the previous observation regarding the mismatch between automatic and human evaluation. Additionally we provide multiple generation samples that give insights into the characteristics of different models.
We pose the evaluation of our model as the task of generating an appropriate ending given an initial context of n sentences. For the purposes of automatic evaluation, the ending is compared against a gold reference ending that was written by a human (drawn from the original corpus). For human evaluation, the ending is presented to a human, who assesses the text according to several criteria, which are closely inspired by Grice’s Maxims:
1. Quantity: Does the generation avoid repetition?
2. Quality: Does the generation contradict itself (or the initial context)?
3. Relation: Does the generation appropriately continue the initial context?
4. Manner: Does the generation use the same style as the initial context?
Finally, a Turing test question is presented to each judge: was the ending written by a human?
For both automatic and human evaluation, we select 1000 passages from the test set of each corpus and generate endings using all models, using the initial n = 4 sentences as context. Automatic
evaluation scores each generated ending against its reference ending. To conduct human evaluation, we present the initial context with the generation from each of the models individually to workers on Mechanical Turk, who score them using the above criteria. As a control, we also ask workers to score the original (human-written) reference endings for each selected passage.
5 RESULTS AND ANALYSIS
5.1 QUANTITATIVE ANALYSIS
Results for both automatic and human evaluation metrics are presented for both corpora in Table 2.
For the TripAdvisor corpus, the language model baseline does best on all of the automated metrics. However, on all human metrics the L2W (NO WK. VOCAB) model scores higher, and achieves nearly twice the percentage of passed Turing tests (75% compared to 39%). It scores even better than the L2W (FULL) model. Manual inspection reveals that for the TripAdvisor task, the topical focus and lexical specificity given by vocabulary scoring actually lead to slightly worse model behavior. In TripAdvisor reviews, many possible sentences could logically follow a review’s context. A well-behaved but generic language generator is better able to leave judges unsurprised, if also uninformed. The full model brings additional focus to the generations, but at the cost of a reduction in overall coherence.
In the ROCStory generation task the language model is more competitive with L2W (FULL). The raw language model is able to take advantage of the simpler data inherent in this task: ROCStories are shorter (the endings are always a single sentence) and have a smaller vocabulary. However, the L2W full model still performs the best on all human metrics. The ROCStory corpus presents specific contexts that require a narrower set of responses to maintain coherency. The L2W (FULL) model takes advantage of reranking by matching the topic of the context, yielding a higher ‘Relation’ score and more Turing test passes overall.
The very low BLEU scores observed in our results in the TripAdvisor domain are an artifact of the BLEU metric’s length penalty. The average length of reference completions is 12 sentences, which is much longer than the average length of endings generated by our Learning to Write models; the length penalty therefore drives down the scores, despite our observation that there is still a significant amount of word and phrase overlap. The completions generated by the base language model are longer on average (as it tends to repeat itself over and over) and therefore do not suffer from this problem.
5.2 QUALITATIVE ANALYSIS
Learning to Write (L2W) generations are more topical and cohere better with the context. Table 1 shows that both the L2W system and the classic RNN start similarly, commenting on the hotel
staff and the room. L2W is able to condense the same content into fewer words, following the context’s style. Beginning with the third sentence, L2W and the LM diverge more significantly. The LM makes a generic comment: “The location is perfect,” while L2W goes on to list specific items it enjoyed at breakfast, which was mentioned in the context. Furthermore, the LM becomes “stuck” repeating itself—a common problem for RNNs in general—whereas the L2W system finds a natural ending.
The L2W models do not fix every degenerate characteristic of RNNs. The TripAdvisor L2W generation in Table 1, while coherent and diverse, leaves room for improvement. The need to relate to topical phrases can override fluency, as with the long list of “food, fresh fruit, juices, coffee, tea, hot chocolate.” Furthermore phrases such as “friendly and helpful” and “clean and comfortable” occur in the majority of generations. These phrases, while common, tend to have semantics that are expressed in many different surface forms in real writing (e.g., “the staff were kind and quick to respond”). Appendix A presents sample generations from all models on both corpora for further inspection.
6 RELATED WORK
Generation with Long-term Context While relatively less studied, there have been several attempts at RNN-based paragraph generation for multiple domains including image captions (Krause et al., 2017), product reviews (Lipton et al., 2015; Dong et al., 2017), sports reports (Wiseman et al., 2017a), and recipes (Kiddon et al., 2016). Most of these approaches are based on sequence-to-sequence (seq-to-seq) type architectures. One effective way to avoid the degenerate characteristics of RNNs under seq-to-seq is to augment the input with sufficient details and structure to guide the output generation through attention and a copying mechanism. However, for many NLP generation tasks, it is not easy to obtain the large-scale training data required to support a robust sequence-to-sequence model. For example, a recent dataset for image-to-paragraph generation contains 14,575 training pairs (Krause et al., 2017), and state-of-the-art hierarchical models, while improving over strong baselines, still lead to generation that repeats, contradicts, and is generic.
Alternative Decoding Objectives The tendency of RNN generation to be short, bland, repetitive, and contradictory has been noted multiple times in prior literature. A number of papers propose approaches that can be categorized as alternative decoding objectives (Shao et al., 2017). For example, Li et al. (2016a) propose a diversity-promoting objective that interpolates the conditional probability score with negative marginal or reverse conditional probabilities. Unlike the negative marginal term, which blindly penalizes generic generation, all our communication models are contextual in that they compare the context with continuation candidates. Incorporating the reverse conditional probability has also been proposed through the noisy channel models of Yu et al. (2017). The clear benefit of the reverse conditional probability model is that it can prevent the explaining-away problem of the long-term context. However, decoding with the reverse conditional probability is significantly more expensive, making it impractical for paragraph generation. In addition, neither of the above approaches allows for models as expressive as those presented in this work. The communication models presented in our work can be easily integrated into the existing beam search procedure, and each model is lightweight to compute.
The modified decoding objective has long been a common practice in statistical machine translation literature (Koehn et al., 2003; Och, 2003; Watanabe et al., 2007; Chiang et al., 2009). Notably, it still remains a common practice with neural machine translation, even with a highly expressive network structure trained on an extremely large amount of data (Wu et al., 2016). Inspired by all above approaches, our work presents a general learning framework together with a more comprehensive set of composite communication models. Unlike previous approaches, our composite models are trained to directly address the particular limitations of the base RNN models, by integrating RNN generation into the discriminative learning. This allows for customization of the decoding objective to better cope with the undesired behavior of the base language models.
Meta Learning There has been a broad range of literature on meta learning, in which learning targets components that are typically hand-designed or selected, such as model architectures, hyperparameters, and learning algorithms. Andrychowicz et al. (2016), for example, learn the learning rate, Zoph & Le (2016) train a neural network to generate neural network architectures, and Snoek et al. (2012) propose Bayesian optimization to learn hyperparameters that cannot be optimized with gradient descent. The learning framework we present here can be interpreted as a type of meta learning in a broad sense, in that there are multiple layers of learning: learning the base RNN language model, then learning a collection of composite communication models that are customized to the particular behavior of the base model, and finally learning the generator that combines all sub-components.
7 CONCLUSION
Our work presents a unified learning framework that can learn to generate long, coherent text, overcoming the limitations of RNNs as text generation models. Our framework learns a decoding objective suitable for generation through a combination of sub-models that capture linguistically motivated qualities of good writing. Our work makes a unique contribution that complements existing literature on long text generation, which is predominantly based on seq-to-seq models with a large amount of in-domain training data; we demonstrate that the fluency of general RNN language models can be successfully guided to generate longer and more sensible text, which can be useful for domains where in-domain data is not sufficient to support seq-to-seq training.
We propose a general framework for learning a decoding objective in two parts: learning component models to rank candidate generations, which are motivated by different aspects of Grice’s Maxims, and learning a weighting scheme that balances the influence of each of these scoring functions. This framework is amenable to any number and kind of sub-objectives, allowing for multiple qualities of good writing to be modeled across diverse domains. Human evaluation shows that the quality of the text produced by our model exceeds that of RNN baselines by a large margin, and the generations score significantly higher on a Turing test evaluation.
A EXAMPLE GENERATIONS
ROCStory

Context: Gina was bored in detention. She decided to text her friend mary across the room. She knew the mean teacher wouldn’t approve. She kept her phone in her purse to hide it from the teacher.
Language Model: She was so excited that she couldn’t find her phone.
No Meta-Learning: Mary asked her mother if she could asked her if she was students. Teacher
Learn2Write: Mary was relieved when her friend came over.
Human: Mary texted back.

Context: Jack has a big garden. He weeds it every day. Once a week he takes veggies to the market. Jack waters his garden in the morning.
Language Model: He was so excited that he couldnt wait to go back to the store.
No Meta-Learning: Trees grow trees trees and fruit trees. Vegetables
Learn2Write: He loves his garden more than anything else.
Human: His veggies grow large and plentiful.

Context: Samantha wanted to watch a scary movie, but her mother said no. Her mother left for work, and samantha decided to watch it anyway. The movie was so scary it gave samantha nightmares for weeks! She wished shed never watched it!
Language Model: She was so excited that she couldnt sleep that night.
No Meta-Learning: She was so excited that she couldnt sleep that night.
Learn2Write: She vowed never to watch scary movies again.
Human: Samantha learned her lesson.

Context: My friend was moving away to college. We decided to throw her a party. We all met up with cake and snacks. She was so surprised to see us!
Language Model: She was so excited that she couldn’t wait to go back.
No Meta-Learning: Birthday party dinner was celebration! Welcome
Learn2Write: We all hugged her goodbye and went home.
Human: She hugged everyone and cried in joy.
TripAdvisor

Context: The hotel staff were very kind and helpful. Our room was large but a little sparse. The garden is really nice in the courtyard. The location is perfect, next door to the station.
Language Model: If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. But if you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, you
No Meta-Learning: We didn’t eat at the hotel so can’t comment
Learn2Write: stay there again in a heartbeat. Catermeran is a great place to stay.
Human: There are no stairs to the hotel but quite a few in the hotel. Restaurants right outside the door. The only problem would be for people who are not early risers, the church bells go early ( about 7am ) are loud and go on for a while ( a few minutes ). We used it as an alarm, which was fine for us but some may not like it. The breakfast was a little adverage but no un-eatable.

Context: We spent 2 nights at hotel abbazia in late march with our teenage children. The location couldn’t be better. . . . So near to the train station and the bus station. We also loved the fact that we were away from the crowds around st. marks square ( nice place to visit, but i wouldn’t want to live there : ) the hotel provided a good breakfast and there were some great restaurants in the area. ” La zucca ” was our favorite but there was another that the hotel suggested ( right across and down from the hotel towards the canal. . . Can’t remember the name ) that was also very good.
Language Model: The staff were very friendly and helpful. Breakfast was included in the price and was very good. Overall, i would recommend this hotel and would
No Meta-Learning: The staff was very friendly and helpful. The room was very clean and the bed was very comfortable. I would definitely stay here again and would recommend it to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart
Learn2Write: The room was clean and the bed was comfortable.
Human: We also loved the pizza at ” antico forno ” near the rialto bridge. The staff at abbazia are fantastic. . . Very friendly and very helpful. The rooms are more than adequate and very clean. We had a family room that didn’t have a stand-up shower and that was probably the only minor negative i can think of ( a little tough for my 6’ 4 ” son to cram into the tub ) ps loved waking up to the church bells. . . . No alarm needed!

Context: After reading multiple reviews for liki tiki village i was a bit nervous about staying there. We loved every minute. I wish that we had more time to enjoy the pools, but we were visiting the parks most of the time. The rooms were clean and the kitchen well stocked.
Language Model: The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in the heart of the city. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. If you
No Meta-Learning: Our 6euros was multi-stop, vegatarian vegatarian vegatarian 6euros
Learn2Write: The rooms were clean and the kitchen well stocked. The staff were very friendly and helpfu! I would definately recommend this hotel to anyone who is looking for a place to stay in the heart of san diego. Thank you to all of the staff for making our stay so enjoyable!
Human: We did go to the time share presentation but we left almost immediately since we are owners elsewhere. I guess they figured out that they would be wasting their time. Great to have a washer and dryer. Also great to have a seperate room for the kids. The grounds were clean and we felt safe. Checkin was a little confusing since they gave us a map of the property and then wrote all over it in black pen so that you could not see the map. Overall it was a great experience. Close to the grocery store and to target. A little difficult to find but the gps got us there. Great for the price and amenities.

Context: The location is amazingly convenient. Just 3min walk from the bus station or 1min walk from the train station and less than a 100m from the canal grande. I liked the fact that it’s at the start of the city so that you begin your venice experience slowly. I recommend walking to the san marco piazza on your first time ( make sure you wear comfortable shoes ).
Language Model: The hotel is in a great location. The room was clean and comfortable. The staff was very friendly and helpful. The breakfast was very good. The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay while in
No Meta-Learning: The staff were very friendly and helpful and the breakfast buffet
Learn2Write: The rooms are spacious and clean. Breakfast is very good.
Human: Then you exit the alley and face the magnificent san marco piazza and trust me, you will be pleasantly shocked. What an experience!!! : ) ) the abbazia rooms are decent and very clean. Breakfast is poor but adequate and wi-fi is free. The garden is very peaceful and offers some very relaxing moments. I was worried about noises from the train station next door but you can’t hear a thing so no problem there. The guys at the reception are amazing. Very friendly and very helpful : ) ) what you want from a hotel in venice is a decent place to sleep, have a relaxing bath and some breakfast in the morning. From then on you will be spending all your time in town anyway so fo me the abbazia hotel was an excellent choice and i will go back for sure. Price is not cheap, but nothing is cheap in venice anyway.
B MODEL DETAILS AND HYPERPARAMETERS
Base language model We use a 2-layer RNN language model with 1024 GRU cells per layer. Embedding vectors are of length 1024. Following Inan et al. (2017) we tie the input and output embedding layers’ parameters. To regularize, we apply dropout (Srivastava et al., 2014) to cells in the output layer of the first layer with probability 0.2. We use mini-batch stochastic gradient descent (SGD) and anneal the learning rate regularly when the validation set performance fails to improve. Learning rate, annealing rate, and batch size were tuned on the validation set for each dataset. Gradients are backpropagated 35 time steps and clipped to a maximum value of 0.25.
Entailment model Dropout is performed on both the input word embeddings and the MLP hidden layer with rate 0.5. Training is performed with Adam (Kingma & Ba, 2015) with learning rate 0.0005, batch size 128.
Relevance model The convolutional layer is a 1D convolution with filter size 3 and stride 1; the sequences are padded so that the sequence output is the same length as the input. The model is trained with Adam, learning rate 0.001 and dropout is applied before the final linear layer with rate 0.5.
Meta-weight learning Training is performed with SGD with a learning rate of 1.

Review
This paper argues that the objective of RNN language models is not expressive enough to capture good generation quality. To address the problems of RNNs in generating language, the paper combines the RNN language model with several other discriminatively trained models, and the weight for each sub-model is learned through beam search decoding.
I like the idea of using Grice’s Maxims of communication to improve language generation. Human evaluation shows significant improvement over the baseline. I have some detailed comments as follows:
- The repetition model uses samples from the base RNN as negative examples. More analysis is needed to show this is a good negative sampling method.
- As Section 3.2.3 explains, “the unwanted entailment cases include repetitions and paraphrasing”. Does this mean the entailment model also handles the repetition problem? Do we still need a separate repetition model? How about a separate paraphrasing model?
- Equation 6 and the related text are not very clearly presented. It would be better to add more intuition and a clearer explanation.
- In Table 2, the automated BLEU score of the L2W algorithm for TripAdvisor is very low (0.34 against 24.11). Is this normal? More explanation is needed here.
- For human judgement, how many scores does each example get? It would be better to get multiple workers on M-Turk to label the same example, and compute the mean and variance. One score per example may not be reliable.
- It would be interesting to see a deeper analysis of how each model in the objective influences the actual language generation.
ICLR | Title
Learning to Write by Learning the Objective
Abstract
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
N/A
Recurrent Neural Networks (RNNs) are powerful autoregressive sequence models for learning prevalent patterns in natural language. Yet language generated by RNNs often shows several degenerate characteristics that are uncommon in human language; while fluent, RNN language production can be overly generic, repetitive, and even self-contradictory. We postulate that the objective function optimized by RNN language models, which amounts to the overall perplexity of a text, is not expressive enough to capture the abstract qualities of good generation such as Grice’s Maxims. In this paper, we introduce a general learning framework that can construct a decoding objective better suited for generation. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address the limitations of RNN generation. Human evaluation demonstrates that text generated by the resulting system is preferred over that of baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
1 INTRODUCTION
Recurrent Neural Network (RNN) based language models such as Long Short-Term Memory Networks (LSTMs) (Hochreiter & Schmidhuber, 1997) and Gated Recurrent Units (GRUs) (Cho et al., 2014) have achieved enormous success across a variety of language tasks due to their ability to learn fluency patterns in natural language (Jozefowicz et al., 2016; Kim et al., 2016; Mikolov et al., 2010). When used as a generator, however, the quality of language generated from RNNs deviates drastically from that of human language. While fluent, RNN-produced language displays several degenerate characteristics, favoring generic and contentless output that tends to be repetitive and self-contradictory. These issues are especially prominent when RNNs are used for open-ended, long-form text generation, as illustrated in Figure 1.
RNNs model the conditional probability P (xt|x1, ..., xt−1) of generating the next word xt given all previous words observed or generated. In theory, this conditional model should be able to learn all crucial aspects of human language production, for example, that we don’t normally repeat the same content over and over. In practice, however, the learned conditional probability model often assigns higher probability to a repetitive, overly generic sentence than to higher quality sentences, as shown in Figure 1. We postulate that this is in part because the network architectures of RNN variants do not provide a strong enough inductive bias for the model to learn the complex communication goals pursued in human writing. In addition, long-term context easily gets lost as it is explained away in the presence of more immediately relevant short-term context (Yu et al., 2017), and as gradients diminish over a long sequence (Pascanu et al., 2013). Consequently, RNNs acquire relatively shallow and myopic patterns, which tend to only take advantage of a small fraction of the training set vocabulary Kiddon et al. (2016). RNNs are thus unable to generate language that matches the complexity and coherence of human generated text.
Several methods in the literature attempt to mitigate these issues. Overly simple and generic generation can be improved by using a diversity-boosting objective function (Shao et al., 2017; Vijayakumar et al., 2016). Repetitive generation can be reduced by prohibiting recurrence of the same trigrams as a hard rule (Paulus et al., 2017). Although such constraints form a partial solution, they
All in all, I would highly recommend this hotel to anyone who wants to be in the heart of the action, and want to be in the heart of the action. If you want to be in the heart of the action, this is not the place for you. However, If you want to be in the middle of the action, this is the place to be.
Figure 1: A Trip Advisor review generated by an RNN based LM trained on over a million reviews.
are generally too coarse and both penalize good behavior (e.g. reuse of an idiom) and fail to capture more complex bad behavior (e.g. paraphrasing of the same content again and again).
Hand tailoring rules is both time consuming and unstable across different generative scenarios, so we instead propose a general learning framework to construct a better decoding objective. Starting with a generatively trained RNN language model, our framework learns to construct a substantially stronger generator by combining several discriminatively trained models that can collectively address limitations of the base RNN generator. Our learning framework therefore generalizes over various existing modifications to the decoding objective. Our approach learns to overcome the particular limitations of the RNN generator directly by incorporating language generated from RNNs as negative samples to discriminatively train several companion models, each specializing in a different aspect of Grice’s Maxims of communication (Grice et al. (1975)).
Empirical results demonstrate that our learning framework is highly effective in converting a generic RNN language model into a substantially stronger generator. Human evaluation confirms that language generated by our model is preferred over that of competitive baselines by a large margin and significantly enhances the overall coherence, style, and information content of the generated text.
2 BACKGROUND: GRICE’S MAXIMS
We motivate our learning framework using Grice’s Maxims of communication (Grice et al., 1975):
1. Quantity: Make your contribution as informative as required, and no more than necessary. RNN generations tend to violate this maxim as they are often overly short and generic, and therefore less informative than desired. When encouraged to generate longer text, RNNs easily repeat themselves, which also works against conveying the right amount of information. This observation motivates the length and repetition models in §3.2.1 and §3.2.2.
2. Quality: Do not say what you believe to be false. When used for generation with long term context, RNNs often generate text that is self-contradictory. We propose an entailment model to address this problem in §3.2.3.
3. Relation: Be relevant. RNNs used for long generation can start digressing as the long-term context gets easily washed out. We address this problem by proposing two different relevance models in §3.2.4 and §3.2.5.
4. Manner: Avoid obscurity of expression. Avoid ambiguity. Be brief. Be orderly. As RNN generation favors highly probable but generic words and phrases, it lacks adequate style and specificity. We address this issue with the lexical style model in §3.2.6.
3 THE LEARNING FRAMEWORK
We propose a general learning framework for conditional language generation of sequence y given context x. The decoding objective for generation takes the general form:
f(x,y) = log(Plm(y|x)) + ∑ k λksk(x,y). (1)
This objective combines the RNN language model probability Plm (§3.1) with a set of additional scores sk(x,y) produced by discriminatively trained communication models (§3.2) and weighted
with learned mixture coefficients λk (§3.3). This corresponds to a Product of Experts (PoE) model (Hinton, 2006) when the scores sk are log probabilities. Further model and hyperparameter details are given in appendix B.
Generation is performed using beam search (§3.4), scoring partially generated candidate generations y1:i at each time step i. The RNN language model decomposes into per-word probabilities using the chain rule. However, in order to allow for more expressivity over long range context we do not require the discriminative model scores to factorize over the elements of y, thereby addressing a key limitation of RNNs. More specifically, we use an estimated score s′k(x,y1:i) that can be computed for any prefix of y = y1:n to approximate the objective during beam search, such that s′k(x,y1:n) = sk(x,y). The scoring functions are trained on prefixes of y (except where stated otherwise) to simulate their application to rank partial continuations during beam search.
3.1 BASE LANGUAGE MODEL
We train an RNN language model to estimate logPlm(s) = ∑ i logPlm(si|s1:i−1). (2)
The language model treats the context x and the continuation y as a single sequence s, in contrast to sequence to sequence models which distinguish between them.
3.2 COMPOSITE COMMUNICATION MODELS
Next we introduce a set of models motivated by Grice’s Maxims of communication. Each model is trained to discriminate between good and bad generation using a ranking objective, so that its model scores have the effect of re-ranking in the decoder. We vary the model parameterization and training examples to guide each model to focus on different aspects of Grice’s Maxims. The classifier scores are interpreted as classification probabilities and added to the objective function as log probabilities.
Let D = {(x1,y1), . . . (xn,yn)} be the set of training examples for conditional generation. Dx denote all contexts and Dy all continuations.
In all models the first layer embeds words into 300-dimensional vectors initialized with GloVe (Pennington et al., 2014) embeddings, which are fine-tuned during training unless stated otherwise. The dimensions of hidden layers are also 300. Let e(w) be the word embedding of word w.
3.2.1 LENGTH MODEL
RNNs tend to bias toward shorter generation even with a highly expressive network structure with attention, thus length normalization is still a common practice (Wu et al., 2016). We use a geometrically decaying length reward. We tune the initial value and decaying rate on the final systems, and use the same value for all systems with the initial value and decay rate being 1 and 0.9 respectively. The score per a time step (for a partially generated completion) is thus
slen(y1:i) = 0.9 i. (3)
3.2.2 REPETITION MODEL
The goal of this model is to learn to distinguish between RNN-generated and gold continuations by exploiting our empirical observation that repetitions are more common in completions generated by the RNN language model. We quantify the claim that samples from the base RNN is an appropriate source of negative examples for repetition detection on our datasets in §4.1. However, we do not want to completely discourage repetition, as words sometimes do recur in English. Thus we model natural levels of repetition as follows.
First a score di is computed for each token in the continuation y based on pairwise cosine similarity of word embeddings within a fixed window of the previous k words:
di = max j=i−k...i−1 (CosSim(e(yi), e(yj))). (4)
This score can be interpreted as the repetition “strength” of the sequence at position i, where di = 1 for all positions at which a word is repeated within k + 1 words of its previous use.
The score of the entire continuation is then defined as srep(y) = σ(w > r RNN(d)), (5)
where RNN(d) is the final state of a unidirectional RNN ran over the similarity scores and wr is a learned vector. The model is trained to maximize binary cross-entropy, with gold continuations as positive examples and samples from the base RNN as negative examples:
Lrep = ∑ y∈Dy log srep(y) + ∑ y′∼Plm(y′) log(1− srep(y′)). (6)
Word embeddings are kept fixed during training for this model.
3.2.3 ENTAILMENT MODEL
The goal of the entailment model is to guide the generator to neither contradict its own past generation (the maxim of Quality) nor state something that readily follows from the context (the maxim of Quantity). The latter case is driven by the RNNs habit of paraphrasing itself (alongside its tendency for more direct repetitions, which is captured by the repetition model). We train a classifier using the MultiSNLI dataset (Williams et al., 2017) that takes two sentences a and b as input and predicts the relation between them as either contradiction, entailment or neutral. We use the score of the neutral class probability of the sentence pair in order to discourage both contradiction and entailment.
We use a continuous bag-of-words model similar to previously reported NLI baselines (Mou et al., 2016). Sentence representations are obtained by summing word embeddings (which are fine-tuned during training), such that r(a) = ∑ i e(ai) and r(b) = ∑ i e(bi). An MLP classifier with a single tanh hidden layer takes as input concatenations of the two sentence embeddings, along with their element-wise difference and multiplication; the output is a three-way softmax. The log probability of the neural class is t(a,b). The classifier obtains 63% validation accuracy.
In contrast to our other communication models, this classifier cannot be applied directly to the full context and continuation sequences it is scoring. Instead every completed sentence in the continuation should be scored against all preceding sentences in both the context and continuation. Let S(y1:i) be the set of complete sentences in y1:i, and S−1(y1:i) the last complete sentence. We compute the entailment score of S−1(y1:i) against all preceding sentences in x and y, and use the score of the sentence-pair for which we have the least confidence that entailment is neutral:
sentail(x,y) = mina∈S(x)∪S:−1(y)t(a, S−1(y)). (7)
In contrast to our other models, the score this model returns for any given continuation only corresponds to a subsequence of the continuation, but as we will see below (§3.4) due to the way generation is performed with beam search this score will not be accumulated across sentences.
3.2.4 RELEVANCE MODEL
The purpose of the relevance model is to predict whether the content of a candidate continuation is relevant to the given context. We train the model to distinguish between true continuations and random continuations sampled from other (human-written) endings in the corpus, conditioned on the given context. A single convolutional layer is applied to both the context and candidate embedding sequences. The scoring function is defined as
srel = w T l · (maxpool(conv(e(x))) ◦maxpool(conv(e(y)))), (8)
where 1D maxpooling is performed over each sequence to obtain a vector representing its most important semantic dimensions. Element-wise multiplication of the context and continuation vectors will amplify similarities between the two.
We optimize the following ranking log likelihood:
Lrel = k∑ j=1 ∑ (x,yt)∈D,yr∼Dy log σ(srel(x,yt)− srel(x,yr)), (9)
i.e., the probability of the true ending yt receiving a higher score than the random ending yr. For each context and true continuation we extract k = 5 randomly selected endings as negative training examples. The model ranks true endings higher than randomly selected ones with 85% accuracy on the validation set. The trained scores do not correspond to probabilities but we scale with the logistic (sigmoid) function for compatibility with other scoring modules.
3.2.5 WORKING VOCABULARY MODEL
Given even a fragment of context (e.g. “We were excited about our beach getaway...”) certain topics become more relevant (e.g. “swimming”, “relax”, “romantic”). To capture this intuition, we predict a centroid in the embedding space from the context, which describes a neighborhood of the embedding space we expect the continuation to intersect. The score is computed as
s_voc(x, y_{1:n}) = ‖ RNN(x)/‖RNN(x)‖_2 − (1/n) ∑_{i=1}^{n} e(y_i)/‖e(y_i)‖_2 ‖_2   (10)
where RNN(x) is the final state of an RNN trained end-to-end with s_voc using a discriminative loss:
L_voc = ∑_{(x,y_t)∈D, y_r∼D_y} log(2 + s_voc(x, y_t) − s_voc(x, y_r)).   (11)
During decoding, this score is combined with the language model before sampling next word candidates, while the other models are used to rescore candidates (see §3.4). The reason for this is that this model improves diversity in the next word distribution, while encouraging continuation of topics in the context.
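A sketch of the score of Eq. 10 as reconstructed above, with the final context RNN state acting as the predicted centroid; all shapes and names are our assumptions:

import torch.nn.functional as F

def working_vocab_score(rnn_state, cont_embs):
    """Eq. 10 (reconstructed): distance between the normalized centroid
    RNN(x)/||RNN(x)|| and the mean of the normalized continuation
    word embeddings; smaller means the continuation stays in the
    predicted neighborhood."""
    centroid = F.normalize(rnn_state, dim=-1)              # (batch, d)
    mean_emb = F.normalize(cont_embs, dim=-1).mean(dim=1)  # (batch, n, d) -> (batch, d)
    return (centroid - mean_emb).norm(dim=-1)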
3.2.6 LEXICAL STYLE MODEL
The goal of the lexical style model is to learn stylistic aspects of desirable writing based on observed lexical distributions. The scoring function is defined as
s_bow(y) = w_s^T maxpool(e(y)).   (12)
The model is trained with a ranking criterion similar to the relevance model's, except that the negative examples are sampled from the language model:
L_bow = ∑_{y_t∈D_y, y_s∼P_lm(y_s)} log σ(s_bow(y_t) − s_bow(y_s)).   (13)
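A sketch of Eqs. 12–13, again negated for minimization; the negative sample embeddings are assumed to come from pre-drawn language-model samples:

import torch.nn.functional as F

def style_score(w_s, cont_embs):
    """Eq. 12: max-pool the continuation embeddings over time, project with w_s."""
    return cont_embs.max(dim=1).values @ w_s

def style_ranking_loss(w_s, true_embs, lm_sample_embs):
    """Negative of Eq. 13: true endings should outscore LM samples."""
    return -F.logsigmoid(style_score(w_s, true_embs) - style_score(w_s, lm_sample_embs)).sum()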
3.3 MIXTURE WEIGHT LEARNING
Once all the communication models have been trained, we learn the combined objective. In particular we learn the weight coefficients λk in equation 1 to linearly combine the scoring functions, using a discriminative objective
L_mix = ∑_{(x,y)∈D} log σ(f(x, y) − f(x, A(x))),   (14)
where A is the inference algorithm for beam search decoding. The objective learns to rank the gold continuation above the continuation predicted by the current model. The full model is therefore trained discriminatively to classify sequences, and the same model is used to generate the sequences.
For each training iteration inference is performed based on the current values of λ. This has the effect that the objective function changes dynamically during training: As the current samples from the model are used to update the mixture weights, it creates its own learning signal by applying the generative model discriminatively.
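One step of the mixture-weight update (Eq. 14) might look as follows, assuming the per-model scores of the gold continuation and of the beam-search output have already been computed as k-dimensional tensors; this is a sketch, not the paper's code:

import torch
import torch.nn.functional as F

def mixture_weight_step(lam, scores_gold, scores_beam, lr=1.0):
    """Maximize log sigma(f(x, y) - f(x, A(x))) with f = lam . s (Eq. 14)."""
    lam = lam.detach().requires_grad_(True)
    loss = -F.logsigmoid(lam @ scores_gold - lam @ scores_beam)
    loss.backward()
    with torch.no_grad():
        lam -= lr * lam.grad  # plain SGD on the mixture weights
    return lam.detach()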
Data: context x = x_1 · · · x_n, beam size k, mixture weights λ
Result: best continuation
best = None
beam = [x]
for step = 0; step < max_steps; step = step + 1 do
    next_beam = []
    for candidate in beam do
        expand(next_beam, next_k(candidate, λ))
        if lookahead_score(candidate.append(termination_token)) > best.score then
            best = candidate.append(termination_token)
        end
    end
    for candidate in next_beam do
        candidate.score += f_λ(candidate)        ▷ score with models
    end
    beam = sort(next_beam, key=score)[:k]        ▷ select top k candidates
end
if learning then
    update λ with gradient descent by comparing beam against the gold
end
return best

Algorithm 1: Inference/Learning in the Learning to Write framework.
3.4 BEAM SEARCH
Due to the limitations of greedy decoding and the fact that our scoring functions do not decompose across time steps, we perform generation with a beam search procedure, shown in Algorithm 1. The naive approach would be to perform beam search based only on the language model, and then rescore the k best candidate completions with our full model. We found that this approach leads to limited diversity in the beam and therefore cannot exploit the strengths of the full model.
Therefore we rescore the current hypotheses in the beam with the full (partial) objective function and keep the k best candidate sequences after expanding to the next word. To further increase diversity we sample k candidate next words from the distribution when expanding a hypothesis, instead of obtaining the k highest scoring next words. In our experiments we generate text using a beam size of 8.
Two modules are handled differently during beam search: First, the working vocabulary score is integrated before sampling next word candidates, i.e., inside the next_k call. Second, the entailment score of a candidate is recomputed only when sentence-terminating punctuation is generated; otherwise the current entailment score is re-used. Due to the nature of beam search, the entailment score is not accumulated across sentences; rather, we assume that the effect of the last sentence's score will already be reflected in the content of the beam when the next sentence is scored. While the first sentence is generated, s′_entail takes an initial constant value to avoid bias towards incomplete sentences.
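For illustration, the inference part of Algorithm 1 can be sketched in Python as below; lm.next_token_dist (a next-token distribution with the working-vocabulary score already mixed in) and score_fn (the λ-weighted combination of the language model and the remaining communication-model scores) are assumed interfaces, not the paper's API:

import torch

def l2w_beam_search(lm, score_fn, context, eos, k=8, max_steps=150):
    """Sample-and-rescore beam search (Algorithm 1, inference only)."""
    best, best_score = None, float("-inf")
    beam = [list(context)]
    for _ in range(max_steps):
        next_beam = []
        for seq in beam:
            # check whether terminating now beats the current best ending
            terminated = seq + [eos]
            s = score_fn(terminated)
            if s > best_score:
                best, best_score = terminated, s
            # sample k candidate next words (instead of arg-top-k) for diversity
            probs = lm.next_token_dist(seq)
            for tok in torch.multinomial(probs, k, replacement=False):
                next_beam.append(seq + [int(tok)])
        # rescore expanded hypotheses with the full objective and keep the top k
        beam = sorted(next_beam, key=score_fn, reverse=True)[:k]
    return best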
4 EXPERIMENTS
4.1 EXPERIMENTAL SETUP
Corpora We use two corpora for evaluation. The first is TripAdvisor reviews.1 We use only reviews that have at least 5 sentences, using the first 4 as the context and the remainder as the ending to be generated. There are on average 11 sentences per review. We use the first 1M reviews for training. The second is the ROCStory corpus (Mostafazadeh et al., 2016), which is a collection of crowdsourced commonsense short stories, consisting of five sentences each (98k stories in the training set). We use this corpus to train our model to predict a coherent final sentence given the first four.
For the (larger) TripAdvisor corpus, we train the language model on that corpus alone. For the ROCStory corpus, we pretrain the model for 420M tokens on the Toronto Books corpus2 before training on the ROCStory text.
¹TripAdvisor corpus: http://times.cs.uiuc.edu/~wang296/Data/
²Toronto Books corpus: http://yknzhu.wixsite.com/mbweb
Models We use the language model from Section 3.1 (a 2-layer RNN with 1024 GRU cells per layer) as our reference baseline. We include three variants of our model: L2W (NO META-LEARN) uses a uniform mixture of scores for the learned sub-objectives (no meta-weight learning); L2W (NO WK. VOCAB) omits the working vocabulary model (Section 3.2.5) but uses mixture weight learning for the other components; and L2W (FULL) is our full model.
To analyze the suitability of the reference endings and base language model samples to train the repetition model, we define a repetition ratio as the ratio of total words to unique words. We found that ROCStory endings generated by the language model have a higher repetition ratio than the reference endings (1.12 vs. 1.02). On TripAdvisor the difference is more pronounced, with a ratio of 1.48 for the language model against 1.17 for the references.
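The repetition ratio used here is simply:

def repetition_ratio(text):
    """Ratio of total words to unique words; 1.0 means no word repeats."""
    words = text.lower().split()
    return len(words) / len(set(words))

# e.g. repetition_ratio("the cat sat on the mat") == 6 / 5 == 1.2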
4.2 EVALUATION SETUP
Previous work has reported that automatic measures such as BLEU, ROUGE, and Meteor, do not lead to meaningful evaluation when used for long or creative text generation where there can be high variance among correct generation output (Wiseman et al., 2017b; Vedantam et al., 2015). For open-ended generation tasks such as our own, human evaluation is the only reliable measure (Li et al., 2016b; Wiseman et al., 2017b). However, we also report the automatic measures to echo the previous observation regarding the mismatch between automatic and human evaluation. Additionally we provide multiple generation samples that give insights into the characteristics of different models.
We pose the evaluation of our model as the task of generating an appropriate ending given an initial context of n sentences. For the purposes of automatic evaluation, the ending is compared against a gold reference ending that was written by a human (drawn from the original corpus). For human evaluation, the ending is presented to a human, who assesses the text according to several criteria, which are closely inspired by Grice’s Maxims:
1. Quantity: Does the generation avoid repetition?
2. Quality: Does the generation contradict itself (or the initial context)?
3. Relation: Does the generation appropriately continue the initial context?
4. Manner: Does the generation use the same style as the initial context?
Finally, a Turing test question is presented to each judge: was the ending written by a human?
For both automatic and human evaluation, we select 1000 passages from the test set of each corpus and generate endings using all models, using the initial n = 4 sentences as context. Automatic
evaluation scores each generated ending against its reference ending. To conduct human evaluation, we present the initial context with the generation from each of the models individually to workers on Mechanical Turk, who score them using the above criteria. As a control, we also ask workers to score the original (human-written) reference endings for each selected passage.
5 RESULTS AND ANALYSIS
5.1 QUANTITATIVE ANALYSIS
Results for both automatic and human evaluation metrics are presented for both corpora in Table 2.
For the TripAdvisor corpus, the language model baseline does best on all of the automated metrics. However, on all human metrics, the L2W (NO WK. VOCAB) model scores higher, and achieves nearly twice the percentage of passed Turing tests (75% compared to 39%). It scores even better than the L2W (FULL) model. Manual inspection reveals that for the TripAdvisor task, the topical focus and lexical specificity given by vocabulary scoring actually lead to slightly worse model behavior. In TripAdvisor reviews, many possible sentences could logically follow a review's context. A well-behaved but generic language generator is better able to leave judges unsurprised, if also uninformed. The full model brings additional focus to the generations, but at the cost of a reduction in overall coherence.
In the ROCStory generation task the language model is more competitive with L2W (FULL). The raw language model is able to take advantage of the simpler data inherent in this task: ROCStories are shorter (the endings are always a single sentence) and have a smaller vocabulary. However, the L2W full model still performs the best on all human metrics. The ROCStory corpus presents specific contexts that require a narrower set of responses to maintain coherency. The L2W (FULL) takes advantage of reranking by matching the topic of the context, yielding a higher ‘Relation’ score and more Turing test passes overall.
The very low BLEU scores observed in our results in the TripAdvisor domain are an artifact of the BLEU metric's length penalty. The average length of reference completions is 12 sentences, which is much longer than the average length of endings generated by our Learning to Write models. This causes the length penalty to drive down the scores, despite our observation that there is still a significant amount of word and phrase overlap. The completions generated by the base language model are longer on average (as it tends to repeat itself over and over) and therefore do not suffer from this problem.
5.2 QUALITATIVE ANALYSIS
Learning to Write (L2W) generations are more topical and cohere better with the context. Table 1 shows that both the L2W system and the classic RNN start similarly, commenting on the hotel
staff and the room. L2W is able to condense the same content into fewer words, following the context’s style. Beginning with the third sentence, L2W and the LM diverge more significantly. The LM makes a generic comment: “The location is perfect,” while L2W goes on to list specific items it enjoyed at breakfast, which was mentioned in the context. Furthermore, the LM becomes “stuck” repeating itself—a common problem for RNNs in general—whereas the L2W system finds a natural ending.
The L2W models do not fix every degenerate characteristic of RNNs. The TripAdvisor L2W generation in Table 1, while coherent and diverse, leaves room for improvement. The need to relate to topical phrases can override fluency, as with the long list of “food, fresh fruit, juices, coffee, tea, hot chocolate.” Furthermore phrases such as “friendly and helpful” and “clean and comfortable” occur in the majority of generations. These phrases, while common, tend to have semantics that are expressed in many different surface forms in real writing (e.g., “the staff were kind and quick to respond”). Appendix A presents sample generations from all models on both corpora for further inspection.
6 RELATED WORK
Generation with Long-term Context While relatively less studied, there have been several attempts at RNN-based paragraph generation for multiple domains, including image captions (Krause et al., 2017), product reviews (Lipton et al., 2015; Dong et al., 2017), sports reports (Wiseman et al., 2017a), and recipes (Kiddon et al., 2016). Most of these approaches are based on sequence-to-sequence (seq-to-seq) type architectures. One effective way to avoid the degenerative characteristics of RNNs under seq-to-seq is to augment the input with sufficient details and structure that can guide the output generation through attention and a copying mechanism. However, for many NLP generation tasks, it is not easy to obtain the large-scale training data required to support a robust sequence-to-sequence model. For example, a recent dataset for image-to-paragraph generation contains 14,575 training pairs (Krause et al., 2017), and state-of-the-art hierarchical models, while improving over strong baselines, still lead to generation that repeats, contradicts, and is generic.
Alternative Decoding Objectives The tendency of RNN generation to be short, bland, repetitive, and contradictory has been noted multiple times in prior literature. A number of papers propose approaches that can be categorized as alternative decoding objectives (Shao et al., 2017). For example, Li et al. (2016a) propose a diversity-promoting objective that interpolates the conditional probability score with negative marginal or reverse conditional probabilities. Unlike the negative marginal term that blindly penalizes generic generation, all our communication models are contextual in that they compare the context with continuation candidates. Incorporating the reverse conditional probability has also been proposed through the noisy channel models of Yu et al. (2017). The clear benefit of the reverse conditional probability model is that it can prevent the explaining-away problem of the long-term context. However, decoding with the reverse conditional probability is significantly more expensive, making it impractical for paragraph generation. In addition, neither of the above approaches allows for learning models as expressive as those presented in our work. The communication models presented in our work can be easily integrated into the existing beam search procedure, and each model is lightweight to compute.
The modified decoding objective has long been a common practice in statistical machine translation literature (Koehn et al., 2003; Och, 2003; Watanabe et al., 2007; Chiang et al., 2009). Notably, it still remains a common practice with neural machine translation, even with a highly expressive network structure trained on an extremely large amount of data (Wu et al., 2016). Inspired by all above approaches, our work presents a general learning framework together with a more comprehensive set of composite communication models. Unlike previous approaches, our composite models are trained to directly address the particular limitations of the base RNN models, by integrating RNN generation into the discriminative learning. This allows for customization of the decoding objective to better cope with the undesired behavior of the base language models.
Meta Learning There has been a broad range of literature on meta learning, where the learning goal broadens the search space of learning by targeting to learn model architectures, hyperparameters, and learning algorithms that are typically hand-designed or selected. Andrychowicz et al. (2016), for example, learn the learning rate; Zoph & Le (2016) train a neural network to generate neural network architectures; and Snoek et al. (2012) propose Bayesian optimization to learn hyperparameters that cannot be optimized with gradient descent. The learning framework we presented here can be interpreted as a type of meta learning in a broad sense in that there are multiple layers of learning: learning the base RNN language model, then learning a collection of composite communication models that are customized to the particular behavior of the base model, and finally the generator that learns to combine all sub-components.
7 CONCLUSION
Our work presents a unified learning framework that can learn to generate long, coherent text, overcoming the limitations of RNNs as text generation models. Our framework learns a decoding objective suitable for generation through a combination of sub-models that capture linguistically motivated qualities of good writing. Our work makes a unique contribution that complements existing literature on long text generation that is predominantly based on seq-to-seq models with a large amount of in-domain training data; we demonstrate that the fluency of general RNN language models can be successfully guided to generate longer and more sensible text, which can be useful for domains where in-domain data is not sufficient to support seq-to-seq type training.
We propose a general framework for learning a decoding objective in two parts: learning component models to rank candidate generations, which are motivated by different aspects of Grice's Maxims, and learning a weighting scheme that balances the influence of each of these scoring functions. This framework is amenable to any number and kind of sub-objectives, allowing for multiple qualities of good writing to be modeled across diverse domains. Human evaluation shows that the quality of the text produced by our model exceeds that of RNN baselines by a large margin and the generations score significantly higher on a Turing test evaluation.
A EXAMPLE GENERATIONS
ROCStory

Context: Gina was bored in detention. She decided to text her friend mary across the room. She knew the mean teacher wouldn't approve. She kept her phone in her purse to hide it from the teacher.
Language Model: She was so excited that she couldn't find her phone.
No Meta-Learning: Mary asked her mother if she could asked her if she was students. Teacher
Learn2Write: Mary was relieved when her friend came over.
Human: Mary texted back.
Context: Jack has a big garden. He weeds it every day. Once a week he takes veggies to the market. Jack waters his garden in the morning.
Language Model: He was so excited that he couldnt wait to go back to the store.
No Meta-Learning: Trees grow trees trees and fruit trees. Vegetables
Learn2Write: He loves his garden more than anything else.
Human: His veggies grow large and plentiful.
Context: Samantha wanted to watch a scary movie, but her mother said no. Her mother left for work, and samantha decided to watch it anyway. The movie was so scary it gave samantha nightmares for weeks! She wished shed never watched it!
Language Model: She was so excited that she couldnt sleep that night.
No Meta-Learning: She was so excited that she couldnt sleep that night.
Learn2Write: She vowed never to watch scary movies again.
Human: Samantha learned her lesson.
Context: My friend was moving away to college. We decided to throw her a party. We all met up with cake and snacks. She was so surprised to see us!
Language Model: She was so excited that she couldn't wait to go back.
No Meta-Learning: Birthday party dinner was celebration! Welcome
Learn2Write: We all hugged her goodbye and went home.
Human: She hugged everyone and cried in joy.
TripAdvisor

Context: The hotel staff were very kind and helpful. Our room was large but a little sparse. The garden is really nice in the courtyard. The location is perfect, next door to the station.
Language Model: If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, this is not the place for you. But if you want to be in the middle of the action, this is not the place for you. If you want to be in the middle of the action, you
No Meta-Learning: We didn't eat at the hotel so can't comment
Learn2Write: stay there again in a heartbeat. Catermeran is a great place to stay.
Human: There are no stairs to the hotel but quite a few in the hotel. Restaurants right outside the door. The only problem would be for people who are not early risers, the church bells go early ( about 7am ) are loud and go on for a while ( a few minutes ). We used it as an alarm, which was fine for us but some may not like it. The breakfast was a little adverage but no un-eatable.
Context: We spent 2 nights at hotel abbazia in late march with our teenage children. The location couldn't be better. . . So near to the train station and the bus station. We also loved the fact that we were away from the crowds around st. marks square ( nice place to visit, but i wouldn't want to live there : ) the hotel provided a good breakfast and there were some great restaurants in the area. ” La zucca ” was our favorite but there was another that the hotel suggested ( right across and down from the hotel towards the canal. . . Can't remember the name ) that was also very good.
Language Model: The staff were very friendly and helpful. Breakfast was included in the price and was very good. Overall, i would recommend this hotel and would
No Meta-Learning: The staff was very friendly and helpful. The room was very clean and the bed was very comfortable. I would definitely stay here again and would recommend it to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to stay in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart of the city. I would recommend this hotel to anyone who wants to be in the heart
Learn2Write: The room was clean and the bed was comfortable.
Human: We also loved the pizza at ” antico forno ” near the rialto bridge. The staff at abbazia are fantastic. . . Very friendly and very helpful. The rooms are more than adequate and very clean. We had a family room that didn't have a stand-up shower and that was probably the only minor negative i can think of ( a little tough for my 6' 4 ” son to cram into the tub ) ps loved waking up to the church bells. . . . No alarm needed!
Context: After reading multiple reviews for liki tiki village i was a bit nervous about staying there. We loved every minute. I wish that we had more time to enjoy the pools, but we were visiting the parks most of the time. The rooms were clean and the kitchen well stocked.
Language Model: The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in the heart of the city. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. I would highly recommend this hotel to anyone who is looking for a place to stay in san diego. If you
No Meta-Learning: Our 6euros was multi-stop, vegatarian vegatarian vegatarian 6euros
Learn2Write: The rooms were clean and the kitchen well stocked. The staff were very friendly and helpfu! I would definately recommend this hotel to anyone who is looking for a place to stay in the heart of san diego. Thank you to all of the staff for making our stay so enjoyable!
Human: We did go to the time share presentation but we left almost immediately since we are owners elsewhere. I guess they figured out that they would be wasting their time. Great to have a washer and dryer. Also great to have a seperate room for the kids. The grounds were clean and we felt safe. Checkin was a little confusing since they gave us a map of the property and then wrote all over it in black pen so that you could not see the map. Overall it was a great experience. Close to the grocery store and to target. A little difficult to find but the gps got us there. Great for the price and amenities.
Context: The location is amazingly convenient. Just 3min walk from the bus station or 1min walk from the train station and less than a 100m from the canal grande. I liked the fact that it's at the start of the city so that you begin your venice experience slowly. I recommend walking to the san marco piazza on your first time ( make sure you wear comfortable shoes ).
Language Model: The hotel is in a great location. The room was clean and comfortable. The staff was very friendly and helpful. The breakfast was very good. The staff was very friendly and helpful. I would recommend this hotel to anyone who is looking for a nice place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay in venice. I would recommend this hotel to anyone who is looking for a place to stay while in
No Meta-Learning: The staff were very friendly and helpful and the breakfast buffet
Learn2Write: The rooms are spacious and clean. Breakfast is very good.
Human: Then you exit the alley and face the magnificent san marco piazza and trust me, you will be pleasantly shocked. What an experience!!! : ) ) the abbazia rooms are decent and very clean. Breakfast is poor but adequate and wi-fi is free. The garden is very peaceful and offers some very relaxing moments. I was worried about noises from the train station next door but you can't hear a thing so no problem there. The guys at the reception are amazing. Very friendly and very helpful : ) ) what you want from a hotel in venice is a decent place to sleep, have a relaxing bath and some breakfast in the morning. From then on you will be spending all your time in town anyway so fo me the abbazia hotel was an excellent choice and i will go back for sure. Price is not cheap, but nothing is cheap in venice anyway.
B MODEL DETAILS AND HYPERPARAMETERS
Base language model We use a 2-layer RNN language model with 1024 GRU cells per layer. Embedding vectors are of length 1024. Following Inan et al. (2017) we tie the input and output embedding layers' parameters. To regularize, we apply dropout (Srivastava et al., 2014) to the cells in the output of the first layer with probability 0.2. We use mini-batch stochastic gradient descent (SGD) and anneal the learning rate regularly when the validation set performance fails to improve. Learning rate, annealing rate, and batch size were tuned on the validation set for each dataset. Gradients are backpropagated through 35 time steps and clipped to a maximum value of 0.25.
Entailment model Dropout is performed on both the input word embeddings and the MLP hidden layer with rate 0.5. Training is performed with Adam (Kingma & Ba, 2015) with learning rate 0.0005, batch size 128.
Relevance model The convolutional layer is a 1D convolution with filter size 3 and stride 1; the sequences are padded so that the sequence output is the same length as the input. The model is trained with Adam, learning rate 0.001 and dropout is applied before the final linear layer with rate 0.5.
Meta-weight learning Training is performed with SGD with a learning rate of 1. | 1. What is the main contribution of the paper, and how does it relate to Grice's maxims of communication?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to implement the Gricean maxims effectively?
3. How does the reviewer assess the quality and consistency of the experimental results presented in the paper?
4. Do you have any concerns or suggestions regarding the implementation of the proposed approach, such as the choice of objectives or the use of pre-trained models?
5. Are there any potential alternative approaches that could have been explored, such as using re-rankers to score the n-best outputs of the decoder? | Review | Review
This paper proposes to improve RNN language model generation using augmented objectives inspired by Grice's maxims of communication. The idea is to combine the standard word-by-word decoding objective with additional objectives that reward sentences following these maxims. The proposed decoding objective is not new; researchers in machine translation
have worked on it referring to it as loss-augmented decoding: http://www.cs.cmu.edu/~nasmith/papers/gimpel+smith.naacl12.pdf
The use of RNNs in this context might be novel though.
Pros:
- Well-motivated and ambitious goals
- Human evaluation conducted on the outputs.
Cons:
- My main concern is that it is unclear whether the models introduced are indeed implementing the Gricean maxims. For example, the repetition model would not only discourage the same word from occurring twice, but would also discourage a similar word (according to the word vectors used) from following another one.
- Similarly, for the entailment model, what is an "obvious" entailment? Not sure we have training data for this in particular. Also, entailment suggests textual cohesion, which is conducive to the relation maxim. If this kind of model is what we need, why not take a state-of-the-art model?
- The results seem to be inconsistent. The working vocabulary doesn't help in the TripAdvisor experiment, while the RNN seems to work very well on the ROCStory data. While there might be good reasons for these, the point for me is that we cannot trust that the models added to the objective do what they are supposed to do.
- Are the negative examples generated for the repetition model checked that they contain repetitions? Shouldn't be difficult to do.
- Would be better to give the formula for the length model; the description conveys the intuition but it is difficult to know exactly what the objective is
- In Algorithm 1, it seems like we fix in advance the max length of the sentence (max steps). Is this the case? If so, why? Also, the proposed learning algorithm only learns how to mix pre-trained models; I am not sure I agree that it learns the objective. It is more of an ensemble.
- As far as I can tell these ideas could have been more simply implemented by training a re-ranker to score the n-best outputs of the decoder. Why not try it? They are very popular in text generation tasks. |
ICLR | Title
Generating Diverse Cooperative Agents by Learning Incompatible Policies
Abstract
Training a robust cooperative agent requires diverse partner agents. However, obtaining those agents is difficult. Previous works aim to learn diverse behaviors by changing the state-action distribution of agents. But, without information about the task’s goal, the diversified agents are not guided to find other important, albeit sub-optimal, solutions: the agents might learn only variations of the same solution. In this work, we propose to learn diverse behaviors via policy compatibility. Conceptually, policy compatibility measures whether policies of interest can coordinate effectively. We theoretically show that incompatible policies are not similar. Thus, policy compatibility—which has been used exclusively as a measure of robustness—can be used as a proxy for learning diverse behaviors. Then, we incorporate the proposed objective into a population-based training scheme to allow concurrent training of multiple agents. Additionally, we use state-action information to induce local variations of each policy. Empirically, the proposed method consistently discovers more solutions than baseline methods across various multi-goal cooperative environments. Finally, in multi-recipe Overcooked, we show that our method produces populations of behaviorally diverse agents, which enables generalist agents trained with such a population to be more robust.
1 INTRODUCTION
Cooperating with unseen agents (e.g., humans) in multi-agent systems is a challenging problem. Current state-of-the-art cooperative multi-agent reinforcement learning (MARL) techniques can produce highly competent agents in cooperative environments (Kuba et al., 2021; Yu et al., 2021). However, those agents are often overfitted to their training partners and cannot coordinate with unseen agents effectively (Carroll et al., 2019; Bard et al., 2020; Hu et al., 2020; Mahajan et al., 2022).
The problem of working with unseen partners, i.e., ad-hoc teamwork problem (Stone et al., 2010), has been tackled in many different ways (Albrecht & Stone, 2018; Carroll et al., 2019; Shih et al., 2020; Gu et al., 2021; Rahman et al., 2021; Zintgraf et al., 2021; He et al., 2022; Mirsky et al., 2022; Parekh et al., 2022). These methods allow an agent to learn how to coordinate with unseen agents and, sometimes, humans. However, the success of these methods depends on the quality of training partners; it has been shown that the diversity of training partners is crucial to the generalization of the agent (Charakorn et al., 2021; Knott et al., 2021; Strouse et al., 2021; McKee et al., 2022; Muglich et al., 2022). In spite of its importance, obtaining a diverse set of partners is still an open problem.
The simplest way to generate training partners is to use hand-crafted policies (Ghosh et al., 2020; Xie et al., 2021; Wang et al., 2022), domain-specific reward shaping (Leibo et al., 2021; Tang et al., 2021; Yu et al., 2023), or multiple runs of the self-play training process (Grover et al., 2018; Strouse et al., 2021). These methods, however, are not scalable nor guaranteed to produce diverse behaviors. Prior works propose techniques aiming to generate diverse agents by changing the state visitation and action distributions (Lucas & Allen, 2022), or joint trajectory distribution of the agents (Mahajan et al., 2019; Lupu et al., 2021). However, as discussed by Lupu et al. (2021), there is a potential drawback of using such information from trajectories to diversify the behaviors. Specifically, agents that make locally different decisions do not necessarily exhibit different high-level behaviors.
To avoid this potential pitfall, we propose an alternative approach for learning diverse behaviors using information about the task’s objective via the expected return. In contrast to previous works that
use joint trajectory distribution to represent behavior, we use policy compatibility instead. Because cooperative environments commonly require all agents to coordinate on the same solution, if the agents have learned different solutions, they cannot coordinate effectively and, thus, are incompatible. Consequently, if an agent discovers a solution that is incompatible with all other agents in a population, then the solution must be unique relative to the population. Based on this reasoning, we introduce a simple but effective training objective that regularizes agents in a population to find solutions that are compatible with their partner agents but incompatible with others in the population. We call this method “Learning Incompatible Policies” (LIPO).
We theoretically show that optimizing the proposed objective will yield a distinct policy. Then, we extend the objective to a population-based training scheme that allows concurrent training of multiple policies. Additionally, we utilize a mutual information (MI) objective to diversify local behaviors of each policy. Empirically, without using any domain knowledge, LIPO can discover more solutions than previous methods under various multi-goal settings. To further study the effectiveness of LIPO in a complex environment, we present a multi-recipe variant of Overcooked and show that LIPO produces behaviorally diverse agents that prefer to complete different cooking recipes. Experimental results across three environments suggest that LIPO is robust to the state and action spaces, the reward structure, and the number of possible solutions. Finally, we find that training generalist agents with a diverse population produced by LIPO yields more robust agents than training with a less diverse baseline population. See our project page at https://bit.ly/marl-lipo
2 PRELIMINARIES
Our main focus lies in fully cooperative environments modeled as decentralized partially observable Markov decision processes (Dec-POMDP, Bernstein et al. (2002)). In this work, we start our investigation in the two-player variant. A two-player Dec-POMDP is defined by a tuple (S,A1,A2,Ω1,Ω2, T,O, r, γ,H), where S is the state space, A ≡ A1 ×A2 and Ω ≡ Ω1 × Ω2 are the joint-action and joint-observation spaces of player 1 and player 2. The transition probability from state s to s′ after taking a joint action (a1, a2) is given by T (s′|s, a1, a2). O(o1, o2|s) is the conditional probability of observing a joint observation (o1, o2) under state s. All players share a common reward function r(s, a1, a2), γ is the reward discount factor and H is the horizon length.
Players, with potentially different observation and action spaces, are controlled by policies π^1 and π^2. At each timestep t, the players observe o_t = (o^1_t, o^2_t) ∼ O(o^1_t, o^2_t | s_t) under state s_t ∈ S and produce a joint action a_t = (a^1_t, a^2_t) ∈ A sampled from the joint policy π(a_t|τ_t) = π^1(a^1_t|τ^1_t) π^2(a^2_t|τ^2_t), where τ^1_t and τ^2_t contain a trajectory history until timestep t from the perspective of each agent. All players receive a shared reward r_t = r(s_t, a^1_t, a^2_t). The return of a joint trajectory τ = (o_0, a_0, r_0, ..., r_{H−1}, o_H) ∈ T ≡ (Ω × A × R)^H can be written as G(τ) = ∑_{t=0}^{H} γ^t r_t. The expected return of a joint policy (π^1, π^2) is J(π^1, π^2) = E_{τ∼ρ(π^1,π^2)} G(τ), where ρ(π^1, π^2) is the distribution over trajectories of the joint policy (π^1, π^2) and P(τ | π^1, π^2) is the probability of τ being sampled from a joint policy (π^1, π^2).
We use subscripts to denote different joint policies and superscripts to refer to different player roles. For example, π_A = (π^1_A, π^2_A) is a different joint policy from π_B = (π^1_B, π^2_B), and π^i_A and π^j_A are policies of different roles.¹ Finally, we denote the expected joint return of self-play (SP) trajectories—where both policies are part of the same joint policy, π_A—as J_SP(π_A) := J(π^1_A, π^2_A) and the expected joint return of cross-play (XP) trajectories—where policies are chosen from different joint policies, π_A and π_B—as J_XP(π_A, π_B) := J(π^1_A, π^2_B) + J(π^1_B, π^2_A).
Since we are interested in creating distinct policies for any Dec-POMDP, we need an environment-agnostic measure that captures the similarity of policies. First, we consider a measure that can compute the similarity between policies of the same role i, e.g., π^i_A and π^i_B. We can measure this with the probability of a joint trajectory τ produced by either π^i_A or π^i_B. However, in the two-player setting, we need to pair these policies with a reference policy π^j_ref. Specifically, π^i_A and π^i_B are considered similar if they are likely to produce the same trajectories when paired with an arbitrary reference policy π^j_ref. We define similar policies as follows:
¹Note that LIPO can be applied to environments with more than two players with a slight modification. Specifically, a policy π^j would represent the joint policy of all players except player i, π^j(a^j_t|τ^j_t) = ∏_{k≠i} π^k(a^k_t|τ^k_t).
Definition 2.1 (Similar policies). Considering two policies of the same role i, π^i_A and π^i_B, and a reference policy π^j_ref of a different role j, π^i_A is similar to π^i_B up to ϵ if and only if max_{τ∈T} |1 − P(τ|π^i_A, π^j_ref) / P(τ|π^i_B, π^j_ref)| ≤ ϵ, where 0 ≤ ϵ ≤ 1.
Next, we consider an alternate view on assessing the similarity between policies using policy compatibility (Section 3). Policy compatibility measures the performance difference of a joint policy π_B before and after one of its policies π^i_B is substituted by another policy π^i_A. We define compatibility between a policy π^i_A and a joint policy π_B as follows:
Definition 2.2 (Compatible policies). Given a policy π^i_A and a joint policy π_B, π^i_A is compatible with π_B if and only if J(π^i_A, π^j_B) ≥ (1 − ϵ) J_SP(π_B).
3 LEARNING INCOMPATIBLE POLICIES (LIPO)
Our goal is to create distinct policies and, therefore, a population of diverse agents. First, we theoretically show that policy compatibility can be used to identify whether two policies are different. Based on this observation, we propose a novel training objective that produces a distinct policy. Then, we extend this objective for training a population of diverse policies. Finally, we incorporate an MI objective that encourages each policy to learn local variations.
3.1 LEARNING A DISTINCT POLICY VIA POLICY COMPATIBILITY
In this section, we motivate our objective by looking at two joint policies: π_A = (π^1_A, π^2_A) and π_B = (π^1_B, π^2_B). The goal is for π_A to learn a different behavior from π_B via the compatibility criterion. Importantly, the compatibility criterion can be computed empirically without direct access to the trajectory distribution, which can be difficult to estimate. Under mild assumptions, we can simplify the setting such that a simple relationship between the similarity measure and the compatibility criterion emerges. By reasoning about the expected return under different pairs of policies, we derive our main result.
Theorem 3.1. If π^i_A is similar to π^i_B, then π^i_A is compatible with π_B. (The proof is in App. A.)
Corollary 3.2. If π^i_A is not compatible with π_B, then π^i_A is not similar to π^i_B.
The result from Corollary 3.2 shows that we can find a policy π^i_A that is not similar to π^i_B by decreasing its compatibility with π_B until they are incompatible, i.e., J(π^i_A, π^j_B) < (1 − ϵ) J_SP(π_B). Additionally, we can ensure that π_A learns a meaningful solution by maximizing J_SP(π_A). Assuming that π_B has learned a solution and is fixed, the optimization objective of π_A can be written as
max_{π_A} J_SP(π_A)   subject to   J(π^i_A, π^j_B) < (1 − ϵ) J_SP(π_B)   ∀ i, j ∈ {1, 2}, i ≠ j   (1)
A way to solve such a constrained problem is to convert the constraints into regularization terms. For simplicity, we use a common λ_XP > 0 as a hyperparameter for the constraints. Then, we can write the soft objective of Eq. 1 as
max_{π_A} J_SP(π_A) − λ_XP J_XP(π_A, π_B)   (2)
3.2 LEARNING A POPULATION OF DIVERSE POLICIES
To create a population of N diverse policies, P = {πA|1 ≤ A ≤ N}, we need an objective that requires each member of the population to have a different behavior relative to the rest of the population. We can write such an objective by expanding the XP term in Eq. (2) to include all other policies in the population. Additionally, we relax the assumption that other policies are fixed to
allow concurrent training of all policies. For a policy π_A ∈ P, with an aggregation function f_agg, its objective becomes
max_{π_A} J_LIPO(π_A, P) = J_SP(π_A) − λ_XP J̃_XP(π_A, P),   (3)
where J̃_XP(π_A, P) = f_agg(B^xp_A),   (4)
B^xp_A = {J_XP(π_A, π_B) | π_B ∈ P_{−A}},   (5)
P_{−A} = P \ {π_A}.   (6)
While using the average operation as the aggregation function is plausible, we find that using the max operation helps stabilize the training process and produces more diverse policies. We suspect that the average operation might produce many conflicting gradients and does not prioritize compatible XP pairs. We refer to J_LIPO as the compatibility gap between a policy π_A and a population P.
We can see that the compatibility gap objective only uses the expected returns (J_SP and J̃_XP) and is insensitive to state and action information. We argue that this distinction between LIPO and previous methods helps the agents discover more solutions in various situations (Sec. 4.1 and 4.5).
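To make the objective concrete, the following is a minimal sketch of the compatibility gap (Eqs. 3–6) with the max aggregator; J(A, B) is an assumed helper that estimates, e.g. from rollouts, the expected return when player 1 of joint policy A is paired with player 2 of joint policy B:

def compatibility_gap(A, population, J, lambda_xp):
    """J_LIPO(pi_A, P) = J_SP(pi_A) - lambda_XP * max over cross-play returns."""
    j_sp = J(A, A)  # self-play return
    xp_returns = [J(A, B) + J(B, A) for B in population if B is not A]
    return j_sp - lambda_xp * max(xp_returns)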
3.3 INDUCING VARIATIONS IN EACH POLICY
It is important to note that, regardless of the population size, there could be policies of role i that are compatible with π_A ∈ P but not similar to π^i_A. We consider those policies to be variations of π^i_A and propose to capture such variations via an MI objective. Specifically, we condition π^i_A on a latent variable z^i such that π_A has the form π_A(a|τ) = E_{(z^1,z^2)}[π^1_A(a^1|τ^1, z^1) π^2_A(a^2|τ^2, z^2)], where p(z^1, z^2) is a pre-defined prior distribution. We can induce variations of π^i_A by maximizing I({o^i, a^i}; z^i), where I(·;·) is the MI between two random variables. Intuitively, this objective encourages each policy to observe different observations and perform different actions given different values of the latent variable. However, maximizing I({o^i, a^i}; z^i) directly is intractable; instead, we optimize the variational lower bound of the MI (Jordan et al., 1999) (see App. B for the derivation)
I({o^i, a^i}; z^i) ≥ H(z^i) + E_{z^i,(o^i,a^i)}[log q_{ϕ_A}(z^i|o^i, a^i)],   (7)
where q_{ϕ_A}(z^i|o^i, a^i) is an approximation of the true posterior p(z^i|o^i, a^i) parameterized by ϕ_A. So, maximizing I({o^1, a^1}; z^1) and I({o^2, a^2}; z^2) is an optimization problem that can be written as
max_{π_A, ϕ_A} (1/2) ∑_{i=1}^{2} [ H(z^i) + E_{z^i,(o^i,a^i)} log q_{ϕ_A}(z^i|o^i, a^i) ]   (8)
In the previous work (Mahajan et al., 2019), a shared z (i.e., z^1 = z^2) is used, allowing both policies to collectively switch between different modes of behavior. However, LIPO uses independently sampled z as it utilizes z for a different purpose. Specifically, LIPO maximizes J_LIPO to learn diverse solutions and optimizes the MI objective to learn variations of each solution. That is, the MI objective does not directly impact the diversity between different policies but increases the variations of each individual policy. We note that the MI objective is optional; we show that without the MI objective, LIPO still produces diverse policies (Sec. 4.4).
3.4 IMPLEMENTATION
In practice, we modify the MI objective (Eq. 8) to be differentiable with respect to the policy π^i_A. Specifically, the variational posterior q_{ϕ_A} is modified such that, instead of a sampled action a^i, it takes the action distribution π^i_A(·|o^i, z^i) as an input, i.e., q_{ϕ_A}(z|o, π^i_A(·|o^i, z^i)). In contrast to previous MI-based approaches (Eysenbach et al., 2018; Sharma et al., 2019; Jiang & Lu, 2021; Lucas & Allen, 2022), we can optimize I({o^i, a^i}; z^i) directly without computing an auxiliary reward (Mahajan et al., 2019; Osa et al., 2022). The loss function of the modified MI objective is
L_MI(π_A, ϕ_A) = −(1/2) ∑_{i=1}^{2} E_{z^i,(o^i,a^i)} log q_{ϕ_A}(z^i|o^i, π^i_A(·|o^i, z^i))   (9)
The objective of a policy π_A in a population P becomes
max_{π_A, ϕ_A} J_LIPO(π_A, P) − λ_MI L_MI(π_A, ϕ_A)   (10)
We set z as a discrete variable and use the uniform distribution for p(z^1) and p(z^2). At the beginning of each episode, each policy is given an independently sampled z that will be used until the end of the episode. We use MAPPO (Yu et al., 2021) for maximizing J_SP and minimizing J̃_XP. More details, including the pseudocode and the extension to more than two players, can be found in App. D.
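A sketch of the modified MI loss (Eq. 9) for one player, assuming a discrete latent and a posterior network that reads the observation together with the policy's action distribution (so gradients flow into the policy); all names and shapes are our assumptions:

import torch.nn.functional as F

def mi_loss(posterior, obs, action_dists, z):
    """One player's term of Eq. 9: the posterior must recover the episode
    latent z from (o, pi(.|o, z)); cross-entropy = -E log q_phi(z | .)."""
    logits = posterior(obs, action_dists)  # (batch, |z|)
    return F.cross_entropy(logits, z)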
4 EXPERIMENTS
We study the effectiveness of LIPO under three multi-goal cooperative environments in which both players must collectively choose to accomplish one of the available goals. We evaluate the diversity of a population based on the number of distinct goals achieved. We compare LIPO to other cooperative MARL methods that do not require domain knowledge to generate diverse agents. Our baselines are as follows: (i) Multi SP (multiple runs of self-play), (ii) SPMI (a single run of SP with an added MI objective), (iii) MAVEN (Mahajan et al., 2019), and (iv) TrajeDi (Lupu et al., 2021). We also use Multi SPMI and Multi MAVEN as baselines by training SPMI and MAVEN multiple times. We discuss methods that utilize domain knowledge in Sec. 5.
4.1 DISCOVERING DIVERSE SOLUTIONS
We use two simple environments to study the effectiveness of various methods in discovering solutions: (i) One-Step Cooperative Matrix Game (CMG), in which there are many possible solutions, and (ii) Point Mass Rendezvous (PMR), a temporally extended cooperative navigation environment.
One-Step Cooperative Matrix Game (CMG): A game of CMG is defined by a tuple (M, {k_m}, {r_m}), where M is the number of solutions. For m ∈ {1, ..., M}, k_m is the number of compatible actions and r_m is the reward of solution m. By choosing the same solution, both players get the reward r_m associated with the chosen solution. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H). For CMG-S, we set (M = 32, k_m = 8, r_m = 0.5 · (1 + (m − 1)/(M − 1))), which causes each solution to have a different reward, ranging from 0.5 to 1. For CMG-H, we use (M = 32, k_m = m, r_m = 1), which makes solutions with a smaller number of compatible actions harder to find by random exploration. An example payoff matrix is shown in Fig. 2a.
Figure 2: (a) The payoff matrix of a CMG game with (M = 3, k_m = m, r_m = m):
3 3 3 0 0 0
3 3 3 0 0 0
3 3 3 0 0 0
0 0 0 2 2 0
0 0 0 2 2 0
0 0 0 0 0 1
(b, c) The agents (orange) and landmark positions (blue) of PMR-C and PMR-L.
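For reference, a payoff matrix of this form can be generated as in the following sketch (the block ordering is arbitrary; the function name is ours):

import numpy as np

def cmg_payoff(M, k, r):
    """One-Step CMG: solution m contributes a k[m] x k[m] block of reward
    r[m] on the diagonal; all other joint actions receive 0."""
    n = sum(k)
    payoff = np.zeros((n, n))
    start = 0
    for m in range(M):
        payoff[start:start + k[m], start:start + k[m]] = r[m]
        start += k[m]
    return payoff

# Figure 2a corresponds to cmg_payoff(3, k=[3, 2, 1], r=[3, 2, 1])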
Point Mass Rendezvous (PMR): The environment is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The goal of this environment is for the two agents to navigate to a landmark together. There are M = 4 landmarks, and we consider each landmark as a solution in this environment. This environment has two modes: PMR-C and PMR-L. In PMR-C, landmarks are distributed evenly on the circumference of a circle. Thus, all landmarks are equally easy to find and optimal. In PMR-L, landmarks are placed on a line. In this scenario, closer landmarks are easier to find. We define the population size |P| of each method as follows: For SPMI and MAVEN, |P| is equal to the number of dimensions of the latent variable, |z|. For Multi SPMI and Multi MAVEN, |P| = |z| · n_seed, where n_seed is the number of random seeds, and we use |z| = 8. For Multi SP, TrajeDi, and LIPO, |P| is the number of joint policies in the population.
Results: Fig. 3 shows the numbers of learned solutions, averaged over three runs. In all environments, LIPO consistently discovers more solutions than the baselines, given the same population size. The baselines find fewer solutions in CMG-H and PMR-L than they do in CMG-S and PMR-C, whereas LIPO performs similarly across settings. LIPO is also better than the baselines at finding sub-optimal solutions in CMG-S. We note that Multi SP and TrajeDi perform almost ideally in PMR-C, where all solutions are equivalent, but perform worse in other settings. Also, Multi SPMI finds all four solutions in PMR when the population size is bigger than 8. However, it performs poorly in CMG. LIPO's consistency across environments and settings demonstrates that LIPO is still effective when (i) many solutions exist, (ii) solutions are not equally optimal, and (iii) solutions are not equally likely to be found by random exploration. We also experimented with stronger regularization coefficients for the baselines, which help the baselines discover more solutions. However, if the regularization coefficient is too large, they fail to produce capable policies.
Figure 3: Numbers of discovered solutions in (a) CMG-S, (b) CMG-H, (c) PMR-C, and (d) PMR-L (x-axis: population size; y-axis: discovered solutions; methods: Multi SP, SPMI, MAVEN, TrajeDi, Multi SPMI, Multi MAVEN, LIPO). Ideally, if the population size increases by one, one more solution should be discovered, as depicted by the dashed lines (assuming that a joint policy does not produce a multi-modal behavior).
Figure 4: Numbers of competent joint policies in (a) PMR-C and (b) PMR-L using various combinations of N (x-axis) and λ_XP (colors: 1, 0.5, 0.25, 0.1).
4.2 TRADE-OFF BETWEEN COMPETENCY AND DISSIMILARITY OF JOINT POLICIES
It is possible that optimizing a regularized objective might incur training instability and create incapable policies. Here, we investigate the effect of different combinations of λXP and the population size (N ) on the competency of the policies.
Fig. 4 shows the number of competent joint policies when using different values of N and λ_XP in PMR. Particularly, in PMR, a joint policy is considered competent when both players stay close to a landmark at the end of an episode. We observe that when the population size is larger than the number of solutions (N > M), some surplus policies do not learn to reach a goal. Importantly, the number of competent joint policies depends on the value of λ_XP: lower values of λ_XP yield more capable policies. However, using too low a λ_XP will generate policies that share a common solution when N ≤ M, as shown in App. J.1.1. Additionally, when N ≤ M, all trained agents are competent except when λ_XP is too high in PMR-L. These results suggest that there is a trade-off between the number of capable joint policies and policy dissimilarity. When using a larger population size, a small λ_XP should be used to avoid producing incompetent agents, while a bigger λ_XP should be used with smaller population sizes to ensure the dissimilarity between joint policies.
4.3 TRADE-OFF BETWEEN COMPUTATION COST AND DIVERSITY
Not only is using bigger values of N more likely to produce incompetent policies, but it is also computationally expensive. Formally, the computational complexity of approximating J̃_XP(·, P) is O(N · n_xp), where n_xp is the number of XP pairs used to approximate J̃_XP(π_A, P). So, we investigate a way to reduce the cost of calculating J̃_XP(π_A, P) by reducing n_xp. According to Eq. 5, the default value is n_xp = N − 1. When n_xp < N − 1, n_xp policies are chosen randomly from P_{−A} by sampling without replacement.
We observe that, while being computationally cheaper, using nxp < N − 1 tends to produce less diverse populations as shown in Fig. 5. Thus, nxp can be considered a hyperparameter
that controls the computation-diversity trade-off. However, as shown by the dashed lines, the effect of nxp on population diversity is less prominent in PMR-C, where solutions are equally likely to be found. We use nxp = N − 1 in all other experiments. See App. J.1.2 for results in CMG.
4.4 EFFECT OF THE MUTUAL INFORMATION OBJECTIVE
Fig. 6 shows the behaviors of the policies produced by LIPO with and without the MI objective in PMR-C. We can see the effect of the MI objective in the variety of the trajectories. Overall, each agent exhibits larger variations given a small MI regularization λMI = 0.5. This result aligns with our motivation of using the MI objective to learn variations of each solution. With or without the MI regularization, LIPO discovers all the landmarks with N = 4.
4.5 DISCOVERING RECIPES IN MULTI-RECIPE OVERCOOKED
Overcooked, a collaborative cooking game, has been used to study the cooperative ability of learned agents in prior works (Carroll et al., 2019; Charakorn et al., 2020; Strouse et al., 2021; McKee et al., 2022). To investigate the usefulness of LIPO in a high-dimensional environment, we implement a more complex version of the game based on the work of Wu et al. (2021); players have to complete and serve one of the six pre-defined recipes as fast as possible, as opposed to delivering a single menu item repeatedly. We emphasize that this environment is much more challenging than the ones in the previous experiments because of various aspects: First, it has a sparse reward signal. Second, there are multiple sub-tasks. Third, different recipes have different sub-tasks. Each of these characteristics of the environment complicates the process of finding diverse solutions. Furthermore, we note that recipes containing a carrot or a tomato are harder to complete than other recipes as they involve an additional coordination step. Particularly, carrot and tomato have to be sent over by the agent on the right, unlike lettuce and onion. Fig. 7 shows an overview of the game.
The goal in this experiment is to learn a population of behaviorally diverse agents. We choose to quantify the diversity of a population based on the entropy of its recipe distribution. For a population
P, we approximate the probability of recipe i being completed as
P(recipe_i | P) ≈ ∑_{π_A} m_i(π_A) / ∑_i ∑_{π_A} m_i(π_A),
where m_i(π_A) denotes the frequency of recipe i under a joint policy π_A. The recipe frequencies, {m_i(π_A) | 1 ≤ i ≤ 6}, for each joint policy π_A ∈ P are measured by counting the completed recipes from 1,000 self-play episodes. For Multi SP, TrajeDi and LIPO, we set N = 8. For Multi SPMI and Multi MAVEN, we use n_seed = 8 and |z| = 8.
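The diversity metric can then be computed as in this sketch, where counts[A][i] is an assumed array of completed-recipe counts for joint policy A over its 1,000 self-play episodes:

import numpy as np

def recipe_entropy(counts):
    """Entropy of P(recipe_i | population), Sec. 4.5."""
    totals = np.asarray(counts, dtype=float).sum(axis=0)  # sum over joint policies
    p = totals / totals.sum()                             # P(recipe_i | P)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())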
Results: Quantitatively, Tab. 1 shows that LIPO has the highest population recipe distribution entropy, averaged over five random seeds. This result indicates that LIPO populations use all recipes more uniformly than the baseline populations, even though some recipes take longer to complete or are harder to find by random exploration. The recipe distribution of populations produced by each method can be found in Fig. 8. We find that LIPO populations with λMI = 0 still, similar to λMI = 0.5, consistently learn to use the hard-to-find Tomato & Carrot Salad recipe. However, the frequencies of Chopped Tomato and Chopped Carrot are lowered (Fig. 8f). This means that there are multiple joint policies that learn to complete the same recipe while being incompatible with each other. We suspect that using λMI > 0 alleviates this problem by regularizing each joint policy to represent a policy with broader state-action coverage (e.g., learn multiple ways of completing a recipe), indirectly pushing other joint policies to use different recipes in order to be incompatible. Qualitatively, we can see in App. J.2 that the baselines produce agents with similar recipe frequencies. The resulting populations, thus, contain agents with a similar recipe preference. In contrast, LIPO produces agents with distinct recipe frequencies, collectively making the population more diverse than the baselines. We also visualize the behaviors learned by LIPO in App. J.3.
4.6 TRAINING GENERALIST AGENTS WITH GENERATED POPULATIONS
In addition to evaluating the diversity of agents in Overcooked, we quantify the usefulness of the produced agents by using them as training partners of a generalist agent and testing the agent with held-out populations. Intuitively, more diverse training partners would enable the agent to generalize and coordinate with unseen agents better. Additionally, we include a population of six specialized SP policies, where each policy is trained to complete a specific recipe by adjusting the reward function. The specialist population is created for evaluating the agent when the partner has a strong preference.
Fig. 9a shows that all generalist agents perform similarly when tested with held-out baseline populations. However, the agents trained with a baseline population perform poorly when matched with held-out LIPO and specialist populations. In contrast, those trained with a LIPO population perform better in both situations. Specifically, they have a significantly higher success rate when paired with the specialist with a strong preference for Tomato & Carrot Salad as shown in Fig. 9b. We attribute the success rate difference to the fact that this recipe has a lower completion probability in all except LIPO populations. As a result, generalist agents trained with a LIPO population perform better in terms of the overall success rate when tested with specialist agents. Overall, training with a LIPO population helps the generalist agents to better coordinate with more partner types as indicated by the harmonic means.
5 RELATED WORK
Learning a collection of diverse agents has been utilized in various contexts (Parker-Holder et al., 2020; Sun et al., 2020; Zahavy et al., 2021; Zhou et al., 2021). In the cooperative domain, Canaan et al. (2019; 2020) use the Quality Diversity (QD) algorithm (Mouret & Clune, 2015; Pugh et al., 2016) to produce a population of behaviorally diverse agents. QD, however, requires domain knowledge to encode different types of behaviors. For example, in CMG, the algorithm requires the mapping between actions and corresponding solutions. In Overcooked, it needs to know all possible recipes beforehand. Without such domain knowledge, it would be difficult to use QD to produce a diverse population. TrajeDi (Lupu et al., 2021) produces a diverse population of agents based on the trajectory distribution. Finally, MEP (Zhao et al., 2021) trains a population of agents with an auxiliary reward based on population entropy. Like TrajeDi and MEP, our method does not require domain-specific knowledge. However, to promote behavioral diversity, LIPO utilizes the expected returns of different policy pairs as opposed to state-action information.
The idea of diversifying the empirical return has been explored in the context of finding diverse solutions in non-transitive competitive games (Liu et al., 2021; Balduzzi et al., 2019; Perez-Nieves et al., 2021). In particular, Liu et al. (2021) share some similar ideas with our work: they propose to use the expected returns when encountering different opponents, together with state-action information, to promote diversity of agents. A concurrent work by Rahman et al. (2022) applies a similar idea of diversifying the expected joint return to generate diverse partners in cooperative settings. LIPO can be thought of as a special case designed specifically for cooperative environments (see App. E).
MI objectives have been used in RL to learn diverse behaviors (Eysenbach et al., 2018; Sharma et al., 2019; Kumar et al., 2020; Osa et al., 2022). In cooperative MARL, MAVEN (Mahajan et al., 2019) optimizes both RL and MI objectives to encourage the agents to explore in a committed manner and discover diverse solutions. Also, Any-play (Lucas & Allen, 2022) uses a similar objective to produce training partners with many solutions for a generalist agent. In contrast, our approach uses the MI objective to regularize each policy to learn local variations of each solution.
6 CONCLUSION
We propose LIPO, a simple and generic method that can create a population of diverse agents in cooperative multi-agent environments. Unlike previous work that uses state-action information from joint trajectories, LIPO utilizes the concept of policy compatibility to create diverse policies. This alternative view of quantifying diversity makes LIPO more robust to the state and action spaces. Also, LIPO uses the MI objective to learn local variations of each solution. Empirically, LIPO consistently produces more diverse populations than the baselines across three multi-goal environments. Finally, in multi-recipe Overcooked, LIPO produces populations of diverse partners that help generalist agents generalize better to unseen agents. We include further discussions and limitations of LIPO in App. F and G.
ACKNOWLEDGEMENT
This work is partially supported by King Mongkut’s Institute of Technology Ladkrabang [2566-02-06-002]. We thank Natchaya Sricom for drawing Fig. 1 and 7. We thank Supasorn Suwajanakorn, Sucha Supittayapornpong and Maytus Piriyajitakonkij for their suggestions on early draft versions. We also thank the anonymous reviewers for their constructive feedback.
REPRODUCIBILITY STATEMENT
We have included additional information to reproduce the experimental results in the supplementary text:
• Environment details (App. C)
• Pseudocode and implementation details (App. D)
• Hyperparameters used in all experiments (App. H and I)
The source code is available at https://github.com/51616/marl-lipo.
A PROOF FOR THEOREM 3.1
We prove the relationship between similar policies and compatible policies in Theorem 3.1 under the following assumptions.

Assumption A.1 (All joint trajectories are supported by π_B). $P(\tau \mid \pi^i_B, \pi^j_B) > 0 \;\; \forall \tau \in \mathcal{T}$.

Assumption A.2 (Shared ϵ). A common $0 \leq \epsilon \leq 1$ is used for Def. 2.1 and 2.2.

Assumption A.3 (Positive return). $G(\tau) > 0 \;\; \forall \tau \in \mathcal{T}$.

Theorem. If $\pi^i_A$ is similar to $\pi^i_B$, then $\pi^i_A$ is compatible with $\pi_B$.

Proof. Let $r(\tau) = \frac{P(\tau \mid \pi^i_A, \pi^j_B)}{P(\tau \mid \pi^i_B, \pi^j_B)}$. Because $\pi^i_A$ is similar to $\pi^i_B$, we know from Def. 2.1 that $1 - \epsilon \leq r(\tau) \leq 1 + \epsilon \;\; \forall \tau$. Consequently, we can use importance sampling to write $J(\pi^i_A, \pi^j_B)$ in relation to $J(\pi^i_B, \pi^j_B)$:
$$J(\pi^i_A, \pi^j_B) = \mathbb{E}_{\tau \sim \rho(\pi^i_A, \pi^j_B)} G(\tau) = \int_\tau P(\tau \mid \pi^i_A, \pi^j_B)\, G(\tau) = \int_\tau \frac{P(\tau \mid \pi^i_A, \pi^j_B)}{P(\tau \mid \pi^i_B, \pi^j_B)}\, P(\tau \mid \pi^i_B, \pi^j_B)\, G(\tau) = \int_\tau r(\tau)\, P(\tau \mid \pi^i_B, \pi^j_B)\, G(\tau) = \mathbb{E}_{\tau \sim \rho(\pi^i_B, \pi^j_B)}\, r(\tau)\, G(\tau)$$
Because $1 - \epsilon \leq r(\tau) \leq 1 + \epsilon$ and $G(\tau) > 0 \;\; \forall \tau \in \mathcal{T}$, the expected return $J(\pi^i_A, \pi^j_B)$ has the following upper and lower bounds:
$$(1 - \epsilon)\, \mathbb{E}_{\tau \sim \rho(\pi^i_B, \pi^j_B)} G(\tau) \;\leq\; J(\pi^i_A, \pi^j_B) \;\leq\; (1 + \epsilon)\, \mathbb{E}_{\tau \sim \rho(\pi^i_B, \pi^j_B)} G(\tau)$$
$$(1 - \epsilon)\, J_{\text{SP}}(\pi_B) \;\leq\; J(\pi^i_A, \pi^j_B) \;\leq\; (1 + \epsilon)\, J_{\text{SP}}(\pi_B)$$
This means that $\pi^i_A$ is compatible with $\pi_B$ (Def. 2.2).

∴ If $\pi^i_A$ is similar to $\pi^i_B$, then $\pi^i_A$ is compatible with $\pi_B$. ∎
Remark: Assumption A.3 can be satisfied by offsetting the joint return of all trajectories such that $\min_\tau G(\tau) > 0$. However, since we use MAPPO as the base algorithm, a baseline is subtracted from the expected return to compute the advantage during the policy update, which removes the effect of the offset. In practice, even in environments that do not satisfy $G(\tau) > 0$, LIPO can still discover diverse solutions effectively, as shown in the experiments.
B DERIVATION OF THE LOWER BOUND OF THE MI OBJECTIVE
We provide the derivation of Eq. 7 here. Let $p(z^i \mid \{o^i, a^i\})$ be the true posterior of $z^i$ and $q_\phi$ be the approximation of $p$ parameterized by $\phi$. The lower bound of $I(\{o^i, a^i\}; z^i)$ can be derived as follows:
$$I(\{o^i, a^i\}; z^i) = H(z^i) - H(z^i \mid \{o^i, a^i\})$$
$$= H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\big[\log p(z^i \mid \{o^i, a^i\})\big]$$
$$= H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\big[\log q_\phi(z^i \mid \{o^i, a^i\})\big] + \mathbb{E}_{(o^i, a^i)}\,\mathrm{KL}\big(p(z^i \mid \{o^i, a^i\}) \,\|\, q_\phi(z^i \mid \{o^i, a^i\})\big)$$
Since the KL divergence is non-negative,
$$I(\{o^i, a^i\}; z^i) \geq H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\big[\log q_\phi(z^i \mid \{o^i, a^i\})\big]$$
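As an illustration of how the resulting loss (Eq. 9) can be estimated for a discrete latent, here is a minimal PyTorch-style sketch; it assumes the discriminator outputs logits over z, and the names and shapes are illustrative rather than taken from the released implementation.

```python
import torch
import torch.nn.functional as F

def mi_loss(q_logits, z):
    """Monte-Carlo estimate of L_MI (Eq. 9) for a discrete latent:
    the negative log-likelihood of the sampled z under the variational
    posterior q_phi (H(z) is constant for a fixed uniform prior)."""
    return F.cross_entropy(q_logits, z)

# q_logits: discriminator output given (o^i, pi^i(.|o^i, z^i))
q_logits = torch.randn(32, 8)       # batch of 32 samples, |z| = 8
z = torch.randint(0, 8, (32,))      # latents sampled once per episode
loss = mi_loss(q_logits, z)
```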
C ADDITIONAL ENVIRONMENT DETAILS
C.1 ONE-STEP COOPERATIVE MATRIX GAME
A game of CMG is defined by a tuple (M, {k_m}, {r_m}), where M is the number of solutions. For m ∈ {1, ..., M}, k_m is the number of compatible actions and r_m is the reward of solution m. The game is stateless and terminates immediately after both players simultaneously choose an action. By choosing the same solution, both players get the reward r_m associated with the chosen solution. This means that the solutions are not equally optimal if the values in {r_m} are not identical. Similarly, if the values in {k_m} are not identical, then the solutions are not equally likely to be chosen by a uniform joint policy. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H).

For CMG-S, we set $(M = 32,\; k_m = 8,\; r_m = 0.5\,(1 + \frac{m-1}{M-1}))$, which causes each solution to have a different reward, ranging from 0.5 to 1. There are 32 solutions, each with 8 compatible actions, so there are 32 × 8 = 256 possible actions for each player. For CMG-H, we use $(M = 32,\; k_m = m,\; r_m = 1)$, which makes solutions with a smaller number of compatible actions harder to find by random exploration. There are 32 solutions, each with a different number of compatible actions, ranging from 1 to 32. The number of available actions for each player is $\sum_{m=1}^{32} m = 528$.
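For concreteness, a small sketch of how such a block-diagonal payoff matrix can be constructed (our illustration; the actual environment code may differ):

```python
import numpy as np

def cmg_payoff(ks, rs):
    """Payoff matrix of a CMG game (rows/columns are the two players'
    actions): solution m forms a k_m x k_m block paying r_m; choosing
    actions from different solutions pays zero (cf. Fig. 2a)."""
    n = sum(ks)
    payoff = np.zeros((n, n))
    start = 0
    for k, r in zip(ks, rs):
        payoff[start:start + k, start:start + k] = r
        start += k
    return payoff

# CMG-H: (M = 32, k_m = m, r_m = 1) -> 528 actions per player
payoff_h = cmg_payoff(ks=range(1, 33), rs=[1.0] * 32)
assert payoff_h.shape == (528, 528)
```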
C.2 POINT MASS RENDEZVOUS (PMR)
PMR is based on the the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The observation of each agent includes absolute position, current velocity, and the relative distance to the landmarks and the other agent. These features are concatenated as a 1-D vector of length 14. The possible actions are: no op, move, {up, down, left, right}. In PMR-C, the start positions of the agents are {(0.3,0), (-0.3,0)} and the landmarks positions are {(1.59, 1.59), (1.59, -1.59), (-1.59, 1.59), (-1.59, -1.59)}. For PMR-L, the start and the landmark positions are {(1,0),(0,1)} and {(0,2.25),(0,0.75),(0,-0.75),(0,-2.25)}. An episode will be terminated after 50 timesteps. The agents are incentivized to go to the same landmark and stay close together with the reward function
$$r_t = 1 - d(p^i, c) - \min_{l \in L} d(l, c),$$
where $d(\cdot, \cdot)$ is the Euclidean distance between two points, $p^i$ is the 2-d coordinate of agent $i$, $c$ is the average coordinate of all agents, and $L$ is the set of all landmarks.
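A minimal sketch of this reward computation, assuming positions and landmarks are given as 2-d coordinates (our illustration, not the environment's actual code):

```python
import numpy as np

def pmr_reward(positions, landmarks):
    """Per-timestep shared reward of PMR (App. C.2): agents are rewarded
    for staying together and for moving their centroid toward a landmark."""
    positions = np.asarray(positions, dtype=float)
    c = positions.mean(axis=0)                   # centroid of all agents
    d_agent = np.linalg.norm(positions[0] - c)   # d(p^i, c); identical for both agents when m = 2
    d_landmark = min(np.linalg.norm(np.asarray(l) - c) for l in landmarks)
    return 1.0 - d_agent - d_landmark

# PMR-C start positions and landmarks
r = pmr_reward([(0.3, 0.0), (-0.3, 0.0)],
               [(1.59, 1.59), (1.59, -1.59), (-1.59, 1.59), (-1.59, -1.59)])
```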
C.3 MULTI-RECIPE OVERCOOKED
We implement a multi-recipe version of the game based on the work of Wu et al. (2021). In this version of Overcooked, there are four ingredients: lettuce, onion, tomato, and carrot. The ingredients are randomly placed at pre-defined positions in the layout. Particularly, the lettuce and the onion are randomly placed on the left or the middle counter. The tomato and the carrot are randomly placed on the right or the middle counter. These ingredients can be composed into different recipes, making each ingredient unique: four recipes (LettuceSalad, TomatoSalad, ChoppedCarrot, ChoppedOnion) require only a single ingredient, while the other two (TomatoLettuceSalad, TomatoCarrotSalad) require two ingredients. The ingredients have to be chopped at the chopping station before being placed on the plate. After the required ingredients are put on the plate, they must be delivered to the delivery station.
Both players have the same egocentric observation and action spaces. The observation is a set of hand-crafted features that represent a local view of the environment. Specifically, we use the following features: absolute position and facing direction, relative distance to the objects and the other agent, state of the ingredients, four booleans indicating if the agent is next to a counter in four cardinal positions, currently held items, the state of the held foods, and the type and state of the items in front of the agent. These features are concatenated as a 1-D vector of length 54. At every timestep, each player has to choose one of the six possible actions: no op, move {up, down, left, right}, and interact. An episode lasts at most 200 timesteps and terminates immediately after a successful delivery. An episode without delivery is considered unsuccessful. We incentivize the agents to interact with the
objects and deliver as fast as possible with the following reward function:
$$r_t = r_{\text{interact}} + r_{\text{progress}} + r_{\text{complete}} - p,$$
where r_interact is a shaped reward given when an agent interacts with an object for the first time in an episode, r_progress is given when the players progress toward a recipe completion (i.e., chopping required ingredients or putting chopped ingredients on the plate), r_complete is given upon successful delivery, and p is a penalty. We use r_interact = 0.5, r_progress = 1.0, r_complete = 10, and p = 0.1. We note that recipes with more than one ingredient give only slightly higher rewards (r_interact + r_progress) but are significantly harder to discover by random exploration than those with one ingredient.
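A minimal sketch of this shaped reward with the stated coefficients (our illustration; the event flags are assumed to be provided by the environment):

```python
def overcooked_reward(first_interaction, progressed, delivered):
    """Shared reward of multi-recipe Overcooked (App. C.3); the boolean
    arguments indicate which events occurred at the current timestep."""
    r_interact, r_progress, r_complete, penalty = 0.5, 1.0, 10.0, 0.1
    return (r_interact * first_interaction
            + r_progress * progressed
            + r_complete * delivered
            - penalty)

# a step where a player chops a required ingredient
r = overcooked_reward(first_interaction=False, progressed=True, delivered=False)
```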
Additional experimental details: For the specialist agents, the rewards are given when interacting, progressing, or completing a specific recipe. For the held-out populations, we remove incompetent policies with an expected return of less than zero from the test populations created by TrajeDi and LIPO. We do not remove those in the training populations. We do this because testing with an incompetent policy does not give any meaningful information, as almost all episodes will be unsuccessful.
D IMPLEMENTATION DETAILS
Algorithm 1: Training process of LIPO (on-policy). The pseudocode is based on self-play; steps marked (MI) relate to the MI objective, and steps marked (LIPO) are LIPO-specific.

Input: a population P = {π_A | 1 ≤ A ≤ N}, the number of XP pairs used to approximate J̃_XP(π_A, P) (n_xp), the number of players in an episode (m), and the numbers of SP and XP episodes per iteration (E_SP and E_XP).

while not done do
    for A ∈ {1, ..., N} do
        B_sp ← GetEpisodeRollouts(π_A, π_A, E_SP, m)
        Compute J_SP(π_A) using B_sp
        (LIPO) B_xp ← GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m)
        (LIPO) Compute J̃_XP(π_A, P) using B_xp (Eq. 4)
        (MI) Compute L_MI using B_sp and B_xp (Eq. 9)
        θ_A ← θ_A − ∇_{θ_A} [−J_SP + λ_XP J̃_XP + λ_MI L_MI]
        (MI) ϕ_A ← ϕ_A − λ_MI ∇_{ϕ_A} L_MI

Algorithm 2: Rollout collection functions.

Function GetEpisodeRollouts(π_A, π_B, E, m):
    B ← {}
    for i ∈ {1, ..., m} do
        for episode ∈ {1, ..., E/m} do
            z^1, ..., z^m ∼ p(z^1, ..., z^m)
            z^j ← {z^k}_{k≠i};  π^j(·|·, z^j) = Π_{k≠i} π^k(·|·, z^k)
            τ ∼ ρ(π_A^i(·|·, z^i), π_B^j(·|·, z^j))
            B ← B ∪ {τ}
    return B

Function GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m):
    B_xp ← {}
    P′_−A ← SampleWithoutReplacement(P_−A, n_xp)
    for π_B ∈ P′_−A do
        B ← GetEpisodeRollouts(π_A, π_B, E_XP / |P′_−A|, m)
        B_xp ← B_xp ∪ B
    return B_xp
The pseudocode for LIPO is shown in Algorithms 1 and 2. If there are more than two players (m > 2), π^j represents the joint policy of all players except player i, $\pi^j(a_t^j \mid \tau_t^j) = \Pi_{k \neq i}\, \pi^k(a_t^k \mid \tau_t^k)$. We note that scaling LIPO to more than two players does not increase the training time; it is the same as in the two-player setting as long as the numbers of SP and XP episodes are the same. In practice, however, more XP episodes might be needed to better estimate J̃_XP. We use the parameter sharing technique for better sample efficiency and faster convergence (Tan, 1993; Foerster et al., 2018; Rashid et al., 2018). Assuming that a policy π_A^i is a neural network parameterized by θ_A^i, this means that for a joint policy (π_A^1, π_A^2) we have θ_A^1 = θ_A^2. Still, π_A^1 and π_A^2 can behave differently, as they observe different parts of the environment and have a different player indicator concatenated with their local observations.
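To connect the update rule in Algorithm 1 to code, below is a minimal PyTorch-style sketch of the scalar loss minimized in the policy-update step; in the actual implementation the three terms would be differentiable MAPPO surrogate objectives, and all names here are illustrative.

```python
import torch

def lipo_update_loss(j_sp, j_xp_pairs, l_mi, lambda_xp, lambda_mi):
    """Scalar loss from Algorithm 1: -J_SP + lambda_XP * J~_XP + lambda_MI * L_MI,
    with J~_XP = max over XP pairs (the aggregation f_agg in Eq. 4)."""
    j_xp_tilde = torch.stack(list(j_xp_pairs)).max()
    return -j_sp + lambda_xp * j_xp_tilde + lambda_mi * l_mi

# illustrative differentiable surrogates for one policy pi_A
j_sp = torch.tensor(1.0, requires_grad=True)
j_xp_pairs = [torch.tensor(0.4), torch.tensor(0.7)]
loss = lipo_update_loss(j_sp, j_xp_pairs, torch.tensor(0.2),
                        lambda_xp=0.3, lambda_mi=0.5)
```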
All methods are implemented on top of MAPPO except MAVEN. The critic, policy, and discriminator are feed-forward neural networks with two hidden layers, each having 64 units. For a fair comparison, we use the same or more environment steps in the policy updates of the baselines compared to LIPO. Common hyperparameters of the MAPPO-based methods are shown in Table 2.
D.1 MAPPO
MAPPO is the base MARL algorithm for all baselines except MAVEN. The policy parameters are shared among all policies. The critic takes the global state of the environment and outputs its expected return. The global state is provided by the environment and is only used during training. For the complete training objectives of MAPPO, we refer the reader to Appendix A of Yu et al. (2021).
D.2 MULTI SP
A simple but effective baseline that produces diverse agents by training multiple SP agents with different neural network initializations and random seeds. Specifically, each run produces a joint policy πA that maximizes JSP(πA) using MAPPO.
D.3 SPMI
A single self-play run trained with the added MI objective I({o^i, a^i}; z). SPMI uses a shared z for both policies and considers each z as a different joint policy. We train SPMI using the same training procedure as LIPO by setting N = 1, λXP = 0 and z^1 = z^2. The discriminator takes a local observation o^i and the action distribution π^i(·|o^i) as inputs and outputs the discrete probability of the latent variable. The latent variable of all policies is shared during an episode.
D.4 MAVEN
MAVEN (Mahajan et al., 2019) is explicitly designed for learning diverse solutions in cooperative multi-agent environments. A joint policy is represented as π(·|τ, z), and each mode of behavior is represented by the latent variable z. Similar to SPMI, MAVEN uses a shared z for all policies. We use the same network architecture presented in Mahajan et al. (2019) with recurrent neural networks. However, we do not use the hierarchical policy but instead sample z from the uniform distribution.
D.5 MULTI SPMI AND MULTI MAVEN
A population containing joint policies from multiple runs of SPMI and MAVEN. Like Multi SP, each run has different neural network initializations and random seeds. The population size is |P| = n_seed |z|, where n_seed is the number of runs. Notably, this baseline uses the training data differently from the base algorithms: instead of training a single long run, this approach allows the policy to "restart" with different neural network initializations. For example, training a single run with |z| = N and n_seed = 1 might not discover as many solutions as training n_seed runs with |z| = N/n_seed, even though the population size is the same. Empirically, we find that multiple shorter runs can find more solutions than a single long run of the corresponding algorithm. Thus, we omit the results of the base algorithms in multi-recipe Overcooked.
D.6 TRAJEDI
TrajeDi produces a population of diverse agents that also maximize the expected return in cooperative environments. The diversity measure of this method is based on the Jensen-Shannon divergence (JSD) between the trajectory distributions of the policies. Different from the original implementation, we remove the best-response (BR) policy from the population. Since the BR policy might work well with only a subset of solutions, removing BR potentially increases the number of variations in the population. Our modified loss is:
$$\mathcal{L} = -\Big[\sum_{A=1}^{N} J_{\text{SP}}(\pi_A) + \alpha\, \mathrm{JSD}_\gamma(\pi_1, \ldots, \pi_N)\Big], \qquad (11)$$
where JSD is the proposed diversity objective of TrajeDi, and α and γ are the hyperparameters of TrajeDi.
D.7 LIPO
LIPO uses the same implementation as SPMI, except that LIPO uses an independent latent variable for each policy and λXP > 0. Additionally, LIPO uses extra critics for the XP trajectories. In total, LIPO has an SP critic $V_{\text{sp}}^{\pi_A}$ and N − 1 XP critics $\{V_{\text{xp}}^{\pi_A, \pi_B} \mid \pi_B \in \mathcal{P}_{-A}\}$. In each training iteration, LIPO collects SP and XP trajectories of all policy combinations. MAPPO is used both for maximizing JSP and for minimizing J̃XP. The critics are trained using the SP trajectories and all of the XP trajectories, while the policy is trained using the SP trajectories and the XP trajectories from the XP pair that has the highest joint return, max(B_xp).
D.8 GENERALIST AGENT
The policy and critic networks of a generalist agent use two 256-unit GRU layers (Cho et al., 2014) followed by a linear layer. The input also includes the reward and action of the previous timestep. We use MAPPO for training a generalist agent. We train both the policy and the critic with a batch size of 320,000 timesteps using truncated backpropagation through time (BPTT). The samples are reused for 15 epochs. Each minibatch contains 1,600 sequences with a maximum length of 50 timesteps. We also anneal the learning rate of the generalist agent linearly from 0.005 to 0.003. Other hyperparameters are shared with the other methods (Tab. 2). A training partner for a generalist agent is sampled uniformly from the training population at the beginning of an episode.
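A minimal PyTorch sketch of this recurrent architecture (our illustration; the layer sizes follow the text, everything else is an assumption):

```python
import torch
import torch.nn as nn

class GeneralistPolicy(nn.Module):
    """Sketch of the recurrent policy in App. D.8: two 256-unit GRU layers
    over [observation, previous action (one-hot), previous reward],
    followed by a linear layer producing action logits."""
    def __init__(self, obs_dim, n_actions=6, hidden=256):
        super().__init__()
        self.gru = nn.GRU(obs_dim + n_actions + 1, hidden,
                          num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs, prev_action, prev_reward, h=None):
        x = torch.cat([obs, prev_action, prev_reward], dim=-1)
        out, h = self.gru(x, h)
        return self.head(out), h

# batch of 4 sequences of length 50; Overcooked observations have length 54
policy = GeneralistPolicy(obs_dim=54)
logits, h = policy(torch.zeros(4, 50, 54), torch.zeros(4, 50, 6),
                   torch.zeros(4, 50, 1))
```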
E RELATIONSHIP WITH RAHMAN ET AL. (2022)
Rahman et al. (2022) propose to optimize the self-play returns while maximizing a diversity term Div(C), where C is an N × N cross-play payoff matrix. Specifically, they propose to learn diverse policies via the following objective:
$$\max_C \; \mathrm{Tr}(C) + \mathrm{Div}(C) \qquad (12)$$
They propose to use Div(C) = Det(κ(C)), where κ(C) is an N × N matrix with κ_{i,j}(C) being the similarity between policies π_i and π_j. The radial basis function (RBF) kernel of the empirical returns is used to measure the similarity between two policies:
$$\kappa_{i,j}(C) = \exp\Big(-\frac{\lVert C_{i,\cdot} - C_{j,\cdot} \rVert^2}{\sigma^2}\Big), \qquad (13)$$
where $C_{i,\cdot}$ is the i-th row of C, i.e., the vector containing the empirical returns of π_i when matched with the other policies in the population. Intuitively, this objective diversifies the policies via the expected returns, similar to LIPO. Using the same notation of the cross-play matrix, we can write the objective of training a LIPO population as:
$$\max_C \; \mathrm{Tr}(C) - \lambda_{\mathrm{XP}} \sum_i \max_{\substack{1 \leq j \leq N \\ j \neq i}} C_{i,j} \qquad (14)$$
This objective maximizes the diagonal entries of the cross-play matrix (JSP) while minimizing, for each policy, its largest off-diagonal entry (J̃XP). This is a special case of Eq. 12 where Div(C) is based on policy compatibility.
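For illustration, Eq. 14 can be evaluated on a given cross-play payoff matrix as in the NumPy sketch below; note that in practice each entry of C is an empirical return and the optimization is over the policies, not over C directly.

```python
import numpy as np

def lipo_population_objective(C, lam_xp):
    """Population-level LIPO objective in the notation of Eq. 14:
    reward the diagonal (self-play returns) of the cross-play matrix C
    and penalize each row's largest off-diagonal entry."""
    n = C.shape[0]
    masked = np.where(np.eye(n, dtype=bool), -np.inf, C)  # drop diagonal
    return np.trace(C) - lam_xp * masked.max(axis=1).sum()

C = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.0],
              [0.1, 0.4, 1.0]])
print(lipo_population_objective(C, lam_xp=0.3))
```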
F DISCUSSIONS
Agents trained with LIPO are incentivized to act adversarially toward agents that behave differently from itself. This behavior might not be desirable for certain downstream tasks. For example, agents produced by LIPO might not be suitable for interacting with humans as they would refuse to conform with the user. However, as shown in Sec. 4.6, training a generalist agent with these agents would have the opposite effect: the generalist agent would try to comply with the current partner’s preference.
Since LIPO produces a population of near-optimal solutions, a generalist agent trained with a LIPO population might not coordinate well with significantly sub-optimal agents. In prior work, Strouse et al. (2021) show that augmenting the training population with past checkpoints (FCP) helps the trained generalist agent coordinate effectively with sub-optimal agents. Since LIPO and FCP are orthogonal, populations created by LIPO can also be augmented in the same way as FCP.
Previous works find incompatible policies to be undesirable since they are generally the result of coordinated symmetry breaking (Bard et al., 2020; Hu et al., 2020; 2021); these policies perform poorly when interacting with unseen partners. However, we show that learning incompatible policies can be useful for generating behaviorally diverse agents in various scenarios. The produced agents can then be used as training partners for a generalist agent. Using LIPO in environments where many solutions are equivalent may produce such undesirable symmetry-breaking conventions. We believe that LIPO can be combined with other techniques, e.g., other-play (Hu et al., 2020) and the equivariant coordinator (Muglich et al., 2022), to avoid learning arbitrary symmetry-breaking conventions. We leave the study of the combination of LIPO and these techniques for future work. A concurrent work by Cui et al. (2023) proposes an extension of LIPO that combines insights from off-belief learning (Hu et al., 2021) to avoid the "sabotaging" behavior of LIPO agents.
G LIMITATIONS
LIPO requires an additional hyperparameter λXP. If λXP is too big, it is possible that the main RL objective, JSP, would be interfered with, resulting in an incompetent joint policy (Sec. 4.2). An adaptive mechanism that selects a suitable value for λXP at different stages of training could help increase training stability.
Although LIPO can be fully parallelized, it requires more computation than the baselines to get an accurate approximation of J̃XP, which makes it harder to scale up to bigger population sizes. Instead of collecting all policy pairs, sampling a portion of the policy pairs to approximate J̃XP could reduce the computation cost and training time at a potential cost of diversity (Sec. 4.3). Also, instead of uniform sampling, a mechanism that selects the best pair to sample (e.g., a bandit algorithm) might help mitigate the diversity loss from using a lower nxp.
In this work, we only investigate the effectiveness of LIPO under a new set of cooperative environments with multiple discrete solutions. Investigating LIPO in other well-known environments where the solution space might be continuous, e.g., continuous control (Peng et al., 2021), or environments with only a single goal, e.g., SMAC (Samvelyan et al., 2019), might yield interesting results and pose different challenges. We leave this further investigation as our future work.
H HYPERPARAMETERS (CMG AND PMR)
We provide the searched values of each method in Tab. 3, 4, 5, 6, 7 and 8. The hyperparameters are searched individually for each population size. We use three random seeds for each set of hyperparameters. We do not use any validation method; instead, we present the results using the best parameters in the main paper. For LIPO, we set λMI to 0.0 in CMG and 0.5 in PMR.
I HYPERPARAMETERS (OVERCOOKED)
For each method, we use the parameters that give the highest entropy to generate five populations for Sec. 4.5 and Sec. 4.6. The searched values of each method are:
• TrajeDi: We perform a grid search with following hyperparameters: α ∈ {5, 10} and γ ∈ {0, 0.5}. We use α = 5 and γ = 0.5 for the results in the paper.
• Multi SPMI: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• Multi MAVEN: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• LIPO: We perform a grid search with following hyperparameters: λXP ∈ {0.2, 0.3} and λMI ∈ {0.1, 0.5}. We use λXP = 0.3 and λMI = 0.5 for the results in the paper.
J ADDITIONAL RESULTS
J.1 ADDITIONAL ABLATION RESULTS
We provide additional results of numbers of learned solutions with varying N and λXP (Fig. 10), numbers of competent agents in CMG with varying N and λXP (Fig. 11), and numbers of learned solutions with varying nxp in CMG (Fig. 12) here. These results are consistent with the analysis presented in Sec. 4.2 and 4.3.
J.1.1 ADDITIONAL RESULTS WITH VARYING N AND λXP
J.1.2 ADDITIONAL RESULTS WITH VARYING nxp
J.2 RADAR PLOTS OF RECIPE FREQUENCIES
Fig. 13 shows the recipe frequencies of the population with the highest recipe entropy (out of five runs) from each method. The frequencies are calculated based on the completed recipes from 1,000 self-play episodes of each agent, as described in Sec. 4.5. Qualitatively, we can see that agents in a LIPO population have more distinct recipe frequencies. That is, the agents differ from each other in terms of recipe preference.
J.3 VISUALIZATION OF BEHAVIORS
We visualize the behaviors of the joint policies produced by LIPO in PMR and Overcooked at https://sites.google.com/view/iclr-lipo-2023. Here, we show snapshots of four joint policies that have a distinct recipe preference in Overcooked.
Summary Of The Paper
The authors develop a training procedure for two-player cooperative games that generates diverse policies. The idea is to generate policies that are incompatible when paired together and to train agents with this loss function. This is tested in two toy environments and an Overcooked grid world. In all cases, LIPO finds diverse solutions (when possible).
Strengths And Weaknesses
The paper is very clearly written and well organized; it is easy to read and follow the logic. The toy environments are highly diagnostic and give strong insights into why this algorithm outperforms the baselines and other approaches. It was nice to see the algorithm tested both for its ability to generate diverse populations and for whether those populations matter for behavior.
One key weakness is that the algorithms were all tested on new benchmarks created just for this manuscript. Since there are now many competing approaches to this kind of diversity generation, it would have been nice to see this algorithm outperform others on a task designed by others.
Further, I would like to see some ablation experiments to understand the role of λ_MI in the Overcooked environment. It is also not clear in what contexts this term will be important.
Clarity, Quality, Novelty And Reproducibility
What would be needed to extend these results to more than two players?
The results shown in Figure 9 are interesting, but is there a way to distill the key differences? I think the color scheme is hurting, as it isn't clear what cells are supposed to be compared. |
Fig. 6 shows the behaviors of the policies produced by LIPO with and without the MI objective in PMR-C. We can see the effect of the MI objective in the variety of the trajectories. Overall, each agent exhibits larger variations given a small MI regularization λMI = 0.5. This result aligns with our motivation of using the MI objective to learn variations of each solution. With or without the MI regularization, LIPO discovers all the landmarks with N = 4.
4.5 DISCOVERING RECIPES IN MULTI-RECIPE OVERCOOKED
Overcooked, a collaborative cooking game, has been used to study the cooperative ability of learned agents in prior works (Carroll et al., 2019; Charakorn et al., 2020; Strouse et al., 2021; McKee et al., 2022). To investigate the usefulness of LIPO in a high-dimensional environment, we implement a more complex version of the game based on the work of Wu et al. (2021); players have to complete and serve one of the six pre-defined recipes as fast as possible, as opposed to delivering a single menu item re-
peatedly. We emphasize that this environment is much more challenging than the ones in the previous experiments because of various aspects: First, it has a sparse reward signal. Second, there are multiple sub-tasks. Third, different recipes have different sub-tasks. Each of these characteristics of the environment complicates the process of finding diverse solutions. Furthermore, we note that recipes containing a carrot or a tomato are harder to complete than other recipes as they involve an additional coordination step. Particularly, carrot and tomato have to be sent over by the agent on the right, unlike lettuce and onion. Fig. 7 shows an overview of the game.
The goal in this experiment is to learn a population of behaviorally diverse agents. We choose to quantify the diversity of a population based on the entropy of its recipe distribution. For a population
P , we approximate the probability of recipe i being completed as P (recipei|P) ≈ ∑ πA mi(πA)∑
i ∑ πA mi(πA) ,
where mi(πA) denotes the frequency of recipe i under a joint policy πA. The recipe frequencies, {mi(πA)|1 ≤ i ≤ 6}, for each joint policy πA ∈ P are measured by counting the completed recipes from 1,000 self-play episodes. For Multi SP, TrajeDi and LIPO, we set N = 8. For Multi SPMI and Multi MAVEN, we use nseed = 8 and |z| = 8.
Results: Quantitatively, Tab. 1 shows that LIPO has the highest population recipe distribution entropy, averaged over five random seeds. This result indicates that LIPO populations use all recipes more uniformly than the baseline populations, even though some recipes take longer to complete or are harder to find by random exploration. The recipe distribution of populations produced by each method can be found in Fig. 8. We find that LIPO populations with λMI = 0 still, similar to λMI = 0.5, consistently learn to use the hard-to-find Tomato & Carrot Salad recipe. However, the frequencies of Chopped Tomato and Chopped Carrot are lowered (Fig. 8f). This means that there are multiple joint policies that learn to complete the same recipe while being incompatible with each other. We suspect that using λMI > 0 alleviates this problem by regularizing each joint policy to represent a policy with broader state-action coverage (e.g., learn multiple ways of completing a recipe), indirectly pushing other joint policies to use different recipes in order to be incompatible. Qualitatively, we can see in App. J.2 that the baselines produce agents with similar recipe frequencies. The resulting populations, thus, contain agents with a similar recipe preference. In contrast, LIPO produces agents with distinct recipe frequencies, collectively making the population more diverse than the baselines. We also visualize the behaviors learned by LIPO in App. J.3.
4.6 TRAINING GENERALIST AGENTS WITH GENERATED POPULATIONS
In addition to evaluating the diversity of agents in Overcooked, we quantify the usefulness of the produced agents by using them as training partners of a generalist agent and test the agent with held-out populations. Intuitively, more diverse training partners would enable the agent to generalize and coordinate with unseen agents better. Additionally, we include a population of six specialized SP policies where each policy is trained to complete a specific recipe by adjusting the reward function. The specialist population is created for evaluating the agent when the partner has a strong preference.
Fig. 9a shows that all generalist agents perform similarly when tested with held-out baseline populations. However, the agents trained with a baseline population perform poorly when matched with held-out LIPO and specialist populations. In contrast, those trained with a LIPO population perform better in both situations. Specifically, they have a significantly higher success rate when paired with the specialist with a strong preference for Tomato & Carrot Salad as shown in Fig. 9b. We attribute the success rate difference to the fact that this recipe has a lower completion probability in all except LIPO populations. As a result, generalist agents trained with a LIPO population perform better in terms of the overall success rate when tested with specialist agents. Overall, training with a LIPO population helps the generalist agents to better coordinate with more partner types as indicated by the harmonic means.
5 RELATED WORK
Learning a collection of diverse agents has been utilized in various contexts (Parker-Holder et al., 2020; Sun et al., 2020; Zahavy et al., 2021; Zhou et al., 2021). In the cooperative domain, Canaan et al. (2019; 2020) use the Quality Diversity (QD) algorithm (Mouret & Clune, 2015; Pugh et al., 2016) to produce a population of behaviorally diverse agents. QD, however, requires domain knowledge to encode different types of behaviors. For example, in CMG, the algorithm requires the mapping between actions and corresponding solutions. In Overcooked, it needs to know all possible recipes beforehand. Without such domain knowledge, it would be difficult to use QD to produce a diverse population. TrajeDi (Lupu et al., 2021) produces a diverse population of agents based on the trajectory distribution. Finally, MEP (Zhao et al., 2021) trains a population of agents with an auxiliary reward based on population entropy. Like TrajeDi and MEP, our method does not require domain-specific knowledge. However, to promote behavioral diversity, LIPO utilizes the expected returns of different policy pairs as opposed to state-action information.
The idea of diversifying the empirical return has been explored in the context of finding diverse solutions in non-transitive competitive games (Liu et al., 2021; Balduzzi et al., 2019; Perez-Nieves et al., 2021). In particular, Liu et al. (2021) share some similar ideas with our work. They propose to use the expected returns, when encountering different opponents, and state-action information to promote diversity of agents. A concurrent work by Rahman et al. (2022) applies a similar idea of diversifying the expected joint return to generate diverse partners in cooperative settings. LIPO can be thought of as a special case designed specifically for cooperative environments (see App. E).
MI objectives have been used in RL to learn diverse behaviors (Eysenbach et al., 2018; Sharma et al., 2019; Kumar et al., 2020; Osa et al., 2022). In cooperative MARL, MAVEN (Mahajan et al., 2019) optimizes both RL and MI objectives to encourage the agents to explore in a committed manner and discover diverse solutions. Also, Any-play (Lucas & Allen, 2022) uses a similar objective to produce training partners with many solutions for a generalist agent. In contrast, our approach uses the MI objective to regularize each policy to learn local variations of each solution.
6 CONCLUSION
We propose LIPO, a simple and generic method that can create a population of diverse agents in cooperative multi-agent environments. Unlike previous work that uses state-action information from joint trajectories, LIPO utilizes the concept of policy compatibility to create diverse policies. This alternative view of quantifying diversity makes LIPO robust to the choice of state and action spaces. Also, LIPO uses the MI objective to learn local variations of each solution. Empirically, LIPO consistently produces more diverse populations than the baselines across three multi-goal environments. Finally, in multi-recipe Overcooked, LIPO produces populations of diverse partners that help generalist agents generalize better to unseen agents. We include further discussions and limitations of LIPO in App. F and G.
ACKNOWLEDGEMENT
This work is partially supported by King Mongkut's Institute of Technology Ladkrabang [2566-02-06-002]. We thank Natchaya Sricom for drawing Fig. 1 and 7. We thank Supasorn Suwajanakorn, Sucha Supittayapornpong and Maytus Piriyajitakonkij for their suggestions on early draft versions. We also thank the anonymous reviewers for their constructive feedback.
REPRODUCIBILITY STATEMENT
We have included additional information to reproduce the experimental results in the supplementary text:
• Environment details (App. C)
• Pseudocode and implementation details (App. D)
• Hyperparameters used in all experiments (App. H and I)
The source code is available at https://github.com/51616/marl-lipo.
A PROOF FOR THEOREM 3.1
We prove the relationship of similar policies and compatible policies in Theorem 3.1 under the following assumptions.
Assumption A.1 (All joint trajectories are supported by π_B). P(τ | π_B^i, π_B^j) > 0, ∀τ ∈ T.
Assumption A.2 (Shared ϵ). A common 0 ≤ ϵ ≤ 1 is used for Def. 2.1 and 2.2.
Assumption A.3 (Positive return). G(τ) > 0, ∀τ ∈ T.
Theorem. If π_A^i is similar to π_B^i, then π_A^i is compatible with π_B.
Proof. Let r(τ) = P(τ | π_A^i, π_B^j) / P(τ | π_B^i, π_B^j). Because π_A^i is similar to π_B^i, we know that 1 − ϵ ≤ r(τ) ≤ 1 + ϵ, ∀τ, from Def. 2.1. Consequently, we can use importance sampling to write J(π_A^i, π_B^j) in relation to J(π_B^i, π_B^j):

J(π_A^i, π_B^j) = E_{τ∼ρ(π_A^i, π_B^j)} G(τ)
= ∫_τ P(τ | π_A^i, π_B^j) G(τ)
= ∫_τ P(τ | π_B^i, π_B^j) [ P(τ | π_A^i, π_B^j) / P(τ | π_B^i, π_B^j) ] G(τ)
= ∫_τ r(τ) P(τ | π_B^i, π_B^j) G(τ) = E_{τ∼ρ(π_B^i, π_B^j)} r(τ) G(τ)

Because 1 − ϵ ≤ r(τ) ≤ 1 + ϵ and G(τ) > 0, ∀τ ∈ T, the expected return J(π_A^i, π_B^j) has the following upper and lower bounds:

(1 − ϵ) E_{τ∼ρ(π_B^i, π_B^j)} G(τ) ≤ J(π_A^i, π_B^j) ≤ (1 + ϵ) E_{τ∼ρ(π_B^i, π_B^j)} G(τ)
(1 − ϵ) J_SP(π_B) ≤ J(π_A^i, π_B^j) ≤ (1 + ϵ) J_SP(π_B)

This means that π_A^i is compatible with π_B (Def. 2.2).

∴ If π_A^i is similar to π_B^i, then π_A^i is compatible with π_B.
Remark: Assumption A.3 can be satisfied by offsetting the joint return of all trajectories such that minτ G(τ) > 0. However, since we use MAPPO as the base algorithm, the expected return is subtracted by a baseline to compute the advantage during the policy update, which removes the effect of the offset. In practice, even under environments that do not satisfy G(τ) > 0, LIPO can still discover diverse solutions effectively, as shown in the experiments.
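As an informal numerical illustration of this bound (not part of the proof), one can simulate a toy trajectory space in which the likelihood ratio r(τ) is constrained to [1 − ϵ, 1 + ϵ] and returns are positive; the resulting cross-play return always falls inside the stated interval. All names below are hypothetical, and p_sp * ratio plays the role of the (unnormalized) cross-play trajectory distribution:

import numpy as np

rng = np.random.default_rng(0)
eps, n = 0.2, 6                             # toy space of 6 trajectories
p_sp = rng.dirichlet(np.ones(n))            # P(tau | pi_B^i, pi_B^j)
ratio = rng.uniform(1 - eps, 1 + eps, n)    # r(tau) in [1 - eps, 1 + eps]
G = rng.uniform(0.1, 1.0, n)                # positive returns (Assumption A.3)

j_sp = float(p_sp @ G)                      # J_SP(pi_B)
j_xp = float((p_sp * ratio) @ G)            # E[r(tau) G(tau)] under rho(pi_B)
assert (1 - eps) * j_sp <= j_xp <= (1 + eps) * j_sp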
B DERIVATION OF THE LOWER BOUND OF THE MI OBJECTIVE
We provide the derivation of Eq. 7 here. Let p(z^i | {o^i, a^i}) be the true posterior of z^i and q_ϕ be the approximation of p parameterized by ϕ. The lower bound of I({o^i, a^i}; z^i) can be derived as follows:

I({o^i, a^i}; z^i) = H(z^i) − H(z^i | {o^i, a^i})
= H(z^i) + E_{z^i, (o^i, a^i)}[log p(z^i | {o^i, a^i})]
= H(z^i) + E_{z^i, (o^i, a^i)}[log p(z^i | {o^i, a^i}) − log q_ϕ(z^i | {o^i, a^i}) + log q_ϕ(z^i | {o^i, a^i})]
= H(z^i) + E_{z^i, (o^i, a^i)}[log q_ϕ(z^i | {o^i, a^i})] + KL(p(z^i | {o^i, a^i}) || q_ϕ(z^i | {o^i, a^i}))
I({o^i, a^i}; z^i) ≥ H(z^i) + E_{z^i, (o^i, a^i)}[log q_ϕ(z^i | {o^i, a^i})]

where the inequality follows from the non-negativity of the KL divergence.
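If z^i is a discrete uniform variable over K values, H(z^i) = log K is constant and the bound can be estimated directly from samples; a minimal sketch (our own illustration):

import numpy as np

def mi_lower_bound(log_q: np.ndarray, K: int) -> float:
    # log_q[b] = log q_phi(z_b | o_b, a_b) evaluated on sampled tuples
    # (z_b, o_b, a_b); z is uniform over K values, so H(z) = log K.
    return float(np.log(K) + log_q.mean())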
C ADDITIONAL ENVIRONMENT DETAILS
C.1 ONE-STEP COOPERATIVE MATRIX GAME
A game of CMG is defined by a tuple (M, {k_m}, {r_m}), where M is the number of solutions. For m ∈ {1, ..., M}, k_m is the number of compatible actions and r_m is the reward of a solution m. The game is stateless and terminates immediately after both players simultaneously choose an action. By choosing the same solution, both players get a reward r_m associated with the chosen solution. This means that the solutions are not equally optimal if the values in {r_m} are not identical. Similarly, if the values in {k_m} are not identical, then the solutions are not equally likely to be chosen by a uniform joint policy. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H).
For CMG-S, we set (M = 32, k_m = 8, r_m = 0.5(1 + (m−1)/(M−1))), which causes each solution to have a different reward, ranging from 0.5 to 1. There are 32 solutions, each with 8 compatible actions, so there are 32 × 8 = 256 possible actions for each player. For CMG-H, we use (M = 32, k_m = m, r_m = 1), which makes solutions with a smaller number of compatible actions harder to find by random exploration. There are 32 solutions, each with a different number of compatible actions, ranging from 1 to 32. The number of available actions for each player is Σ_{m=1}^{32} m = 528.
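A minimal sketch of one CMG round under this parameterization (illustrative code; we assume mis-coordinated actions yield a zero reward, consistent with the payoff structure described above):

import numpy as np

def make_cmg_step(k, r):
    # k[m], r[m]: number of compatible actions and reward of solution m.
    sol_of_action = np.repeat(np.arange(len(k)), k)  # action -> solution id

    def step(a1: int, a2: int) -> float:
        # Both players are rewarded only if their actions belong to the
        # same solution; otherwise the (assumed) reward is 0.
        m1, m2 = sol_of_action[a1], sol_of_action[a2]
        return float(r[m1]) if m1 == m2 else 0.0
    return step

# CMG-H: 32 solutions, solution m has m compatible actions, reward 1.
step = make_cmg_step(k=np.arange(1, 33), r=np.ones(32))  # 528 actions total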
C.2 POINT MASS RENDEZVOUS (PMR)
PMR is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The observation of each agent includes its absolute position, current velocity, and the relative distance to the landmarks and the other agent. These features are concatenated as a 1-D vector of length 14. The possible actions are: no op and move {up, down, left, right}. In PMR-C, the start positions of the agents are {(0.3, 0), (-0.3, 0)} and the landmark positions are {(1.59, 1.59), (1.59, -1.59), (-1.59, 1.59), (-1.59, -1.59)}. For PMR-L, the start and the landmark positions are {(1, 0), (0, 1)} and {(0, 2.25), (0, 0.75), (0, -0.75), (0, -2.25)}. An episode is terminated after 50 timesteps. The agents are incentivized to go to the same landmark and stay close together with the reward function
r_t = 1 − d(p_i, c) − min_{l∈L} d(l, c),

where d(·, ·) is the Euclidean distance between two points, p_i is the 2-d coordinate of agent i, c is the average coordinate of all agents, and L is the set of all landmarks.
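The reward above translates directly into code; a small sketch (function and variable names are our own):

import numpy as np

def pmr_reward(positions: np.ndarray, landmarks: np.ndarray) -> np.ndarray:
    # positions: (n_agents, 2) agent coordinates p_i; landmarks: (4, 2) set L.
    c = positions.mean(axis=0)                            # mean coordinate c
    d_to_c = np.linalg.norm(positions - c, axis=1)        # d(p_i, c)
    d_goal = np.linalg.norm(landmarks - c, axis=1).min()  # min_l d(l, c)
    return 1.0 - d_to_c - d_goal                          # r_t for each agent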
C.3 MULTI-RECIPE OVERCOOKED
We implement a multi-recipe version of the game based on the work of Wu et al. (2021). In this version of Overcooked, there are four ingredients: lettuce, onion, tomato, and carrot. The ingredients are randomly placed at pre-defined positions in the layout. Particularly, the lettuce and the onion are randomly placed on the left or the middle counter. The tomato and the carrot are randomly placed on the right or the middle counter. These ingredients can be composed into different recipes, making each ingredient unique: four recipes (LettuceSalad, TomatoSalad, ChoppedCarrot, ChoppedOnion) require only a single ingredient, while the other two (TomatoLettuceSalad, TomatoCarrotSalad) require two ingredients. The ingredients have to be chopped at the chopping station before being placed on the plate. After the required ingredients are put on the plate, they must be delivered to the delivery station.
Both players have the same egocentric observation and action spaces. The observation is a set of hand-crafted features that represent a local view of the environment. Specifically, we use the following features: absolute position and facing direction, relative distance to the objects and the other agent, state of the ingredients, four booleans indicating if the agent is next to a counter in four cardinal positions, currently held items, the state of the held foods, and the type and state of the items in front of the agent. These features are concatenated as a 1-D vector of length 54. At every timestep, each player has to choose one of the six possible actions: no op, move {up, down, left, right}, and interact. An episode lasts at most 200 timesteps and terminates immediately after a successful delivery. An episode without delivery is considered unsuccessful. We incentivize the agents to interact with the
objects and deliver as fast as possible with the following reward function:
r_t = r_interact + r_progress + r_complete − p,

where r_interact is a shaped reward given when an agent interacts with an object for the first time in an episode, r_progress is given when the players progress toward a recipe completion (i.e., chopping required ingredients or putting chopped ingredients on the plate), r_complete is given upon successful delivery, and p is a per-timestep penalty. We use r_interact = 0.5, r_progress = 1.0, r_complete = 10, and p = 0.1. We note that recipes with more than one ingredient give only slightly higher rewards (r_interact + r_progress) but are significantly harder to discover by random exploration than those with one ingredient.
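Schematically, the per-timestep reward combines event flags as follows (a sketch; the boolean flags stand in for the game engine's internal event detection, which we do not reproduce here):

def overcooked_reward(first_interact: bool, progressed: bool,
                      delivered: bool) -> float:
    # first_interact: agent touched an object for the first time this episode
    # progressed:     players moved a recipe forward (chop / plate)
    # delivered:      a completed recipe was served
    r_interact, r_progress, r_complete, p = 0.5, 1.0, 10.0, 0.1
    return (r_interact * first_interact + r_progress * progressed
            + r_complete * delivered - p)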
Additional experimental details: For specialist agents, the rewards are given when interacting, progressing, or completing a specific recipe. For the held-out populations, we remove incompetent policies with an expected return of less than zero from the test populations created by TrajeDi and LIPO. We do not remove those in the training populations. We do this because testing with an incompetent policy does not give any meaningful information, as almost all episodes will be unsuccessful.
D IMPLEMENTATION DETAILS
Algorithm 1: Training process of LIPO (on-policy). This pseudocode is based on self-play. Blue text is related to the MI objective. LIPO-specific code is highlighted in green.
Input: A population P = {π_A | 1 ≤ A ≤ N}, the number of XP pairs used to approximate J̃_XP(π_A, P) (n_xp), the number of players in an episode (m), and the numbers of SP and XP episodes per iteration (E_SP and E_XP).
while not done do
  for A ∈ {1, ..., N} do
    B_sp ← GetEpisodeRollouts(π_A, π_A, E_SP, m)
    Compute J_SP(π_A) using B_sp
    B_xp ← GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m)
    Compute J̃_XP(π_A, P) using B_xp (Eq. 4)
    Compute L_MI using B_sp and B_xp (Eq. 9)
    θ_A ← θ_A − ∇_{θ_A}[−J_SP + λ_XP J̃_XP + λ_MI L_MI]
    ϕ_A ← ϕ_A − λ_MI ∇_{ϕ_A} L_MI

Algorithm 2: Rollout collection functions
Function GetEpisodeRollouts(π_A, π_B, E, m):
  B ← {}
  for i ∈ {1, ..., m} do
    for episode ∈ {1, ..., E/m} do
      z^1, ..., z^m ∼ p(z^1, ..., z^m)
      z^j ← {z^k}_{k≠i}
      π^j(·|·, z^j) = Π_{k≠i} π^k(·|·, z^k)
      τ ∼ ρ(π_A^i(·|·, z^i), π_B^j(·|·, z^j))
      B ← B ∪ {τ}
  return B

Function GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m):
  B_xp ← {}
  P′_{−A} ← SampleWithoutReplacement(P_{−A}, n_xp)
  for π_B ∈ P′_{−A} do
    B ← GetEpisodeRollouts(π_A, π_B, E_XP/|P′_{−A}|, m)
    B_xp ← B_xp ∪ B
  return B_xp
The pseudocode for LIPO is shown in Algorithm 1 and 2. If there are more than two players (m > 2), π^j would represent the joint policy of all players except player i, π^j(a_t^j | τ_t^j) = Π_{k≠i} π^k(a_t^k | τ_t^k). We note that scaling LIPO to more than two players does not increase the training time. It is the same as the two-player setting as long as the numbers of SP and XP episodes are the same. In practice, however, more XP episodes might be needed to better estimate J̃_XP. We use the parameter sharing technique for better sample efficiency and faster convergence (Tan, 1993; Foerster et al., 2018; Rashid et al., 2018). Assuming that a policy π_A^i is a neural network parameterized by θ_A^i, this means that for a joint policy (π_A^1, π_A^2), we have θ_A^1 = θ_A^2. Still, π_A^1 and π_A^2 can behave differently as they observe different parts of the environment and have a different player indicator concatenated with their local observations.
All methods are implemented on top of MAPPO except MAVEN. The critic, policy, and discriminator are feed-forward neural networks with two hidden layers, each having 64 units. For a fair comparison, we use the same or more environment steps in the policy update of the baselines compared to LIPO. Common hyperparameters of methods based on MAPPO are shown in Table. 2.
D.1 MAPPO
MAPPO is the base MARL algorithm for all baselines except MAVEN. The policy parameters are shared among all policies. The critic takes a state of the environment and outputs an expected return of a given global state. The global state is provided by the environment and only used during training. For the complete training objectives of MAPPO, we refer the reader to Appendix A of Yu et al. (2021).
D.2 MULTI SP
A simple but effective way to produce diverse agents is to train multiple SP agents with different neural network initializations and random seeds. Specifically, each run produces a joint policy π_A that maximizes J_SP(π_A) using MAPPO.
D.3 SPMI
A single run of an SP agent trained with the added MI objective I({o^i, a^i}; z). SPMI uses a shared z for both policies and considers each z as a different joint policy. We train SPMI using the same training procedure as LIPO by setting N = 1, λ_XP = 0 and z^1 = z^2. The discriminator takes a local observation o^i and the action distribution π^i(·|o^i) as inputs and outputs the discrete probability of the latent variable. The latent variable of all policies is shared during an episode.
D.4 MAVEN
MAVEN (Mahajan et al., 2019) is explicitly designed for learning diverse solutions in cooperative multi-agent environments. A joint policy is represented as π(·|τ, z), and each mode of behavior is represented by the latent variable z. Similar to SPMI, MAVEN uses a shared z for all policies. We use the same network architecture presented in Mahajan et al. (2019) with recurrent neural networks. However, we do not use the hierarchical policy but sample z from the uniform distribution. The latent variable of all policies is shared.
D.5 MULTI SPMI AND MULTI MAVEN
A population containing joint policies from multiple runs of SPMI and MAVEN. Like Multi SP, each run has different neural network initializations and random seeds. The population size is |P| = n_seed |z|, where n_seed is the number of runs. Notably, this baseline uses the training data differently from the base algorithms. Instead of training a long single run, this approach allows the policy to "restart" with a different initialization of the neural networks. For example, training a single run with |z| = N, n_seed = 1 might not discover as many solutions as training n_seed runs with |z| = N/n_seed, even though the population size is the same. Empirically, we find that multiple shorter runs can find more solutions than a single long run of the corresponding algorithm. Thus, we omit the results of the base algorithms in multi-recipe Overcooked.
D.6 TRAJEDI
TrajeDi produces a population of diverse agents that also maximize the expected return in cooperative environments. The diversity measure of this method is based on the Jensen-Shannon divergence (JSD) between the trajectory distributions of the policies. Different from the original implementation, we remove the best-response (BR) policy from the population. Since the BR policy might work well with only a subset of solutions, removing BR potentially increases the number of variations in the population. Our modified loss is:

L = −[ Σ_{A=1}^{N} J_SP(π_A) + α JSD_γ(π_1, ..., π_N) ], (11)
where JSD is the proposed diversity objective of TrajeDi, and α and γ are the hyperparameters of TrajeDi.
D.7 LIPO
LIPO uses the same implementation as SPMI except that LIPO uses an independent latent variable for each policy and λ_XP > 0. Additionally, LIPO uses extra critics for the XP trajectories. In total, LIPO has an SP critic V_sp^{π_A} and N − 1 XP critics {V_xp^{π_A, π_B} | π_B ∈ P_{−A}}. In each training iteration, LIPO collects SP and XP trajectories of all policy combinations. MAPPO is used for both maximizing J_SP and minimizing J̃_XP. The critics are trained using the SP trajectories and all of the XP trajectories, while the policy is trained using the SP trajectories and the XP trajectories from the XP pair that has the highest joint return, max(B_xp).
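A sketch of this pair selection, where only the most compatible XP pair (the max over B_xp) contributes rollouts to the policy update (names are illustrative, not from the released code):

def select_xp_rollouts(xp_batches):
    # xp_batches: dict mapping partner id B to a pair
    #   (mean joint XP return with pi_B, rollouts collected with pi_B).
    # All batches train the XP critics; only the highest-return pair,
    # i.e., max(B_xp) in the text, is used for the policy update.
    best = max(xp_batches, key=lambda b: xp_batches[b][0])
    return xp_batches[best][1]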
D.8 GENERALIST AGENT
The policy and critic networks of a generalist agent use two 256-unit GRU layers (Cho et al., 2014) followed by a linear layer. The input also includes the reward and action of the previous timestep. We use MAPPO for training a generalist agent. We train both the policy and critic with a batch size of 320,000 timesteps using truncated backpropagation through time (BPTT). The samples are reused for 15 epochs. Each minibatch contains 1,600 sequences with a maximum length of 50 timesteps. We also anneal the learning rate of the generalist agent linearly from 0.005 to 0.003. Other hyperparameters are shared with other methods (Tab. 2). A training
partner for a generalist agent is sampled uniformly from the training population at the beginning of an episode.
E RELATIONSHIP WITH RAHMAN ET AL. (2022)
Rahman et al. (2022) propose to optimize the self-play returns while maximizing the diversity term Div(C), where C is an N × N cross-play payoff matrix. Specifically, they propose to learn diverse policies via the following objective:
max_C Tr(C) + Div(C) (12)
They propose to use Div(C) = Det(κ(C)), where κ(C) is an N × N matrix with κ_{i,j}(C) being the similarity between policies π_i and π_j. The radial basis function (RBF) kernel of the empirical returns is used to measure the similarity between two policies:
κ_{i,j}(C) = exp(−‖C_{i,·} − C_{j,·}‖² / σ²), (13)
where C_{i,·} is the i-th row of C. In other words, C_{i,·} is the vector containing the empirical returns of π_i when matched with the other policies in the population. Intuitively, this objective diversifies the policies via the expected returns, similar to LIPO. Using the same notation for the cross-play matrix, we can write the objective of training a LIPO population as:
max_C Tr(C) − λ_XP Σ_i max_{1≤j≤N, j≠i} C_{i,j} (14)
This objective wants the diagonal (JSP) to be maximized and the off-diagonal entries of the cross-play matrix (J̃XP) to be minimized. This is a special case of Eq. 12 where Div(C) is based on policy compatibility.
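For illustration, both diversity terms can be computed directly from a cross-play payoff matrix C; the sketch below implements the RBF-kernel determinant of Eq. 13 and the off-diagonal max penalty of Eq. 14 (our own code, not taken from either paper's implementation):

import numpy as np

def div_rbf_det(C: np.ndarray, sigma: float = 1.0) -> float:
    # Div(C) = Det(kappa(C)) with kappa_ij = exp(-||C_i. - C_j.||^2 / sigma^2)
    diff = C[:, None, :] - C[None, :, :]
    kappa = np.exp(-np.square(diff).sum(axis=-1) / sigma**2)
    return float(np.linalg.det(kappa))

def lipo_population_objective(C: np.ndarray, lam_xp: float) -> float:
    # Tr(C) - lam_xp * sum_i max_{j != i} C_ij  (Eq. 14)
    masked = C - np.diag(np.full(len(C), np.inf))  # exclude the diagonal
    return float(np.trace(C) - lam_xp * masked.max(axis=1).sum())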
F DISCUSSIONS
Agents trained with LIPO are incentivized to act adversarially toward agents that behave differently from itself. This behavior might not be desirable for certain downstream tasks. For example, agents produced by LIPO might not be suitable for interacting with humans as they would refuse to conform with the user. However, as shown in Sec. 4.6, training a generalist agent with these agents would have the opposite effect: the generalist agent would try to comply with the current partner’s preference.
Since LIPO produces a population of near-optimal solutions, a generalist agent trained with a LIPO population might not coordinate well with significantly sub-optimal agents. In prior work, Strouse et al. (2021) show that augmenting the training population with past checkpoints (FCP) helps the trained generalist agent to effectively coordinate with sub-optimal agents. Since LIPO and FCP are orthogonal, populations created by LIPO can also be augmented in the same way as FCP.
Previous works find incompatible policies to be undesirable since they are generally the result of coordinated symmetry breaking (Bard et al., 2020; Hu et al., 2020; 2021); these policies perform poorly when interacting with unseen partners. However, we show that learning incompatible policies can be useful for generating behaviorally diverse agents in various scenarios. The produced agents can then be used as training partners for a generalist agent. Using LIPO in environments where many solutions are equivalent may produce such undesirable symmetry-breaking conventions. We believe that LIPO can be combined with other techniques, e.g., other-play (Hu et al., 2020) and the equivariant coordinator (Muglich et al., 2022), to avoid learning arbitrary symmetry-breaking conventions. We leave the study of the combination of LIPO and these techniques for future work. A concurrent work by Cui et al. (2023) proposes an extension of LIPO by combining insights from off-belief learning (Hu et al., 2021) to avoid the "sabotaging" behavior of LIPO agents.
G LIMITATIONS
LIPO requires an additional hyperparameter λ_XP. If λ_XP is too big, it is possible that the main RL objective, J_SP, would be interfered with, resulting in an incompetent joint policy (Sec. 4.2). An
adaptive mechanism that selects a suitable value for λXP at different stages of training could help increase training stability.
Although LIPO can be fully parallelized, it requires more computation than the baselines to get an accurate approximation of J̃XP, which makes it harder to scale up to bigger population sizes. Instead of collecting all policy pairs, sampling a portion of policy pairs to approximate J̃XP could reduce computation cost and training time at a potential cost of diversity (Sec. 4.3). Instead of using a uniform sampling, a mechanism that selects the best pair to sample (e.g., bandit algorithm) might help mitigate the diversity loss from using lower nxp.
In this work, we only investigate the effectiveness of LIPO under a new set of cooperative environments with multiple discrete solutions. Investigating LIPO in other well-known environments where the solution space might be continuous, e.g., continuous control (Peng et al., 2021), or environments with only a single goal, e.g., SMAC (Samvelyan et al., 2019), might yield interesting results and pose different challenges. We leave this further investigation as our future work.
H HYPERPARAMETERS (CMG AND PMR)
We provide the searched values of each method in Tab. 3, 4, 5, 6, 7 and 8. The hyperparameters are searched individually for each population size. We use three random seeds for each set of hyperparameters. We do not use any validation method. Instead, we present the results using the best parameters in the main paper. For LIPO, we set λMI as 0.0 and 0.5 in CMG and PMR, respectively.
I HYPERPARAMETERS (OVERCOOKED)
For each method, we use the parameters that give the highest entropy to generate five populations for Sec. 4.5, and Sec. 4.6. The searched values of each method are:
• TrajeDi: We perform a grid search with following hyperparameters: α ∈ {5, 10} and γ ∈ {0, 0.5}. We use α = 5 and γ = 0.5 for the results in the paper.
• Multi SPMI: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• Multi MAVEN: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• LIPO: We perform a grid search with following hyperparameters: λXP ∈ {0.2, 0.3} and λMI ∈ {0.1, 0.5}. We use λXP = 0.3 and λMI = 0.5 for the results in the paper.
J ADDITIONAL RESULTS
J.1 ADDITIONAL ABLATION RESULTS
We provide additional results of numbers of learned solutions with varying N and λXP (Fig. 10), numbers of competent agents in CMG with varying N and λXP (Fig. 11), and numbers of learned solutions with varying nxp in CMG (Fig. 12) here. These results are consistent with the analysis presented in Sec. 4.2 and 4.3.
J.1.1 ADDITIONAL RESULTS WITH VARYING N AND λXP
J.1.2 ADDITIONAL RESULTS WITH VARYING nxp
J.2 RADAR PLOTS OF RECIPE FREQUENCIES
Fig. 13 shows recipe frequencies of the population with highest recipe entropy (out of five runs) from each method. The frequencies are calculated based on completed recipe from 1,000 self-play episodes of each agent as described in Sec. 4.5. Qualitatively, we can see that agents in a LIPO
population have more distinct recipe frequencies. That is, agents are different from each other in terms of recipe preference.
J.3 VISUALIZATION OF BEHAVIORS
We visualize the behaviors of joint policies produced by LIPO in PMR and Overcooked at https://sites.google.com/view/iclr-lipo-2023. Here, we show snapshots of four joint policies that have a distinct recipe preference in Overcooked. | 1. What is the main contribution of the paper in cooperative multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed LIPO framework?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Can the LIPO approach handle significantly sub-optimal partners in real-world settings?
5. Is there a guarantee that the space of behaviors represented by a single policy parameterized by a latent state will represent a unique class of solutions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper presents the LIPO framework for learning diverse sets of policies for cooperative multi-agent reinforcement learning tasks. Given an existing population consisting of tuples of joint policies, the LIPO objective maximizes the performance of a new joint policy, while penalizing it for performing well when paired with other policies within the population. LIPO also, optionally, maximizes the diversity of the trajectories generated by an individual team of policies using policies parameterized by a randomly sampled latent variable, and an additional a mutual information objective.
Experiments demonstrate that LIPO is better than several recently developed alternatives at identifying the space of solutions to cooperative games, including simple matrix games, the cooperative particle environments, and the much more complex Overcooked! environment. They demonstrate that LIPO is able to more reliably identify the full set of possible solutions, rather than just a subset of these solutions. They also evaluate "generalist" policies trained against the population of policies generated with LIPO (or one of the baseline methods). They demonstrate that policies trained against the LIPO population were significantly more robust (compared to the baseline methods) in terms of their performance when paired with members of a held-out "test" population.
Strengths And Weaknesses
The main strength of this paper is the intuitive nature of the LIPO approach, and the empirically demonstrated performance in both solution enumeration and as the basis for training generalist cooperative policies.
One potential weakness of LIPO, and related approaches, is the fact that they focus on enumerating the space of near-optimal solutions. This means that generalist policies trained against these populations may perform poorly when paired with significantly sub-optimal partners. In real-world settings, using LIPO-like methods could therefore lead to policies that are not robust to realistic partners who are unlikely to behave in a truly optimal way.
Another potential weakness is that there is no guarantee that the space of behaviors represented by a single policy parameterized by a latent state will represent a unique class of solutions. Without the ability to recognize unique solutions, LIPO may not be able to recognize solutions that are unlikely to occur in a ad hoc setting (since other agents would not choose strategies that are likely to mis-coordinate).
Clarity, Quality, Novelty And Reproducibility
The paper is well written, and has no obvious technical faults. The approach to policy diversity is relatively novel, particularly the idea of explicitly training new policies to mis-coordinate with existing members of the population. The authors provide their code, though this has not been checked.
ICLR | Title
Generating Diverse Cooperative Agents by Learning Incompatible Policies
Abstract
Training a robust cooperative agent requires diverse partner agents. However, obtaining those agents is difficult. Previous works aim to learn diverse behaviors by changing the state-action distribution of agents. But, without information about the task’s goal, the diversified agents are not guided to find other important, albeit sub-optimal, solutions: the agents might learn only variations of the same solution. In this work, we propose to learn diverse behaviors via policy compatibility. Conceptually, policy compatibility measures whether policies of interest can coordinate effectively. We theoretically show that incompatible policies are not similar. Thus, policy compatibility—which has been used exclusively as a measure of robustness—can be used as a proxy for learning diverse behaviors. Then, we incorporate the proposed objective into a population-based training scheme to allow concurrent training of multiple agents. Additionally, we use state-action information to induce local variations of each policy. Empirically, the proposed method consistently discovers more solutions than baseline methods across various multi-goal cooperative environments. Finally, in multi-recipe Overcooked, we show that our method produces populations of behaviorally diverse agents, which enables generalist agents trained with such a population to be more robust.
1 INTRODUCTION
Cooperating with unseen agents (e.g., humans) in multi-agent systems is a challenging problem. Current state-of-the-art cooperative multi-agent reinforcement learning (MARL) techniques can produce highly competent agents in cooperative environments (Kuba et al., 2021; Yu et al., 2021). However, those agents are often overfitted to their training partners and cannot coordinate with unseen agents effectively (Carroll et al., 2019; Bard et al., 2020; Hu et al., 2020; Mahajan et al., 2022).
The problem of working with unseen partners, i.e., ad-hoc teamwork problem (Stone et al., 2010), has been tackled in many different ways (Albrecht & Stone, 2018; Carroll et al., 2019; Shih et al., 2020; Gu et al., 2021; Rahman et al., 2021; Zintgraf et al., 2021; He et al., 2022; Mirsky et al., 2022; Parekh et al., 2022). These methods allow an agent to learn how to coordinate with unseen agents and, sometimes, humans. However, the success of these methods depends on the quality of training partners; it has been shown that the diversity of training partners is crucial to the generalization of the agent (Charakorn et al., 2021; Knott et al., 2021; Strouse et al., 2021; McKee et al., 2022; Muglich et al., 2022). In spite of its importance, obtaining a diverse set of partners is still an open problem.
The simplest way to generate training partners is to use hand-crafted policies (Ghosh et al., 2020; Xie et al., 2021; Wang et al., 2022), domain-specific reward shaping (Leibo et al., 2021; Tang et al., 2021; Yu et al., 2023), or multiple runs of the self-play training process (Grover et al., 2018; Strouse et al., 2021). These methods, however, are neither scalable nor guaranteed to produce diverse behaviors. Prior works propose techniques aiming to generate diverse agents by changing the state visitation and action distributions (Lucas & Allen, 2022), or the joint trajectory distribution of the agents (Mahajan et al., 2019; Lupu et al., 2021). However, as discussed by Lupu et al. (2021), there is a potential drawback of using such information from trajectories to diversify the behaviors. Specifically, agents that make locally different decisions do not necessarily exhibit different high-level behaviors.
To avoid this potential pitfall, we propose an alternative approach for learning diverse behaviors using information about the task’s objective via the expected return. In contrast to previous works that
use joint trajectory distribution to represent behavior, we use policy compatibility instead. Because cooperative environments commonly require all agents to coordinate on the same solution, if the agents have learned different solutions, they cannot coordinate effectively and, thus, are incompatible. Consequently, if an agent discovers a solution that is incompatible with all other agents in a population, then the solution must be unique relative to the population. Based on this reasoning, we introduce a simple but effective training objective that regularizes agents in a population to find solutions that are compatible with their partner agents but incompatible with others in the population. We call this method “Learning Incompatible Policies” (LIPO).
We theoretically show that optimizing the proposed objective will yield a distinct policy. Then, we extend the objective to a population-based training scheme that allows concurrent training of multiple policies. Additionally, we utilize a mutual information (MI) objective to diversify local behaviors of each policy. Empirically, without using any domain knowledge, LIPO can discover more solutions than previous methods under various multi-goal settings. To further study the effectiveness of LIPO in a complex environment, we present a multi-recipe variant of Overcooked and show that LIPO produces behaviorally diverse agents that prefer to complete different cooking recipes. Experimental results across three environments suggest that LIPO is robust to the state and action spaces, the reward structure, and the number of possible solutions. Finally, we find that training generalist agents with a diverse population produced by LIPO yields more robust agents than training with a less diverse baseline population. See our project page at https://bit.ly/marl-lipo
2 PRELIMINARIES
Our main focus lies in fully cooperative environments modeled as decentralized partially observable Markov decision processes (Dec-POMDP, Bernstein et al. (2002)). In this work, we start our investigation in the two-player variant. A two-player Dec-POMDP is defined by a tuple (S, A^1, A^2, Ω^1, Ω^2, T, O, r, γ, H), where S is the state space, A ≡ A^1 × A^2 and Ω ≡ Ω^1 × Ω^2 are the joint-action and joint-observation spaces of player 1 and player 2. The transition probability from state s to s′ after taking a joint action (a^1, a^2) is given by T(s′ | s, a^1, a^2). O(o^1, o^2 | s) is the conditional probability of observing a joint observation (o^1, o^2) under state s. All players share a common reward function r(s, a^1, a^2), γ is the reward discount factor and H is the horizon length.

Players, with potentially different observation and action spaces, are controlled by policies π^1 and π^2. At each timestep t, the players observe o_t = (o_t^1, o_t^2) ∼ O(o_t^1, o_t^2 | s_t) under state s_t ∈ S and produce a joint action a_t = (a_t^1, a_t^2) ∈ A sampled from the joint policy π(a_t | τ_t) = π^1(a_t^1 | τ_t^1) π^2(a_t^2 | τ_t^2), where τ_t^1 and τ_t^2 contain the trajectory history until timestep t from the perspective of each agent. All players receive a shared reward r_t = r(s_t, a_t^1, a_t^2). The return of a joint trajectory τ = (o_0, a_0, r_0, ..., r_{H−1}, o_H) ∈ T ≡ (Ω × A × R)^H can be written as G(τ) = Σ_{t=0}^{H} γ^t r_t. The expected return of a joint policy (π^1, π^2) is J(π^1, π^2) = E_{τ∼ρ(π^1, π^2)} G(τ), where ρ(π^1, π^2) is the distribution over trajectories of the joint policy (π^1, π^2) and P(τ | π^1, π^2) is the probability of τ being sampled from the joint policy (π^1, π^2).

We use subscripts to denote different joint policies and superscripts to refer to different player roles. For example, π_A = (π_A^1, π_A^2) is a different joint policy from π_B = (π_B^1, π_B^2), and π_A^i and π_A^j are policies of different roles.¹ Finally, we denote the expected joint return of self-play (SP) trajectories—where both policies are part of the same joint policy π_A—as J_SP(π_A) := J(π_A^1, π_A^2), and the expected joint return of cross-play (XP) trajectories—where policies are chosen from different joint policies π_A and π_B—as J_XP(π_A, π_B) := J(π_A^1, π_B^2) + J(π_B^1, π_A^2).
Since we are interested in creating distinct policies for any Dec-POMDP, we need an environment-agnostic measure that captures the similarity of policies. First, we consider a measure that can compute the similarity between policies of the same role i, e.g., π_A^i and π_B^i. We can measure this with the probability of a joint trajectory τ produced by either π_A^i or π_B^i. However, in the two-player setting, we need to pair these policies with a reference policy π_ref^j. Specifically, π_A^i and π_B^i are considered similar if they are likely to produce the same trajectories when paired with an arbitrary reference policy π_ref^j. We define similar policies as follows:
¹Note that LIPO can be applied to environments with more than two players with a slight modification. Specifically, a policy π^j would represent the joint policy of all players except player i, π^j(a_t^j | τ_t^j) = Π_{k≠i} π^k(a_t^k | τ_t^k).
Definition 2.1 (Similar policies). Considering two policies of the same role i, π_A^i and π_B^i, and a reference policy π_ref^j of a different role j, π_A^i is similar to π_B^i up to ϵ if and only if max_{τ∈T} |1 − P(τ | π_A^i, π_ref^j) / P(τ | π_B^i, π_ref^j)| ≤ ϵ, where 0 ≤ ϵ ≤ 1.
Next, we consider an alternate view on assessing the similarity between policies using policy compatibility (Section 3). Policy compatibility measures the performance difference of a joint policy π_B before and after one of its policies π_B^i is substituted by another policy π_A^i. We define compatibility between a policy π_A^i and a joint policy π_B as follows:

Definition 2.2 (Compatible policies). Given a policy π_A^i and a joint policy π_B, π_A^i is compatible with π_B if and only if J(π_A^i, π_B^j) ≥ (1 − ϵ) J_SP(π_B).
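Both definitions reduce to quantities that can be estimated from rollouts; for instance, the compatibility test of Def. 2.2 is a one-line comparison of empirical returns (an illustrative sketch with hypothetical names):

def is_compatible(j_cross: float, j_sp_b: float, eps: float) -> bool:
    # j_cross: rollout estimate of J(pi_A^i, pi_B^j)
    # j_sp_b:  rollout estimate of J_SP(pi_B)
    return j_cross >= (1.0 - eps) * j_sp_b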
3 LEARNING INCOMPATIBLE POLICIES (LIPO)
Our goal is to create distinct policies and, therefore, a population of diverse agents. First, we theoretically show that policy compatibility can be used to identify whether two policies are different. Based on this observation, we propose a novel training objective that produces a distinct policy. Then, we extend this objective for training a population of diverse policies. Finally, we incorporate an MI objective that encourages each policy to learn local variations.
3.1 LEARNING A DISTINCT POLICY VIA POLICY COMPATIBILITY
In this section, we motivate our objective by looking at two joint policies: π_A = (π_A^1, π_A^2) and π_B = (π_B^1, π_B^2). The goal is for π_A to learn a different behavior from π_B via the compatibility criterion. Importantly, the compatibility criterion can be computed empirically without direct access to the trajectory distribution, which can be difficult to estimate. Under mild assumptions, we can simplify the setting such that a simple relationship between the similarity measure and the compatibility criterion emerges. By reasoning about the expected return under different pairs of policies, we derive our main result.
Theorem 3.1. If π_A^i is similar to π_B^i, then π_A^i is compatible with π_B. (The proof is in App. A.)
Corollary 3.2. If π_A^i is not compatible with π_B, then π_A^i is not similar to π_B^i.
The result from Corollary 3.2 shows that we can find a policy π_A^i that is not similar to π_B^i by decreasing its compatibility with π_B until they are incompatible, i.e., J(π_A^i, π_B^j) < (1 − ϵ) J_SP(π_B). Additionally, we can ensure that π_A learns a meaningful solution by maximizing J_SP(π_A). Assuming that π_B has learned a solution and is fixed, the optimization objective of π_A can be written as

max_{π_A} J_SP(π_A) subject to J(π_A^i, π_B^j) < (1 − ϵ) J_SP(π_B) ∀i, j ∈ {1, 2}, i ≠ j (1)

A way to solve such a constrained problem is to convert the constraints into regularization terms. For simplicity, we use a common λ_XP > 0 as a hyperparameter for the constraints. Then, we can write the soft objective of Eq. 1 as

max_{π_A} J_SP(π_A) − λ_XP J_XP(π_A, π_B) (2)
3.2 LEARNING A POPULATION OF DIVERSE POLICIES
To create a population of N diverse policies, P = {π_A | 1 ≤ A ≤ N}, we need an objective that requires each member of the population to have a different behavior relative to the rest of the population. We can write such an objective by expanding the XP term in Eq. (2) to include all other policies in the population. Additionally, we relax the assumption that other policies are fixed to allow concurrent training of all policies. For a policy π_A ∈ P, with an aggregation function f_agg, its objective becomes

max_{π_A} J_LIPO(π_A, P) = J_SP(π_A) − λ_XP J̃_XP(π_A, P), (3)

where J̃_XP(π_A, P) = f_agg(B_xp^A), (4)
B_xp^A = {J_XP(π_A, π_B) | π_B ∈ P_{−A}}, (5)
P_{−A} = P \ {π_A} (6)
While using the average operation as the aggregation function is plausible, we find that using the max operation helps stabilize the training process and produces more diverse policies. We suspect that the average operation might produce many conflicting gradients and does not prioritize compatible XP pairs. We refer to JLIPO as the compatibility gap between a policy πA and a population P .
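As a sketch, the compatibility gap of one population member can be computed from return estimates as follows (illustrative names; in practice the returns are estimated from SP and XP rollouts):

def compatibility_gap(j_sp_a: float, xp_returns: list, lam_xp: float) -> float:
    # j_sp_a:     estimate of J_SP(pi_A)
    # xp_returns: J_XP(pi_A, pi_B) for every pi_B in P_{-A} (B_xp^A, Eq. 5)
    j_xp_tilde = max(xp_returns)          # f_agg = max (Eq. 4)
    return j_sp_a - lam_xp * j_xp_tilde   # J_LIPO of Eq. 3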
We can see that the compatibility gap objective only uses the expected return (JSP and J̃XP) and is insensitive to the state and action information. We argue that this distinction between LIPO and previous methods helps the agents discover more solutions in various situations (Sec. 4.1 and 4.5).
3.3 INDUCING VARIATIONS IN EACH POLICY
It is important to note that, regardless of the population size, there could be policies of role i that are compatible with π_A ∈ P but not similar to π_A^i. We consider those policies to be variations of π_A^i and propose to capture such variations via an MI objective. Specifically, we condition π_A^i on a latent variable z^i such that π_A has the form π_A(a|τ) = E_{(z^1, z^2)} π_A^1(a^1 | τ^1, z^1) π_A^2(a^2 | τ^2, z^2), where p(z^1, z^2) is a pre-defined prior distribution. We can induce variations of π_A^i by maximizing I({o^i, a^i}; z^i), where I(·; ·) is the MI between two random variables. Intuitively, this objective encourages each policy to observe different observations and perform different actions given different values of the latent variable. However, maximizing I({o^i, a^i}; z^i) directly is intractable; instead, we optimize the variational lower bound of the MI (Jordan et al., 1999) (see App. B for the derivation):

I({o^i, a^i}; z^i) ≥ H(z^i) + E_{z^i, (o^i, a^i)}[log q_{ϕ_A}(z^i | o^i, a^i)], (7)

where q_{ϕ_A}(z^i | o^i, a^i) is an approximation of the true posterior p(z^i | o^i, a^i) parameterized by ϕ_A. So, maximizing I({o^1, a^1}; z^1) and I({o^2, a^2}; z^2) is an optimization problem that can be written as

max_{π_A, ϕ_A} (1/2) Σ_{i=1}^{2} [ H(z^i) + E_{z^i, (o^i, a^i)} log q_{ϕ_A}(z^i | o^i, a^i) ] (8)

In previous work (Mahajan et al., 2019), a shared z (i.e., z^1 = z^2) is used, allowing both policies to collectively switch between different modes of behavior. However, LIPO uses independently sampled z as it utilizes z for a different purpose. Specifically, LIPO maximizes J_LIPO to learn diverse solutions and optimizes the MI objective to learn variations of each solution. That is, the MI objective does not directly impact the diversity between different policies but increases the variations of each individual policy. We note that the MI objective is optional; we show that without the MI objective, LIPO still produces diverse policies (Sec. 4.4).
3.4 IMPLEMENTATION
In practice, we modify the MI objective (Eq. 8) to be differentiable with respect to the policy π_A^i. Specifically, the variational posterior q_{ϕ_A} is modified such that, instead of a sampled action a^i, it takes the action distribution π_A^i(·| o^i, z^i) as an input, i.e., q_{ϕ_A}(z | o, π_A^i(·| o^i, z^i)). In contrast to previous MI-based approaches (Eysenbach et al., 2018; Sharma et al., 2019; Jiang & Lu, 2021; Lucas & Allen, 2022), we can optimize I({o^i, a^i}; z^i) directly without computing an auxiliary reward (Mahajan et al., 2019; Osa et al., 2022). The loss function of the modified MI objective is

L_MI(π_A, ϕ_A) = −(1/2) Σ_{i=1}^{2} E_{z^i, (o^i, a^i)} log q_{ϕ_A}(z^i | o^i, π_A^i(·| o^i, z^i)) (9)

The objective of a policy π_A in a population P becomes

max_{π_A, ϕ_A} J_LIPO(π_A, P) − λ_MI L_MI(π_A, ϕ_A) (10)
We set z as a discrete variable and use the uniform distribution for p(z1) and p(z2). At the beginning of each episode, each policy is given an independently sampled z that will be used until the end of the episode. We use MAPPO (Yu et al., 2021) for maximizing JSP and minimizing J̃XP. More details, including the pseudocode and the extension to more than two players, can be found in App. D.
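With a discrete latent and a classifier head for q_{ϕ_A}, Eq. 9 reduces to a standard cross-entropy loss; a minimal PyTorch-style sketch of this construction (our own illustration, not the released code):

import torch
import torch.nn.functional as F

def mi_loss_one_role(q_logits: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # q_logits: (B, K) logits of q_phi given (o^i, pi_A^i(.|o^i, z^i)).
    # Feeding the action *distribution* (not a sampled action) keeps this
    # loss differentiable w.r.t. the policy parameters, as in Eq. 9.
    # z: (B,) latent indices sampled from the uniform prior p(z^i).
    return F.cross_entropy(q_logits, z)  # = -E[log q_phi(z^i | ...)]

# The full L_MI of Eq. 9 averages this term over the two player roles.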
4 EXPERIMENTS
We study the effectiveness of LIPO under three multi-goal cooperative environments in which both players must collectively choose to accomplish one of the available goals. We evaluate the diversity of a population based on the number of distinct goals achieved. We compare LIPO to other cooperative MARL methods that do not require domain knowledge to generate diverse agents. Our baselines are as follows: (i) Multi SP (multiple runs of self-play), (ii) SPMI (a single run of SP with the added MI objective), (iii) MAVEN (Mahajan et al., 2019), and (iv) TrajeDi (Lupu et al., 2021). We also use Multi SPMI and Multi MAVEN as baselines by training SPMI and MAVEN multiple times. We also discuss methods that utilize domain knowledge in Sec. 5.
4.1 DISCOVERING DIVERSE SOLUTIONS
We use two simple environments to study the effectiveness of various methods in discovering solutions: (i) One-Step Cooperative Matrix Game (CMG), in which there are many possible solutions, and (ii) Point Mass Rendezvous (PMR), a temporally extended cooperative navigation environment.
One-Step Cooperative Matrix Game (CMG): A game of CMG is defined by a tuple (M, {k_m}, {r_m}), where M is the number of solutions. For m ∈ {1, ..., M}, k_m is the number of compatible actions and r_m is the reward of a solution m. By choosing the same solution, both players get a reward r_m associated with the chosen solution. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H). For CMG-S, we set (M = 32, k_m = 8, r_m = 0.5(1 + (m−1)/(M−1))), which causes each solution to have a different reward, ranging from 0.5 to 1. For CMG-H, we use (M = 32, k_m = m, r_m = 1), which makes solutions with a smaller number of compatible actions harder to find by random exploration. An example payoff matrix is shown in Fig. 2a.
Figure 2: (a) The payoff matrix of a CMG game with (M = 3, k_m = m, r_m = m). (b, c) The agents (orange) and landmark positions (blue) of PMR-C and PMR-L.
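For reference, the block-diagonal payoff matrix of Fig. 2a can be constructed in a few lines (an illustrative sketch; the helper name is our own):

import numpy as np

def cmg_payoff(k, r):
    # Entry (a1, a2) equals r_m when both actions belong to solution m,
    # and 0 otherwise, giving the block-diagonal structure of Fig. 2a.
    sol = np.repeat(np.arange(len(k)), k)       # action -> solution id
    same = sol[:, None] == sol[None, :]
    return np.where(same, np.asarray(r, dtype=float)[sol][None, :], 0.0)

payoff = cmg_payoff(k=[1, 2, 3], r=[1, 2, 3])   # the 6 x 6 matrix of Fig. 2a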
Point Mass Rendezvous (PMR): The environment is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The goal of this environment is for the two agents to navigate to a landmark together. There are M = 4 landmarks, and we consider each landmark as a solution in this environment. This environment has two modes: PMR-C and PMR-L. In PMR-C, landmarks are distributed evenly on the circumference of a circle. Thus, all landmarks are equally easy to find and optimal. In PMR-L, landmarks are placed on a line. In this scenario, closer landmarks are easier to find. We define the population size |P| of each method as follows: For SPMI and MAVEN, |P| is equal to the number of dimensions of the latent variable, |z|. For Multi SPMI and Multi MAVEN, |P| = |z| · n_seed, where n_seed is the number of random seeds and we use |z| = 8. For Multi SP, TrajeDi, and LIPO, |P| is the number of joint policies in the population.

Results: Fig. 3 shows the numbers of learned solutions, averaged over three runs. In all environments, LIPO consistently discovers more solutions than the baselines, given the same population size. The baselines find fewer solutions in CMG-H and PMR-L than they do in CMG-S and PMR-C, whereas LIPO performs similarly across settings. LIPO is also better than the baselines at finding sub-optimal solutions in CMG-S. We note that Multi SP and TrajeDi perform almost ideally in PMR-C, where all solutions are equivalent, but perform worse in other settings. Also, Multi SPMI finds all four solutions in PMR when the population size is bigger than 8. However, it performs poorly in CMG. LIPO's consistency across environments and settings demonstrates that LIPO is still effective when (i) many solutions exist, (ii) solutions are not equally optimal, and (iii) solutions are not equally likely to be found by random exploration. We have also experimented with stronger regularization coefficients for the baselines, which help the baselines discover more solutions. However, if the regularization coefficient is too large, they fail to produce capable policies.
[Figure 3 panels: (a) CMG-S, (b) CMG-H, (c) PMR-C, (d) PMR-L; each plots the number of discovered solutions (y-axis) against population size (x-axis) for Multi SP, SPMI, MAVEN, TrajeDi, Multi SPMI, Multi MAVEN, LIPO, and an ideal reference line.]
Figure 3: Numbers of discovered solutions. Ideally, if the population size increases by one, one more solution should be discovered, as depicted by the dashed lines (assuming that a joint policy does not produce a multi-modal behavior).
[Figure 4 panels: (a) PMR-C, (b) PMR-L; each plots the number of competent joint policies (y-axis) against population size (x-axis) for λ_XP ∈ {1, 0.5, 0.25, 0.1} and an ideal reference line.]
Figure 4: Numbers of competent joint policies using various combinations of N (x-axis) and λXP (colors) in PMR.
4.2 TRADE-OFF BETWEEN COMPETENCY AND DISSIMILARITY OF JOINT POLICIES
It is possible that optimizing a regularized objective might incur training instability and create incapable policies. Here, we investigate the effect of different combinations of λXP and the population size (N ) on the competency of the policies.
Fig. 4 shows the number of competent joint policies when using different values of N and λXP in PMR. Particularly, in PMR, a joint policy is considered competent when both players stay close to a landmark at the end of an episode. We observe that when the population size is larger than the number of solutions (N > M ), some surplus policies do not learn to reach a goal. Importantly, the number of competent joint policies depends on the value of λXP: lower values of λXP yield more capable policies. However, using too low λXP will generate policies that share a common solution when N ≤ M as shown in App. J.1.1. Additionally, when N ≤ M , all trained agents are competent except when λXP is too high in PMR-L. These results suggest that there is a trade-off between the number of capable joint policies and policy dissimilarity. When using a larger population size, a small λXP should be used to avoid producing incompetent agents, while a bigger λXP should be used with smaller population sizes to ensure the dissimilarity between joint policies.
4.3 TRADE-OFF BETWEEN COMPUTATION COST AND DIVERSITY
Not only is using bigger values of N more likely to produce incompetent policies, but it is also computationally expensive. Formally, the computational complexity of approximating J̃_XP(·, P) is O(N · n_xp), where n_xp is the number of XP pairs used to approximate J̃_XP(π_A, P). So, we investigate a way to reduce the cost of calculating J̃_XP(π_A, P) by reducing n_xp. According to Eq. 5, the default value is n_xp = N − 1. When n_xp < N − 1, the n_xp policies are chosen randomly from P_{−A} by sampling without replacement.
We observe that, while being computationally cheaper, using nxp < N − 1 tends to produce less diverse populations as shown in Fig. 5. Thus, nxp can be considered a hyperparameter
that controls the computation-diversity trade-off. However, as shown by the dashed lines, the effect of nxp on population diversity is less prominent in PMR-C, where solutions are equally likely to be found. We use nxp = N − 1 in all other experiments. See App. J.1.2 for results in CMG.
4.4 EFFECT OF THE MUTUAL INFORMATION OBJECTIVE
Fig. 6 shows the behaviors of the policies produced by LIPO with and without the MI objective in PMR-C. We can see the effect of the MI objective in the variety of the trajectories. Overall, each agent exhibits larger variations given a small MI regularization λMI = 0.5. This result aligns with our motivation of using the MI objective to learn variations of each solution. With or without the MI regularization, LIPO discovers all the landmarks with N = 4.
4.5 DISCOVERING RECIPES IN MULTI-RECIPE OVERCOOKED
Overcooked, a collaborative cooking game, has been used to study the cooperative ability of learned agents in prior works (Carroll et al., 2019; Charakorn et al., 2020; Strouse et al., 2021; McKee et al., 2022). To investigate the usefulness of LIPO in a high-dimensional environment, we implement a more complex version of the game based on the work of Wu et al. (2021); players have to complete and serve one of the six pre-defined recipes as fast as possible, as opposed to delivering a single menu item repeatedly. We emphasize that this environment is much more challenging than the ones in the previous experiments because of various aspects: First, it has a sparse reward signal. Second, there are multiple sub-tasks. Third, different recipes have different sub-tasks. Each of these characteristics of the environment complicates the process of finding diverse solutions. Furthermore, we note that recipes containing a carrot or a tomato are harder to complete than other recipes as they involve an additional coordination step. Particularly, carrot and tomato have to be sent over by the agent on the right, unlike lettuce and onion. Fig. 7 shows an overview of the game.
The goal in this experiment is to learn a population of behaviorally diverse agents. We choose to quantify the diversity of a population based on the entropy of its recipe distribution. For a population
P, we approximate the probability of recipe i being completed as P(recipei|P) ≈ (∑πA mi(πA)) / (∑i ∑πA mi(πA)),
where mi(πA) denotes the frequency of recipe i under a joint policy πA. The recipe frequencies, {mi(πA)|1 ≤ i ≤ 6}, for each joint policy πA ∈ P are measured by counting the completed recipes from 1,000 self-play episodes. For Multi SP, TrajeDi and LIPO, we set N = 8. For Multi SPMI and Multi MAVEN, we use nseed = 8 and |z| = 8.
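As an illustration, this entropy can be computed from the raw recipe counts; a small sketch under our own naming (not the released implementation):

```python
import numpy as np

def recipe_entropy(counts):
    """Entropy of the population recipe distribution.

    `counts` is an (N, 6) array; counts[A, i] = m_i(pi_A), the number of
    times joint policy A completed recipe i over its self-play episodes.
    """
    p = counts.sum(axis=0) / counts.sum()   # P(recipe_i | population)
    p = p[p > 0]                            # ignore unused recipes
    return float(-(p * np.log(p)).sum())

# A population that completes all six recipes uniformly attains the
# maximum entropy log(6) ~= 1.79.
uniform = np.ones((8, 6))
print(recipe_entropy(uniform))  # ~1.7918
```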
Results: Quantitatively, Tab. 1 shows that LIPO has the highest population recipe distribution entropy, averaged over five random seeds. This result indicates that LIPO populations use all recipes more uniformly than the baseline populations, even though some recipes take longer to complete or are harder to find by random exploration. The recipe distribution of populations produced by each method can be found in Fig. 8. We find that LIPO populations with λMI = 0, like those with λMI = 0.5, consistently learn to use the hard-to-find Tomato & Carrot Salad recipe. However, the frequencies of Chopped Tomato and Chopped Carrot are lower (Fig. 8f). This means that there are multiple joint policies that learn to complete the same recipe while being incompatible with each other. We suspect that using λMI > 0 alleviates this problem by regularizing each joint policy to represent a policy with broader state-action coverage (e.g., learning multiple ways of completing a recipe), indirectly pushing other joint policies to use different recipes in order to be incompatible. Qualitatively, we can see in App. J.2 that the baselines produce agents with similar recipe frequencies. The resulting populations, thus, contain agents with a similar recipe preference. In contrast, LIPO produces agents with distinct recipe frequencies, collectively making the population more diverse than the baselines. We also visualize the behaviors learned by LIPO in App. J.3.
4.6 TRAINING GENERALIST AGENTS WITH GENERATED POPULATIONS
In addition to evaluating the diversity of agents in Overcooked, we quantify the usefulness of the produced agents by using them as training partners of a generalist agent and testing the agent with held-out populations. Intuitively, more diverse training partners would enable the agent to generalize and coordinate with unseen agents better. Additionally, we include a population of six specialized SP policies where each policy is trained to complete a specific recipe by adjusting the reward function. The specialist population is created for evaluating the agent when the partner has a strong preference.
Fig. 9a shows that all generalist agents perform similarly when tested with held-out baseline populations. However, the agents trained with a baseline population perform poorly when matched with held-out LIPO and specialist populations. In contrast, those trained with a LIPO population perform better in both situations. Specifically, they have a significantly higher success rate when paired with the specialist with a strong preference for Tomato & Carrot Salad as shown in Fig. 9b. We attribute the success rate difference to the fact that this recipe has a lower completion probability in all except LIPO populations. As a result, generalist agents trained with a LIPO population perform better in terms of the overall success rate when tested with specialist agents. Overall, training with a LIPO population helps the generalist agents to better coordinate with more partner types as indicated by the harmonic means.
5 RELATED WORK
Learning a collection of diverse agents has been utilized in various contexts (Parker-Holder et al., 2020; Sun et al., 2020; Zahavy et al., 2021; Zhou et al., 2021). In the cooperative domain, Canaan et al. (2019; 2020) use the Quality Diversity (QD) algorithm (Mouret & Clune, 2015; Pugh et al., 2016) to produce a population of behaviorally diverse agents. QD, however, requires domain knowledge to encode different types of behaviors. For example, in CMG, the algorithm requires the mapping between actions and corresponding solutions. In Overcooked, it needs to know all possible recipes beforehand. Without such domain knowledge, it would be difficult to use QD to produce a diverse population. TrajeDi (Lupu et al., 2021) produces a diverse population of agents based on the trajectory distribution. Finally, MEP (Zhao et al., 2021) trains a population of agents with an auxiliary reward based on population entropy. Like TrajeDi and MEP, our method does not require domain-specific knowledge. However, to promote behavioral diversity, LIPO utilizes the expected returns of different policy pairs as opposed to state-action information.
The idea of diversifying the empirical return has been explored in the context of finding diverse solutions in non-transitive competitive games (Liu et al., 2021; Balduzzi et al., 2019; Perez-Nieves et al., 2021). In particular, Liu et al. (2021) share some similar ideas with our work. They propose to use the expected returns, when encountering different opponents, and state-action information to promote diversity of agents. A concurrent work by Rahman et al. (2022) applies a similar idea of diversifying the expected joint return to generate diverse partners in cooperative settings. LIPO can be thought of as a special case designed specifically for cooperative environments (see App. E).
MI objectives have been used in RL to learn diverse behaviors (Eysenbach et al., 2018; Sharma et al., 2019; Kumar et al., 2020; Osa et al., 2022). In cooperative MARL, MAVEN (Mahajan et al., 2019) optimizes both RL and MI objectives to encourage the agents to explore in a committed manner and discover diverse solutions. Also, Any-play (Lucas & Allen, 2022) uses a similar objective to produce training partners with many solutions for a generalist agent. In contrast, our approach uses the MI objective to regularize each policy to learn local variations of each solution.
6 CONCLUSION
We propose LIPO, a simple and generic method that can create a population of diverse agents in cooperative multi-agent environments. Unlike previous work that uses state-action information from joint trajectories, LIPO utilizes the concept of policy compatibility to create diverse policies. This alternative view of quantifying diversity makes LIPO less sensitive to the state and action spaces. Also, LIPO uses the MI objective to learn local variations of each solution. Empirically, LIPO consistently produces more diverse populations than the baselines across three multi-goal environments. Finally, in multi-recipe Overcooked, LIPO produces populations of diverse partners that help generalist agents generalize better to unseen agents. We include further discussions and limitations of LIPO in App. F and G.
ACKNOWLEDGEMENT
This work is partially supported by King Mongkut's Institute of Technology Ladkrabang [2566-02-06-002]. We thank Natchaya Sricom for drawing Fig. 1 and 7. We thank Supasorn Suwajanakorn, Sucha Supittayapornpong and Maytus Piriyajitakonkij for their suggestions on early draft versions. We also thank the anonymous reviewers for their constructive feedback.
REPRODUCIBILITY STATEMENT
We have included additional information to reproduce the experimental results in the supplementary text:
• Environment details (App. C)
• Pseudocode and implementation details (App. D)
• Hyperparameters used in all experiments (App. H and I)
The source code is available at https://github.com/51616/marl-lipo.
A PROOF FOR THEOREM 3.1
We prove the relationship between similar policies and compatible policies in Theorem 3.1 under the following assumptions.
Assumption A.1 (All joint trajectories are supported by πB). P(τ|πiB, πjB) > 0, ∀τ ∈ T.
Assumption A.2 (Shared ϵ). A common 0 ≤ ϵ ≤ 1 is used for Def. 2.1 and 2.2.
Assumption A.3 (Positive return). G(τ) > 0, ∀τ ∈ T.
Theorem. If πiA is similar to πiB, then πiA is compatible with πB.
Proof. Let r(τ) = P(τ|πiA, πjB) / P(τ|πiB, πjB). Because πiA is similar to πiB, we know that 1 − ϵ ≤ r(τ) ≤ 1 + ϵ, ∀τ from Def. 2.1. Consequently, we can use importance sampling to write J(πiA, πjB) in relation to J(πiB, πjB):

J(πiA, πjB) = Eτ∼ρ(πiA,πjB) G(τ)
= ∫τ P(τ|πiA, πjB) G(τ)
= ∫τ [P(τ|πiA, πjB) / P(τ|πiB, πjB)] P(τ|πiB, πjB) G(τ)
= ∫τ r(τ) P(τ|πiB, πjB) G(τ) = Eτ∼ρ(πiB,πjB) r(τ) G(τ)
Because 1 − ϵ ≤ r(τ) ≤ 1 + ϵ and G(τ) > 0, ∀τ ∈ T, the expected return J(πiA, πjB) has the following upper and lower bounds:

(1 − ϵ) Eτ∼ρ(πiB,πjB) G(τ) ≤ J(πiA, πjB) ≤ (1 + ϵ) Eτ∼ρ(πiB,πjB) G(τ)
(1 − ϵ) JSP(πB) ≤ J(πiA, πjB) ≤ (1 + ϵ) JSP(πB)

This means that πiA is compatible with πB (Def. 2.2).
∴ If πiA is similar to πiB, then πiA is compatible with πB.
Remark: Assumption A.3 can be satisfied by offsetting the joint return of all trajectories such that minτ G(τ) > 0. However, since we use MAPPO as the base algorithm, the expected return is subtracted by a baseline to compute the advantage during the policy update, which removes the effect of the offset. In practice, even under environments that do not satisfy G(τ) > 0, LIPO can still discover diverse solutions effectively, as shown in the experiments.
B DERIVATION OF THE LOWER BOUND OF THE MI OBJECTIVE
We provide the derivation of Eq. 7 here. Let p(zi|{oi, ai}) be the true posterior of zi and qϕ be the approximation of p parameterized by ϕ. The lower bound of I({oi, ai}; zi) can be derived as follows:

I({oi, ai}; zi) = H(zi) − H(zi|{oi, ai})
= H(zi) + Ezi,(oi,ai)[log p(zi|{oi, ai})]
= H(zi) + Ezi,(oi,ai)[log p(zi|{oi, ai}) − log qϕ(zi|{oi, ai}) + log qϕ(zi|{oi, ai})]
= H(zi) + Ezi,(oi,ai)[log qϕ(zi|{oi, ai})] + KL(p(zi|{oi, ai}) || qϕ(zi|{oi, ai}))

Since the KL term is non-negative, dropping it yields the bound:

I({oi, ai}; zi) ≥ H(zi) + Ezi,(oi,ai)[log qϕ(zi|{oi, ai})]
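For a discrete latent variable, this bound can be estimated directly from samples. A small numeric sketch (names are ours, not from the released code):

```python
import numpy as np

def mi_lower_bound(q_probs, z, prior_entropy):
    """Monte-Carlo estimate of H(z) + E[log q_phi(z|{o, a})], the
    variational lower bound on I({o, a}; z).

    `q_probs` is an (n, K) array of discriminator outputs for n sampled
    (o, a) pairs; `z` holds the latent indices drawn with those pairs.
    """
    return prior_entropy + np.log(q_probs[np.arange(len(z)), z]).mean()

# With a uniform prior over K discrete latents, H(z) = log K. An
# uninformative discriminator collapses the bound to zero, as expected.
K = 4
q = np.full((1000, K), 1.0 / K)
z = np.random.randint(0, K, size=1000)
print(mi_lower_bound(q, z, np.log(K)))  # ~0.0
```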
C ADDITIONAL ENVIRONMENT DETAILS
C.1 ONE-STEP COOPERATIVE MATRIX GAME
A game of CMG is defined by a tuple (M, {km}, {rm}), where M is the number of solutions. For m ∈ {1, ...,M}, km is the number of compatible actions and rm is the reward of a solution m. The game is stateless and terminates immediately after both players simultaneously choose an action. By choosing the same solution, both players get a reward rm associated with the chosen solution. This means that the solutions are not equally optimal if the values in {rm} are not identical. Similarly, if the values in {km} are not identical, then the solutions are not equally likely to be chosen by a uniform joint policy. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H).
For CMG-S, we set (M = 32, km = 8, rm = 0.5 · (1 + (m − 1)/(M − 1))), which causes each solution to have a different reward, ranging from 0.5 to 1. There are 32 solutions, each with 8 compatible actions. There are 32 × 8 = 256 possible actions for each player. For CMG-H, we use (M = 32, km = m, rm = 1), which makes solutions with a smaller number of compatible actions harder to be found by random exploration. There are 32 solutions, each with a different number of compatible actions, ranging from 1 to 32. The number of available actions for each player is ∑_{m=1}^{32} m = 528.
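As a sketch of these two setups, the block-diagonal payoff matrix of a CMG game can be constructed as follows (a hypothetical helper, not the released implementation):

```python
import numpy as np

def cmg_payoff(M, k, r):
    """Payoff matrix of a CMG game with M solutions: k[m] compatible
    actions and reward r[m] for solution m. Both players receive r[m]
    only when their actions fall inside the same solution block."""
    n = sum(k)
    payoff = np.zeros((n, n))
    start = 0
    for km, rm in zip(k, r):
        payoff[start:start + km, start:start + km] = rm
        start += km
    return payoff

M = 32
cmg_s = cmg_payoff(M, [8] * M, [0.5 * (1 + (m - 1) / (M - 1)) for m in range(1, M + 1)])
cmg_h = cmg_payoff(M, list(range(1, M + 1)), [1.0] * M)
print(cmg_s.shape, cmg_h.shape)  # (256, 256) (528, 528)
```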
C.2 POINT MASS RENDEZVOUS (PMR)
PMR is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The observation of each agent includes its absolute position, current velocity, and the relative distances to the landmarks and the other agent. These features are concatenated as a 1-D vector of length 14. The possible actions are: no op and move {up, down, left, right}. In PMR-C, the start positions of the agents are {(0.3, 0), (−0.3, 0)} and the landmark positions are {(1.59, 1.59), (1.59, −1.59), (−1.59, 1.59), (−1.59, −1.59)}. For PMR-L, the start and the landmark positions are {(1, 0), (0, 1)} and {(0, 2.25), (0, 0.75), (0, −0.75), (0, −2.25)}. An episode is terminated after 50 timesteps. The agents are incentivized to go to the same landmark and stay close together with the reward function
rt = 1 − d(pi, c) − min_{l∈L} d(l, c),
where d(·, ·) is the euclidean distance between two points, pi is the 2-d coordinate of agent i, c is the average coordinate of all agents, and L is the set of all landmarks.
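A minimal sketch of this reward, assuming vectorized agent and landmark positions (function and argument names are illustrative):

```python
import numpy as np

def pmr_reward(positions, landmarks):
    """Shared reward r_t = 1 - d(p_i, c) - min_l d(l, c) for each agent i.

    `positions` is an (n_agents, 2) array and `landmarks` an (M, 2) array;
    returns the per-agent rewards as a vector.
    """
    c = positions.mean(axis=0)                            # average coordinate
    nearest = np.linalg.norm(landmarks - c, axis=1).min() # min_l d(l, c)
    return 1.0 - np.linalg.norm(positions - c, axis=1) - nearest
```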
C.3 MULTI-RECIPE OVERCOOKED
We implement a multi-recipe of the game based on the work of Wu et al. (2021). In this version of Overcooked, there are four ingredients: lettuce, onion, tomato, and carrot. The ingredients are randomly placed at pre-defined positions in the layout. Particularly, the lettuce and the onion are randomly placed on the left or the middle counter. The tomato and the carrot are randomly placed on the right or the middle counter. These ingredients can be composed into different recipes making each ingredient unique: four recipes (LettuceSalad, TomatoSalad, ChoppedCarrot, ChoppedOnion) require only a single ingredient, while the other two (TomatoLettuceSalad, TomatoCarrotSalad) require two ingredients. The ingredients have to be chopped at the chopping station before placing on the plate. After the required ingredients are put on the plate, they must be delivered to the delivery station.
Both players have the same egocentric observation and action spaces. The observation is a set of hand-crafted features that represent a local view of the environment. Specifically, we use the following features: absolute position and facing direction, relative distance to the objects and the other agent, state of the ingredients, four booleans indicating if the agent is next to a counter in four cardinal positions, currently held items, the state of the held foods, and the type and state of the items in front of the agent. These features are concatenated as a 1-D vector of length 54. At every timestep, each player has to choose one of the six possible actions: no op, move {up, down, left, right}, and interact. An episode lasts at most 200 timesteps and terminates immediately after a successful delivery. An episode without delivery is considered unsuccessful. We incentivize the agents to interact with the
objects and deliver as fast as possible with the following reward function:
rt = rinteract + rprogress + rcomplete − p,
where rinteract is a shaped reward given when an agent interacts with an object for the first time in an episode, rprogress is given when the players progress toward a recipe completion (i.e., chopping required ingredients or putting chopped ingredients on the plate), rcomplete is given upon successful delivery, and p is a penalty. We use rinteract = 0.5, rprogress = 1.0, rcomplete = 10, and p = 0.1. We note that recipes with more than one ingredient give only slightly higher rewards (rinteract + rprogress) but are significantly harder to discover by random exploration than those with one ingredient.
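A sketch of this shaping with the stated constants (the boolean event flags are assumed to come from the environment):

```python
def overcooked_reward(interacted_first_time, progressed, delivered):
    """Shaped reward for multi-recipe Overcooked, using the values in the
    text; p = 0.1 is a constant per-step penalty."""
    r = -0.1                    # step penalty p
    if interacted_first_time:
        r += 0.5                # r_interact
    if progressed:
        r += 1.0                # r_progress (chop / plate an ingredient)
    if delivered:
        r += 10.0               # r_complete
    return r
```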
Additional experimental details: For specialist agents, the rewards are given when interacting, progressing, or completing a specific recipe. For the held-out populations, we remove incompetent policies with an expected return of less than zero from the test populations created by TrajeDi and LIPO. We do not remove those in the training populations. We do this because testing with an incompetent policy does not give any meaningful information, as almost all episodes will be unsuccessful.
D IMPLEMENTATION DETAILS
Algorithm 1: Training process of LIPO (on-policy). This pseudocode is based on self-play. Blue text is related to the MI objective. LIPO-specific code is highlighted in green.
Input: A population P = {πA | 1 ≤ A ≤ N}, the number of XP pairs used to approximate J̃XP(πA,P) (nxp), the number of players in an episode (m), and the numbers of SP and XP episodes per iteration (ESP and EXP).
while not done do
    for A ∈ {1, ..., N} do
        Bsp ← GetEpisodeRollouts(πA, πA, ESP, m)
        Compute JSP(πA) using Bsp
        Bxp ← GetCrossPlayRollouts(πA, P, nxp, EXP, m)
        Compute J̃XP(πA, P) using Bxp (Eq. 4)
        Compute LMI using Bsp and Bxp (Eq. 9)
        θA ← θA − ∇θA [−JSP + λXP J̃XP + λMI LMI]
        ϕA ← ϕA − λMI ∇ϕA LMI

Algorithm 2: Rollout collection functions
Function GetEpisodeRollouts(πA, πB, E, m):
    B ← {}
    for i ∈ {1, ..., m} do
        for episode ∈ {1, ..., E/m} do
            z1, ..., zm ∼ p(z1, ..., zm)
            zj ← {zk}k≠i
            πj(·|·, zj) = Πk≠i πk(·|·, zk)
            τ ∼ ρ(πiA(·|·, zi), πjB(·|·, zj))
            B ← B ∪ {τ}
    return B

Function GetCrossPlayRollouts(πA, P, nxp, EXP, m):
    Bxp ← {}
    P′−A ← SampleWithoutReplacement(P−A, nxp)
    for πB ∈ P′−A do
        B ← GetEpisodeRollouts(πA, πB, EXP/|P′−A|, m)
        Bxp ← Bxp ∪ B
    return Bxp
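As a complementary view of Algorithms 1 and 2, here is a minimal Python skeleton of the same loop, assuming hypothetical helpers for rollout collection, return/loss estimation, and the MAPPO update (none of these names come from the released code):

```python
import random

def train_lipo(population, n_xp, e_sp, e_xp, lam_xp, lam_mi,
               collect, j_sp, j_xp, l_mi, mappo_update, iterations):
    """Skeleton of Algorithm 1. `collect(pi_a, pi_b, episodes)` returns
    rollouts (latent sampling is assumed to happen inside), `j_sp`/`j_xp`
    estimate returns, `l_mi` the MI loss, and `mappo_update` applies one
    policy/critic update for the given scalar objective."""
    for _ in range(iterations):
        for pi_a in population:
            b_sp = collect(pi_a, pi_a, e_sp)                    # self-play
            others = [p for p in population if p is not pi_a]
            partners = random.sample(others, n_xp)              # P'_-A
            b_xp = [collect(pi_a, pi_b, e_xp // n_xp) for pi_b in partners]
            # Maximize J_SP while minimizing the aggregated J_XP and L_MI.
            loss = -j_sp(b_sp) + lam_xp * j_xp(b_xp) + lam_mi * l_mi(b_sp, b_xp)
            mappo_update(pi_a, loss)
```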
The pseudocode for LIPO is shown in Algorithms 1 and 2. If there are more than two players (m > 2), πj would represent the joint policy of all players except player i, πj(ajt|τjt) = Πk≠i πk(akt|τkt). We note that scaling LIPO to more than two players does not increase the training time. It is the same as the two-player setting as long as the numbers of SP and XP episodes are the same. In practice, however, more XP episodes might be needed to better estimate J̃XP. We use the parameter sharing technique for better sample efficiency and faster convergence (Tan, 1993; Foerster et al., 2018; Rashid et al., 2018). Assuming that a policy πiA is a neural network parameterized by θiA, this means that for a joint policy (π1A, π2A), we have θ1A = θ2A. Still, π1A and π2A can behave differently as they observe different parts of the environment and have a different player indicator concatenated with their local observations.
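A tiny sketch of the player-indicator trick mentioned above (illustrative only):

```python
import numpy as np

def role_conditioned_obs(obs, player_id, n_players=2):
    """With parameter sharing, the two policies of a joint policy use the
    same weights; they can still behave differently because each receives
    its own local observation plus a one-hot player indicator."""
    return np.concatenate([obs, np.eye(n_players)[player_id]])
```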
All methods are implemented on top of MAPPO except MAVEN. The critic, policy, and discriminator are feed-forward neural networks with two hidden layers, each having 64 units. For a fair comparison, we use the same or more environment steps in the policy update of the baselines compared to LIPO. Common hyperparameters of methods based on MAPPO are shown in Tab. 2.
D.1 MAPPO
MAPPO is the base MARL algorithm for all baselines except MAVEN. The policy parameters are shared among all policies. The critic takes the global state of the environment and outputs its expected return. The global state is provided by the environment and only used during training. For the complete training objectives of MAPPO, we refer the reader to Appendix A of Yu et al. (2021).
D.2 MULTI SP
A simple but effective way to produce diverse agents is to train multiple SP agents with different neural network initializations and random seeds. Specifically, each run produces a joint policy πA that maximizes JSP(πA) using MAPPO.
D.3 SPMI
A single run of an SP agent trained with the added MI objective I({oi, ai}; z). SPMI uses a shared z for both policies and considers each z as a different joint policy. We train SPMI using the same training procedure as LIPO by setting N = 1, λXP = 0 and z1 = z2. The discriminator takes a local
observation oi and action distribution πi(·|oi) as inputs and outputs the discrete probability of the latent variable. The latent variable of all policies is shared during an episode.
D.4 MAVEN
MAVEN (Mahajan et al., 2019) is explicitly designed for learning diverse solutions in cooperative multi-agent environments. A joint policy is represented as π(·|τ, z), and each mode of behavior is represented by the latent variable z. Similar to SPMI, MAVEN uses a shared z for all policies. We use the same network architecture presented in Mahajan et al. (2019) with recurrent neural networks. However, we do not use the hierarchical policy but sample z from the uniform distribution. The latent variable of all policies is shared.
D.5 MULTI SPMI AND MULTI MAVEN
A population containing joint policies from multiple runs of SPMI and MAVEN. Like Multi SP, each run has different neural network initializations and random seeds. The population size is |P| = nseed|z|, where nseed is the number of runs. Notably, this baseline uses the training data differently from the base algorithms. Instead of training a long single run, this approach allows the policy to "restart" with a different initialization of the neural networks. For example, training a single run with |z| = N, nseed = 1 might not discover as many solutions as training nseed runs with |z| = N/nseed, even though the population size is the same. Empirically, we find that multiple shorter runs can find more solutions compared to a single long run of the corresponding algorithm. Thus, we omit the results of the base algorithms in multi-recipe Overcooked.
D.6 TRAJEDI
TrajeDi produces a population of diverse agents that also maximize the expected return in cooperative environments. The diversity measure of this method is based on the Jensen-Shannon divergence (JSD) between the trajectory distributions of the policies. Different from the original implementation, we remove the best-response (BR) policy from the population. Since the BR policy might work well with only a subset of solutions, removing BR potentially increases the number of variations in the population. Our modified loss is:
L = −[∑_{A=1}^{N} JSP(πA) + α JSDγ(π1, ..., πN)],   (11)
where JSD is the proposed diversity objective of TrajeDi, and α and γ are the hyperparameters of TrajeDi.
D.7 LIPO
LIPO uses the same implementation as SPMI, except that LIPO uses an independent latent variable for each policy and λXP > 0. Additionally, LIPO uses extra critics for the XP trajectories. In total, LIPO has an SP critic V_sp^πA and N − 1 XP critics {V_xp^(πA,πB) | πB ∈ P−A}. In each training iteration, LIPO collects SP and XP trajectories of all policy combinations. MAPPO is used for both maximizing JSP and minimizing J̃XP. The critics are trained using the SP trajectories and all of the XP trajectories, while the policy is trained using the SP trajectories and the XP trajectories from the XP pair that has the highest joint return, max(Bxp).
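A one-line sketch of that selection step (helper names are hypothetical):

```python
def select_xp_batch(xp_batches, joint_return):
    """Return the XP rollout batch with the highest empirical joint
    return, max(B_xp); the policy update minimizes J_XP only through
    this pair, while the XP critics train on every pair."""
    return max(xp_batches, key=joint_return)
```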
D.8 GENERALIST AGENT
The policy and critic networks of a generalist agent use two 256-unit GRU layers (Cho et al., 2014) followed by a linear layer. The input also includes the reward and action of the previous timestep. We use MAPPO for training a generalist agent. We train both the policy and critic with a batch size of 320,000 timesteps using truncated backpropagation through time (BPTT). The samples are reused for 15 epochs. Each minibatch contains 1,600 sequences with a maximum length of 50 timesteps. We also anneal the learning rate of the generalist agent linearly from 0.005 to 0.003. Other hyperparameters are shared with the other methods (Tab. 2). A training
partner for a generalist agent is sampled uniformly from the training population at the beginning of an episode.
E RELATIONSHIP WITH RAHMAN ET AL. (2022)
Rahman et al. (2022) propose to optimize the self-play returns while maximizing the diversity term Div(C), where C is a N ×N cross-play payoff matrix. Specifically, they propose to learn diverse policies via the following objective:
max_C Tr(C) + Div(C)   (12)
They propose to use Div(C) = Det(κ(C)) where κ(C) is an N × N matrix with κi,j(C) being similarity between policy πi and πj . The radial basis function (RBF) kernel of the empirical returns is used to measure the similarity between two policies:
κi,j(C) = exp(−‖Ci,· − Cj,·‖² / σ²),   (13)
where Ci,· is the ith row of C. In other words, Ci,· is the vector containing the empirical returns of πi when matched with the other policies in the population. Intuitively, this objective diversifies the policies via the expected returns, similar to LIPO. Using the same notation of the cross-play matrix, we can write the objective of training a LIPO population as:
max_C Tr(C) − λXP ∑i max_{1≤j≤N, j≠i} Ci,j   (14)
This objective maximizes the diagonal of the cross-play matrix (JSP) while minimizing its off-diagonal entries (J̃XP). This is a special case of Eq. 12 where Div(C) is based on policy compatibility.
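For illustration, both diversity measures can be computed from a given cross-play payoff matrix C; a small numpy sketch under the notation above (ours, not from either codebase):

```python
import numpy as np

def rbf_diversity(C, sigma=1.0):
    """Div(C) = det(kappa(C)) with an RBF kernel over the rows of the
    cross-play payoff matrix C (Eq. 13)."""
    d2 = ((C[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # ||C_i. - C_j.||^2
    return float(np.linalg.det(np.exp(-d2 / sigma ** 2)))

def lipo_objective(C, lam_xp):
    """Tr(C) - lambda_XP * sum_i max_{j != i} C_{i,j} (Eq. 14)."""
    off = np.array(C, dtype=float)
    np.fill_diagonal(off, -np.inf)  # exclude self-play entries
    return float(np.trace(np.asarray(C)) - lam_xp * off.max(axis=1).sum())

# A population whose members are mutually incompatible scores higher
# than one where two joint policies share a solution:
distinct = np.array([[1.0, 0.0], [0.0, 1.0]])
shared = np.array([[1.0, 1.0], [1.0, 1.0]])
print(lipo_objective(distinct, 0.5))  # 2.0
print(lipo_objective(shared, 0.5))    # 1.0
```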
F DISCUSSIONS
Agents trained with LIPO are incentivized to act adversarially toward agents that behave differently from itself. This behavior might not be desirable for certain downstream tasks. For example, agents produced by LIPO might not be suitable for interacting with humans as they would refuse to conform with the user. However, as shown in Sec. 4.6, training a generalist agent with these agents would have the opposite effect: the generalist agent would try to comply with the current partner’s preference.
Since LIPO produces a population of near-optimal solutions, a generalist agent trained with a LIPO population might not coordinate well with significantly sub-optimal agents. In a prior work, Strouse et al. (2021) show that augmenting the training population with past checkpoints (FCP) helps the trained generalist agent to effectively coordinate with sub-optimal agents. Since LIPO and FCP are orthogonal, populations created by LIPO can also be augmented in the same way as FCP.
Previous works find incompatible policies to be undesirable since they are generally results of coordinated symmetry breaking (Bard et al., 2020; Hu et al., 2020; 2021); these policies perform poorly when interacting with unseen partners. However, we show that learning incompatible policies can be useful for generating behaviorally diverse agents in various scenarios. The produced agents can then be used as training partners for a generalist agent. Using LIPO in environments where many solutions are equivalent may produce such undesirable symmetry-breaking conventions. We believe that LIPO can be combined with other techniques, e.g., other-play (Hu et al., 2020) and equivariant coordinator (Muglich et al., 2022), to avoid learning arbitrary symmetry-breaking. We leave the study of the combination of LIPO and these techniques for future work. A concurrent work by Cui et al. (2023) proposes an extension of LIPO by combining insights from off-belief learning (Hu et al., 2021) to avoid ”sabotaging” behavior of LIPO agents.
G LIMITATIONS
LIPO requires an additional hyperparameter λXP. If λXP is too big, it is possible that the main RL objective, JSP, would be interfered with, resulting in an incompetent joint policy (Sec. 4.2). An
adaptive mechanism that selects a suitable value for λXP at different stages of training could help increase training stability.
Although LIPO can be fully parallelized, it requires more computation than the baselines to get an accurate approximation of J̃XP, which makes it harder to scale up to bigger population sizes. Instead of collecting all policy pairs, sampling a portion of policy pairs to approximate J̃XP could reduce computation cost and training time at a potential cost of diversity (Sec. 4.3). Instead of using a uniform sampling, a mechanism that selects the best pair to sample (e.g., bandit algorithm) might help mitigate the diversity loss from using lower nxp.
In this work, we only investigate the effectiveness of LIPO under a new set of cooperative environments with multiple discrete solutions. Investigating LIPO in other well-known environments where the solution space might be continuous, e.g., continuous control (Peng et al., 2021), or environments with only a single goal, e.g., SMAC (Samvelyan et al., 2019), might yield interesting results and pose different challenges. We leave this further investigation as our future work.
H HYPERPARAMETERS (CMG AND PMR)
We provide the searched values of each method in Tab. 3, 4, 5, 6, 7 and 8. The hyperparameters are searched individually for each population size. We use three random seeds for each set of hyperparameters. We do not use any validation method. Instead, we present the results using the best parameters in the main paper. For LIPO, we set λMI as 0.0 and 0.5 in CMG and PMR, respectively.
I HYPERPARAMETERS (OVERCOOKED)
For each method, we use the parameters that give the highest entropy to generate five populations for Sec. 4.5, and Sec. 4.6. The searched values of each method are:
• TrajeDi: We perform a grid search with following hyperparameters: α ∈ {5, 10} and γ ∈ {0, 0.5}. We use α = 5 and γ = 0.5 for the results in the paper.
• Multi SPMI: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• Multi MAVEN: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• LIPO: We perform a grid search with following hyperparameters: λXP ∈ {0.2, 0.3} and λMI ∈ {0.1, 0.5}. We use λXP = 0.3 and λMI = 0.5 for the results in the paper.
J ADDITIONAL RESULTS
J.1 ADDITIONAL ABLATION RESULTS
Here we provide additional results: the number of learned solutions with varying N and λXP (Fig. 10), the number of competent agents in CMG with varying N and λXP (Fig. 11), and the number of learned solutions with varying nxp in CMG (Fig. 12). These results are consistent with the analysis presented in Sec. 4.2 and 4.3.
J.1.1 ADDITIONAL RESULTS WITH VARYING N AND λXP
J.1.2 ADDITIONAL RESULTS WITH VARYING nxp
J.2 RADAR PLOTS OF RECIPE FREQUENCIES
Fig. 13 shows the recipe frequencies of the population with the highest recipe entropy (out of five runs) from each method. The frequencies are calculated based on the completed recipes from 1,000 self-play episodes of each agent, as described in Sec. 4.5. Qualitatively, we can see that agents in a LIPO
population have more distinct recipe frequencies. That is, agents are different from each other in terms of recipe preference.
J.3 VISUALIZATION OF BEHAVIORS
We visualize the behaviors joint policies produced by LIPO in PMR and Overcooked at https: //sites.google.com/view/iclr-lipo-2023. Here, we show snapshots of four joint policies that have a distinct recipe preference in Overcooked. | 1. What is the main contribution of the paper, and how does it differ from other methods in the field?
2. What are the strengths and weaknesses of the proposed algorithm, particularly regarding its ability to generate diverse policies?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the experimental settings, figures, or related work discussion? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper introduces the LIPO (Learning Incompatible Policies) algorithm for generating diverse policies via a mutual information objective. This is then demonstrated empirically in two simple environments (a cooperative matrix game and a 2D navigation task), as well as in the Overcooked domain.
Strengths And Weaknesses
Strengths
While conceptually simple, the proposed framework for generating diverse populations of agents is a powerful idea that could have applicability beyond cooperative domains, e.g., for designing agents that can compete effectively against agents or where optimal strategies are difficult to find.
The inclusion of the anonymized source code is appreciated, and all of the details in the appendices aid reproducibility.
The paper is written clearly and includes solid theoretical justification for the implementation of an additional MI objective.
Weaknesses
The related work section could be earlier in the paper, and possibly compared with LIPO in more detail. For example, what would the "domain knowledge" required by QD look like in the experimental settings described here?
Figure 8 could include more context; is the goal to generate something that looks like the uniform distribution? Or are some choices clearly better than others in this environment? Essentially, interpreting this requires additional background knowledge about Overcooked and about what an ideal solution would look like.
Clarity, Quality, Novelty And Reproducibility
Generally, the paper is very clear and, as mentioned above, the reproducibility of the demonstrated results is high. Since this framework does not specifically require domain knowledge, it can be broadly applicable to multiple unknown settings, where domain knowledge is difficult to obtain. It would be useful to compare LIPO further with MEP (Zhao et al. 2021) to discuss the qualitative/quantitative differences in policies that both obtain.
Line 1 of Appendix A: "proof" should be "prove" |
ICLR | Title
Generating Diverse Cooperative Agents by Learning Incompatible Policies
Abstract
Training a robust cooperative agent requires diverse partner agents. However, obtaining those agents is difficult. Previous works aim to learn diverse behaviors by changing the state-action distribution of agents. But, without information about the task’s goal, the diversified agents are not guided to find other important, albeit sub-optimal, solutions: the agents might learn only variations of the same solution. In this work, we propose to learn diverse behaviors via policy compatibility. Conceptually, policy compatibility measures whether policies of interest can coordinate effectively. We theoretically show that incompatible policies are not similar. Thus, policy compatibility—which has been used exclusively as a measure of robustness—can be used as a proxy for learning diverse behaviors. Then, we incorporate the proposed objective into a population-based training scheme to allow concurrent training of multiple agents. Additionally, we use state-action information to induce local variations of each policy. Empirically, the proposed method consistently discovers more solutions than baseline methods across various multi-goal cooperative environments. Finally, in multi-recipe Overcooked, we show that our method produces populations of behaviorally diverse agents, which enables generalist agents trained with such a population to be more robust.
1 INTRODUCTION
Cooperating with unseen agents (e.g., humans) in multi-agent systems is a challenging problem. Current state-of-the-art cooperative multi-agent reinforcement learning (MARL) techniques can produce highly competent agents in cooperative environments (Kuba et al., 2021; Yu et al., 2021). However, those agents are often overfitted to their training partners and cannot coordinate with unseen agents effectively (Carroll et al., 2019; Bard et al., 2020; Hu et al., 2020; Mahajan et al., 2022).
The problem of working with unseen partners, i.e., the ad-hoc teamwork problem (Stone et al., 2010), has been tackled in many different ways (Albrecht & Stone, 2018; Carroll et al., 2019; Shih et al., 2020; Gu et al., 2021; Rahman et al., 2021; Zintgraf et al., 2021; He et al., 2022; Mirsky et al., 2022; Parekh et al., 2022). These methods allow an agent to learn how to coordinate with unseen agents and, sometimes, humans. However, the success of these methods depends on the quality of the training partners; it has been shown that the diversity of training partners is crucial to the generalization of the agent (Charakorn et al., 2021; Knott et al., 2021; Strouse et al., 2021; McKee et al., 2022; Muglich et al., 2022). In spite of its importance, obtaining a diverse set of partners is still an open problem.
The simplest way to generate training partners is to use hand-crafted policies (Ghosh et al., 2020; Xie et al., 2021; Wang et al., 2022), domain-specific reward shaping (Leibo et al., 2021; Tang et al., 2021; Yu et al., 2023), or multiple runs of the self-play training process (Grover et al., 2018; Strouse et al., 2021). These methods, however, are neither scalable nor guaranteed to produce diverse behaviors. Prior works propose techniques aiming to generate diverse agents by changing the state visitation and action distributions (Lucas & Allen, 2022), or the joint trajectory distribution of the agents (Mahajan et al., 2019; Lupu et al., 2021). However, as discussed by Lupu et al. (2021), there is a potential drawback of using such information from trajectories to diversify the behaviors. Specifically, agents that make locally different decisions do not necessarily exhibit different high-level behaviors.
To avoid this potential pitfall, we propose an alternative approach for learning diverse behaviors using information about the task’s objective via the expected return. In contrast to previous works that
use joint trajectory distribution to represent behavior, we use policy compatibility instead. Because cooperative environments commonly require all agents to coordinate on the same solution, if the agents have learned different solutions, they cannot coordinate effectively and, thus, are incompatible. Consequently, if an agent discovers a solution that is incompatible with all other agents in a population, then the solution must be unique relative to the population. Based on this reasoning, we introduce a simple but effective training objective that regularizes agents in a population to find solutions that are compatible with their partner agents but incompatible with others in the population. We call this method “Learning Incompatible Policies” (LIPO).
We theoretically show that optimizing the proposed objective will yield a distinct policy. Then, we extend the objective to a population-based training scheme that allows concurrent training of multiple policies. Additionally, we utilize a mutual information (MI) objective to diversify local behaviors of each policy. Empirically, without using any domain knowledge, LIPO can discover more solutions than previous methods under various multi-goal settings. To further study the effectiveness of LIPO in a complex environment, we present a multi-recipe variant of Overcooked and show that LIPO produces behaviorally diverse agents that prefer to complete different cooking recipes. Experimental results across three environments suggest that LIPO is robust to the state and action spaces, the reward structure, and the number of possible solutions. Finally, we find that training generalist agents with a diverse population produced by LIPO yields more robust agents than training with a less diverse baseline population. See our project page at https://bit.ly/marl-lipo
2 PRELIMINARIES
Our main focus lies in fully cooperative environments modeled as decentralized partially observable Markov decision processes (Dec-POMDP, Bernstein et al. (2002)). In this work, we start our investigation in the two-player variant. A two-player Dec-POMDP is defined by a tuple (S,A1,A2,Ω1,Ω2, T,O, r, γ,H), where S is the state space, A ≡ A1 ×A2 and Ω ≡ Ω1 × Ω2 are the joint-action and joint-observation spaces of player 1 and player 2. The transition probability from state s to s′ after taking a joint action (a1, a2) is given by T (s′|s, a1, a2). O(o1, o2|s) is the conditional probability of observing a joint observation (o1, o2) under state s. All players share a common reward function r(s, a1, a2), γ is the reward discount factor and H is the horizon length.
Players, with potentially different observation and action spaces, are controlled by policies π1 and π2. At each timestep t, the players observe ot = (o1t, o2t) ∼ O(o1t, o2t | st) under state st ∈ S and produce a joint action at = (a1t, a2t) ∈ A sampled from the joint policy π(at|τt) = π1(a1t|τ1t) π2(a2t|τ2t), where τ1t and τ2t contain a trajectory history until timestep t from the perspective of each agent. All players receive a shared reward rt = r(st, a1t, a2t). The return of a joint trajectory τ = (o0, a0, r0, ..., rH−1, oH) ∈ T ≡ (Ω × A × R)H can be written as G(τ) = ∑_{t=0}^{H} γ^t rt. The expected return of a joint policy (π1, π2) is J(π1, π2) = Eτ∼ρ(π1,π2) G(τ), where ρ(π1, π2) is the distribution over trajectories of the joint policy (π1, π2) and P(τ|π1, π2) is the probability of τ being sampled from the joint policy (π1, π2).
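As a concrete instance of the return defined above, a two-line sketch:

```python
def discounted_return(rewards, gamma):
    """G(tau) = sum_{t=0}^{H} gamma^t * r_t for one joint trajectory."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

print(discounted_return([1.0, 1.0, 1.0], gamma=0.9))  # 1 + 0.9 + 0.81 = 2.71
```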
We use subscripts to denote different joint policies and superscripts to refer to different player roles. For example, πA = (π1A, π2A) is a different joint policy from πB = (π1B, π2B), and πiA and πjA are policies of different roles. 1 Finally, we denote the expected joint return of self-play (SP) trajectories—where both policies are part of the same joint policy, πA—as JSP(πA) := J(π1A, π2A) and the expected joint return of cross-play (XP) trajectories—where policies are chosen from different joint policies, πA and πB—as JXP(πA, πB) := J(π1A, π2B) + J(π1B, π2A).
Since we are interested in creating distinct policies for any Dec-POMDP, we need an environment-agnostic measure that captures the similarity of policies. First, we consider a measure that can compute the similarity between policies of the same role i, e.g., πiA and πiB. We can measure this with the probability of a joint trajectory τ produced by either πiA or πiB. However, in the two-player setting, we need to pair these policies with a reference policy πjref. Specifically, πiA and πiB are considered similar if they are likely to produce the same trajectories when paired with an arbitrary reference policy πjref. We define similar policies as follows:
1Note that LIPO can be applied to environments with more than two players with a slight modification. Specifically, a policy πj would represent the joint policy of all players except player i, πj(ajt|τjt) = Πk≠i πk(akt|τkt).
Definition 2.1 (Similar policies). Considering two policies of the same role i, πiA and πiB, and a reference policy πjref of a different role j, πiA is similar to πiB up to ϵ if and only if max_{τ∈T} |1 − P(τ|πiA, πjref) / P(τ|πiB, πjref)| ≤ ϵ, where 0 ≤ ϵ ≤ 1.
Next, we consider an alternate view on assessing the similarity between policies using policy compatibility (Section 3). Policy compatibility measures the performance difference of a joint policy πB before and after one of its policies πiB is substituted by another policy πiA. We define compatibility between a policy πiA and a joint policy πB as follows:
Definition 2.2 (Compatible policies). Given a policy πiA and a joint policy πB, πiA is compatible with πB if and only if J(πiA, πjB) ≥ (1 − ϵ)JSP(πB).
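Definition 2.2 translates directly into a predicate over empirical return estimates; a minimal sketch (names are ours):

```python
def is_compatible(j_cross, j_sp_b, eps):
    """Def. 2.2: pi_A^i is compatible with pi_B iff the cross-play return
    J(pi_A^i, pi_B^j) keeps at least a (1 - eps) fraction of pi_B's
    self-play return J_SP(pi_B)."""
    return j_cross >= (1.0 - eps) * j_sp_b

print(is_compatible(j_cross=0.95, j_sp_b=1.0, eps=0.1))  # True
print(is_compatible(j_cross=0.50, j_sp_b=1.0, eps=0.1))  # False
```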
3 LEARNING INCOMPATIBLE POLICIES (LIPO)
Our goal is to create distinct policies and, therefore, a population of diverse agents. First, we theoretically show that policy compatibility can be used to identify whether two policies are different. Based on this observation, we propose a novel training objective that produces a distinct policy. Then, we extend this objective for training a population of diverse policies. Finally, we incorporate an MI objective that encourages each policy to learn local variations.
3.1 LEARNING A DISTINCT POLICY VIA POLICY COMPATIBILITY
In this section, we motivate our objective by looking at two joint policies: πA = (π1A, π2A) and πB = (π1B, π2B). The goal is for πA to learn a different behavior from πB via the compatibility criterion. Importantly, the compatibility criterion can be computed empirically without direct access to the trajectory distribution, which can be difficult to estimate. Under mild assumptions, we can simplify the setting such that a simple relationship between the similarity measure and the compatibility criterion emerges. By reasoning about the expected return under different pairs of policies, we derive our main result.
Theorem 3.1. If πiA is similar to πiB, then πiA is compatible with πB. (The proof is in App. A.)
Corollary 3.2. If πiA is not compatible with πB, then πiA is not similar to πiB.
The result from Corollary 3.2 shows that we can find a policy πiA that is not similar to πiB by decreasing its compatibility with πB until they are incompatible, i.e., J(πiA, πjB) < (1 − ϵ)JSP(πB). Additionally, we can ensure that πA learns a meaningful solution by maximizing JSP(πA). Assuming that πB has learned a solution and is fixed, the optimization objective of πA can be written as

max_{πA} JSP(πA)  subject to  J(πiA, πjB) < (1 − ϵ)JSP(πB)  ∀i, j ∈ {1, 2}, i ≠ j   (1)
A way to solve such a constrained problem is to convert the constraints into regularization terms. For simplicity, we use a common λXP > 0 as a hyperparameter for the constraints. Then, we can write the soft objective of Eq. 1 as
max_{πA} JSP(πA) − λXP JXP(πA, πB)   (2)
3.2 LEARNING A POPULATION OF DIVERSE POLICIES
To create a population of N diverse policies, P = {πA|1 ≤ A ≤ N}, we need an objective that requires each member of the population to have a different behavior relative to the rest of the population. We can write such an objective by expanding the XP term in Eq. (2) to include all other policies in the population. Additionally, we relax the assumption that other policies are fixed to
allow concurrent training of all policies. For a policy πA ∈ P , with an aggregation function fagg, its objective becomes
max_{πA} JLIPO(πA, P) = JSP(πA) − λXP J̃XP(πA, P),   (3)

where J̃XP(πA, P) = fagg(BxpA),   (4)
BxpA = {JXP(πA, πB) | πB ∈ P−A},   (5)
P−A = P \ {πA}   (6)
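A direct transcription of Eqs. 3-5 over scalar return estimates (a sketch; the aggregation defaults to max, as used in the paper):

```python
def j_lipo(j_sp, xp_returns, lam_xp, f_agg=max):
    """Compatibility gap (Eq. 3): J_SP(pi_A) - lambda_XP * f_agg(B_xp),
    where `xp_returns` is B_xp, the cross-play returns of pi_A against
    every other joint policy in the population (Eq. 5)."""
    return j_sp - lam_xp * f_agg(xp_returns)

# A policy that is still compatible with one population member
# (XP return 0.9) keeps only a small compatibility gap:
print(j_lipo(1.0, [0.9, 0.1, 0.0], lam_xp=0.5))  # 1.0 - 0.5 * 0.9 = 0.55
```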
While using the average operation as the aggregation function is plausible, we find that using the max operation helps stabilize the training process and produces more diverse policies. We suspect that the average operation might produce many conflicting gradients and does not prioritize compatible XP pairs. We refer to JLIPO as the compatibility gap between a policy πA and a population P .
We can see that the compatibility gap objective only uses the expected return (JSP and J̃XP) and is insensitive to the state and action information. We argue that this distinction between LIPO and previous methods helps the agents discover more solutions in various situations (Sec. 4.1 and 4.5).
3.3 INDUCING VARIATIONS IN EACH POLICY
It is important to note that, regardless of the population size, there could be policies of role i that are compatible with πA ∈ P but not similar to πiA. We consider those policies to be variations of πiA and propose to capture such variations via an MI objective. Specifically, we condition πiA on a latent variable zi such that πA has the form πA(a|τ) = E(z1,z2) π1A(a1|τ1, z1) π2A(a2|τ2, z2), where p(z1, z2) is a pre-defined prior distribution. We can induce variations of πiA by maximizing I({oi, ai}; zi), where I(·; ·) is the MI between two random variables. Intuitively, this objective encourages each policy to observe different observations and perform different actions given different values of the latent variable. However, maximizing I({oi, ai}; zi) directly is intractable; instead, we optimize the variational lower bound of the MI (Jordan et al., 1999) (see App. B for the derivation):

I({oi, ai}; zi) ≥ H(zi) + Ezi,(oi,ai)[log qϕA(zi|oi, ai)],   (7)

where qϕA(zi|oi, ai) is an approximation of the true posterior p(zi|oi, ai) parameterized by ϕA. So, maximizing I({o1, a1}; z1) and I({o2, a2}; z2) is an optimization problem that can be written as
max_{πA,ϕA} (1/2) ∑_{i=1}^{2} [ H(zi) + Ezi,(oi,ai) log qϕA(zi|oi, ai) ]   (8)
In previous work (Mahajan et al., 2019), a shared z (i.e., z1 = z2) is used, allowing both policies to collectively switch between different modes of behavior. However, LIPO uses independently sampled z as it utilizes z for a different purpose. Specifically, LIPO maximizes JLIPO to learn diverse solutions and optimizes the MI objective to learn variations of each solution. That is, the MI objective does not directly impact the diversity between different policies but increases the variations of each individual policy. We note that the MI objective is optional; we show that without the MI objective, LIPO still produces diverse policies (Sec. 4.4).
3.4 IMPLEMENTATION
In practice, we modify the MI objective (Eq. 8) to be differentiable with respect to the policy πiA. Specifically, the variational posterior qϕA is modified such that, instead of a sampled action ai, it takes the action distribution πiA(·|oi, zi) as an input, i.e., qϕA(z|o, πiA(·|oi, zi)). In contrast to previous MI-based approaches (Eysenbach et al., 2018; Sharma et al., 2019; Jiang & Lu, 2021; Lucas & Allen, 2022), we can optimize I({oi, ai}; zi) directly without computing an auxiliary reward (Mahajan et al., 2019; Osa et al., 2022). The loss function of the modified MI objective is

LMI(πA, ϕA) = −(1/2) ∑_{i=1}^{2} Ezi,(oi,ai) log qϕA(zi|oi, πiA(·|oi, zi))   (9)
The objective of a policy πA in a population P becomes

max_{πA,ϕA} JLIPO(πA, P) − λMI LMI(πA, ϕA)   (10)
We set z as a discrete variable and use the uniform distribution for p(z1) and p(z2). At the beginning of each episode, each policy is given an independently sampled z that will be used until the end of the episode. We use MAPPO (Yu et al., 2021) for maximizing JSP and minimizing J̃XP. More details, including the pseudocode and the extension to more than two players, can be found in App. D.
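To make the differentiable posterior concrete, here is an illustrative PyTorch sketch of a discriminator that consumes the action distribution rather than a sampled action; the two 64-unit hidden layers follow App. D, but the class and function names are ours:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentDiscriminator(nn.Module):
    """q_phi(z | o, pi(.|o, z)): conditioning on the action distribution
    keeps L_MI (Eq. 9) differentiable with respect to the policy."""
    def __init__(self, obs_dim, act_dim, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, z_dim),
        )

    def forward(self, obs, act_probs):
        return self.net(torch.cat([obs, act_probs], dim=-1))  # logits over z

def mi_loss(disc, obs, act_probs, z):
    """L_MI = -E[log q_phi(z | o, pi(.|o, z))] for discrete latent indices z."""
    return F.cross_entropy(disc(obs, act_probs), z)
```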
4 EXPERIMENTS
We study the effectiveness of LIPO under three multi-goal cooperative environments in which both players must collectively choose to accomplish one of the available goals. We evaluate the diversity of a population based on the number of distinct goals achieved. We compare LIPO to other cooperative MARL methods that do not require domain knowledge to generate diverse agents. Our baselines are as follows: (i) Multi SP (multiple runs of self-play), (ii) SPMI (a single run of SP with an added MI objective), (iii) MAVEN (Mahajan et al., 2019), and (iv) TrajeDi (Lupu et al., 2021). We also use Multi SPMI and Multi MAVEN as baselines by training SPMI and MAVEN multiple times. We also discuss methods that utilize domain knowledge in Sec. 5.
4.1 DISCOVERING DIVERSE SOLUTIONS
We use two simple environments to study the effectiveness of various methods in discovering solutions: (i) One-Step Cooperative Matrix Game (CMG), in which there are many possible solutions, and (ii) Point Mass Rendezvous (PMR), a temporally extended cooperative navigation environment.
One-Step Cooperative Matrix Game (CMG): A game of CMG is defined by a tuple (M, {km}, {rm}), where M is the number of solutions. For m ∈ {1, ...,M}, km is the number of compatible actions and rm is the reward of a solution m. By choosing the same solution, both players get a reward rm associated with the chosen solution. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H). For CMG-S, we set (M = 32, km = 8, rm = 0.5 · (1 + (m − 1)/(M − 1))), which causes each solution to have a different reward, ranging from 0.5 to 1. For CMG-H, we use (M = 32, km = m, rm = 1), which makes solutions with a smaller number of compatible actions harder to be found by random exploration. An example payoff matrix is shown in Fig. 2a.
[Figure 2, panel (a): payoff matrix of the example CMG game, reconstructed:
3 3 3 0 0 0
3 3 3 0 0 0
3 3 3 0 0 0
0 0 0 2 2 0
0 0 0 2 2 0
0 0 0 0 0 1
Panels (b) and (c) show the PMR-C and PMR-L layouts.]
Figure 2: (a) The payoff matrix of a CMG game with (M = 3, km = m, rm = m). (b, c) The agents (orange) and landmark positions (blue) of PMR-C and PMR-L.
Point Mass Rendezvous (PMR): The environment is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The goal of this environment is for the two agents to navigate to a landmark together. There are M = 4 landmarks, and we consider each landmark as a solution in this environment. This environment has two modes: PMR-C and PMR-L. In PMR-C, landmarks are distributed evenly on the circumference of a circle. Thus, all landmarks are equally easy to find and optimal. In PMR-L, landmarks are placed on a line. In this scenario, closer landmarks are easier to find. We define the population size |P| of each method as follows: For SPMI and MAVEN, |P| is equal to the number of dimensions of the latent variable, |z|. For Multi SPMI and Multi MAVEN, |P| = |z| · nseed, where nseed is the number of random seeds and we use |z| = 8. For Multi SP, TrajeDi, and LIPO, |P| is the number of joint policies in the population.
Results: Fig. 3 shows the numbers of learned solutions, averaged over three runs. In all environments, LIPO consistently discovers more solutions than the baselines, given the same population size. The baselines find fewer solutions in CMG-H and PMR-L than they do in CMG-S and PMR-C, whereas LIPO performs similarly across settings. LIPO is also better than the baselines at finding sub-optimal solutions in CMG-S. We note that Multi SP and TrajeDi perform almost ideally in PMR-C, where all solutions are equivalent, but perform worse in other settings. Also, Multi SPMI finds all four solutions in PMR when the population size is bigger than 8. However, it performs poorly in CMG. LIPO's consistency across environments and settings demonstrates that LIPO is still effective when (i) many solutions exist, (ii) solutions are not equally optimal, and (iii) solutions are not equally likely to be found by random exploration. We have also experimented with stronger regularization coefficients for the baselines, which help the baselines discover more solutions. However, if the regularization coefficient is too large, they fail to produce capable policies.
[Figure 3: number of discovered solutions (y-axis) vs. population size (x-axis) in four panels, (a) CMG-S, (b) CMG-H, (c) PMR-C, and (d) PMR-L, comparing Multi SP, SPMI, MAVEN, TrajeDi, Multi SPMI, Multi MAVEN, LIPO, and the ideal line.]
Figure 3: Numbers of discovered solutions. Ideally, if the population size increases by one, one more solution should be discovered, as depicted by the dashed lines (assuming that a joint policy does not produce a multi-modal behavior).
[Figure 4: number of competent agents (y-axis) vs. population size N ∈ {1, 2, 4, 8} (x-axis) for λXP ∈ {1, 0.5, 0.25, 0.1} and the ideal line, in two panels: (a) PMR-C and (b) PMR-L.]
Figure 4: Numbers of competent joint policies using various combinations of N (x-axis) and λXP (colors) in PMR.
4.2 TRADE-OFF BETWEEN COMPETENCY AND DISSIMILARITY OF JOINT POLICIES
It is possible that optimizing a regularized objective might incur training instability and create incapable policies. Here, we investigate the effect of different combinations of λXP and the population size (N ) on the competency of the policies.
Fig. 4 shows the number of competent joint policies when using different values of N and λXP in PMR. Particularly, in PMR, a joint policy is considered competent when both players stay close to a landmark at the end of an episode. We observe that when the population size is larger than the number of solutions (N > M ), some surplus policies do not learn to reach a goal. Importantly, the number of competent joint policies depends on the value of λXP: lower values of λXP yield more capable policies. However, using too low λXP will generate policies that share a common solution when N ≤ M as shown in App. J.1.1. Additionally, when N ≤ M , all trained agents are competent except when λXP is too high in PMR-L. These results suggest that there is a trade-off between the number of capable joint policies and policy dissimilarity. When using a larger population size, a small λXP should be used to avoid producing incompetent agents, while a bigger λXP should be used with smaller population sizes to ensure the dissimilarity between joint policies.
4.3 TRADE-OFF BETWEEN COMPUTATION COST AND DIVERSITY
Not only is using bigger values of N more likely to produce incompetent policies, but it is also computationally expensive. Formally, the computational complexity of approximating J̃XP(·,P) is O(N · nxp), where nxp is the number of XP pairs used to approximate J̃XP(πA,P). So, we investigate a way to reduce the cost of calculating J̃XP(πA,P) by reducing nxp. According to Eq. 5, the default value is nxp = N − 1. When nxp < N − 1, nxp policies are chosen randomly from P−A by sampling without replacement.
We observe that, while being computationally cheaper, using nxp < N − 1 tends to produce less diverse populations as shown in Fig. 5. Thus, nxp can be considered a hyperparameter
that controls the computation-diversity trade-off. However, as shown by the dashed lines, the effect of nxp on population diversity is less prominent in PMR-C, where solutions are equally likely to be found. We use nxp = N − 1 in all other experiments. See App. J.1.2 for results in CMG.
4.4 EFFECT OF THE MUTUAL INFORMATION OBJECTIVE
Fig. 6 shows the behaviors of the policies produced by LIPO with and without the MI objective in PMR-C. We can see the effect of the MI objective in the variety of the trajectories. Overall, each agent exhibits larger variations given a small MI regularization λMI = 0.5. This result aligns with our motivation of using the MI objective to learn variations of each solution. With or without the MI regularization, LIPO discovers all the landmarks with N = 4.
4.5 DISCOVERING RECIPES IN MULTI-RECIPE OVERCOOKED
Overcooked, a collaborative cooking game, has been used to study the cooperative ability of learned agents in prior works (Carroll et al., 2019; Charakorn et al., 2020; Strouse et al., 2021; McKee et al., 2022). To investigate the usefulness of LIPO in a high-dimensional environment, we implement a more complex version of the game based on the work of Wu et al. (2021); players have to complete and serve one of the six pre-defined recipes as fast as possible, as opposed to delivering a single menu item repeatedly.
We emphasize that this environment is much more challenging than the ones in the previous experiments because of various aspects: First, it has a sparse reward signal. Second, there are multiple sub-tasks. Third, different recipes have different sub-tasks. Each of these characteristics of the environment complicates the process of finding diverse solutions. Furthermore, we note that recipes containing a carrot or a tomato are harder to complete than other recipes as they involve an additional coordination step. Particularly, carrot and tomato have to be sent over by the agent on the right, unlike lettuce and onion. Fig. 7 shows an overview of the game.
The goal in this experiment is to learn a population of behaviorally diverse agents. We choose to quantify the diversity of a population based on the entropy of its recipe distribution. For a population P, we approximate the probability of recipe i being completed as

$$P(\text{recipe}_i \mid \mathcal{P}) \approx \frac{\sum_{\pi_A} m_i(\pi_A)}{\sum_{i} \sum_{\pi_A} m_i(\pi_A)},$$
where mi(πA) denotes the frequency of recipe i under a joint policy πA. The recipe frequencies, {mi(πA)|1 ≤ i ≤ 6}, for each joint policy πA ∈ P are measured by counting the completed recipes from 1,000 self-play episodes. For Multi SP, TrajeDi and LIPO, we set N = 8. For Multi SPMI and Multi MAVEN, we use nseed = 8 and |z| = 8.
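To make the entropy-based diversity measure concrete, the following minimal Python sketch (variable names are illustrative; the counts would come from the 1,000 self-play episodes described above) computes the recipe distribution of a population and its entropy:

import numpy as np

def recipe_entropy(counts):
    """counts: array of shape (num_policies, num_recipes);
    counts[a, i] = frequency m_i(pi_A) of recipe i under joint policy pi_A."""
    probs = counts.sum(axis=0) / counts.sum()     # P(recipe_i | population)
    probs = probs[probs > 0]                      # avoid log(0) for unused recipes
    return -(probs * np.log(probs)).sum()         # Shannon entropy

# Example: a population of 2 joint policies and 6 recipes.
counts = np.array([[90, 5, 0, 0, 5, 0],
                   [0, 0, 80, 10, 0, 10]])
print(recipe_entropy(counts))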
Results: Quantitatively, Tab. 1 shows that LIPO has the highest population recipe distribution entropy, averaged over five random seeds. This result indicates that LIPO populations use all recipes more uniformly than the baseline populations, even though some recipes take longer to complete or are harder to find by random exploration. The recipe distribution of the populations produced by each method can be found in Fig. 8. We find that, similar to λMI = 0.5, LIPO populations with λMI = 0 still consistently learn to use the hard-to-find Tomato & Carrot Salad recipe. However, the frequencies of Chopped Tomato and Chopped Carrot are lowered (Fig. 8f). This means that there are multiple joint policies that learn to complete the same recipe while being incompatible with each other. We suspect that using λMI > 0 alleviates this problem by regularizing each joint policy to represent a policy with broader state-action coverage (e.g., learning multiple ways of completing a recipe), indirectly pushing other joint policies to use different recipes in order to be incompatible. Qualitatively, we can see in App. J.2 that the baselines produce agents with similar recipe frequencies. The resulting populations, thus, contain agents with a similar recipe preference. In contrast, LIPO produces agents with distinct recipe frequencies, collectively making the population more diverse than the baselines. We also visualize the behaviors learned by LIPO in App. J.3.
4.6 TRAINING GENERALIST AGENTS WITH GENERATED POPULATIONS
In addition to evaluating the diversity of agents in Overcooked, we quantify the usefulness of the produced agents by using them as training partners of a generalist agent and test the agent with held-out populations. Intuitively, more diverse training partners would enable the agent to generalize and coordinate with unseen agents better. Additionally, we include a population of six specialized SP policies where each policy is trained to complete a specific recipe by adjusting the reward function. The specialist population is created for evaluating the agent when the partner has a strong preference.
Fig. 9a shows that all generalist agents perform similarly when tested with held-out baseline populations. However, the agents trained with a baseline population perform poorly when matched with held-out LIPO and specialist populations. In contrast, those trained with a LIPO population perform better in both situations. Specifically, they have a significantly higher success rate when paired with the specialist with a strong preference for Tomato & Carrot Salad as shown in Fig. 9b. We attribute the success rate difference to the fact that this recipe has a lower completion probability in all except LIPO populations. As a result, generalist agents trained with a LIPO population perform better in terms of the overall success rate when tested with specialist agents. Overall, training with a LIPO population helps the generalist agents to better coordinate with more partner types as indicated by the harmonic means.
5 RELATED WORK
Learning a collection of diverse agents has been utilized in various contexts (Parker-Holder et al., 2020; Sun et al., 2020; Zahavy et al., 2021; Zhou et al., 2021). In the cooperative domain, Canaan et al. (2019; 2020) use the Quality Diversity (QD) algorithm (Mouret & Clune, 2015; Pugh et al., 2016) to produce a population of behaviorally diverse agents. QD, however, requires domain knowledge to encode different types of behaviors. For example, in CMG, the algorithm requires the mapping between actions and corresponding solutions. In Overcooked, it needs to know all possible recipes beforehand. Without such domain knowledge, it would be difficult to use QD to produce a diverse population. TrajeDi (Lupu et al., 2021) produces a diverse population of agents based on the trajectory distribution. Finally, MEP (Zhao et al., 2021) trains a population of agents with an auxiliary reward based on population entropy. Like TrajeDi and MEP, our method does not require domain-specific knowledge. However, to promote behavioral diversity, LIPO utilizes the expected returns of different policy pairs as opposed to state-action information.
The idea of diversifying the empirical return has been explored in the context of finding diverse solutions in non-transitive competitive games (Liu et al., 2021; Balduzzi et al., 2019; Perez-Nieves et al., 2021). In particular, Liu et al. (2021) share some similar ideas with our work. They propose to use the expected returns, when encountering different opponents, and state-action information to promote diversity of agents. A concurrent work by Rahman et al. (2022) applies a similar idea of diversifying the expected joint return to generate diverse partners in cooperative settings. LIPO can be thought of as a special case designed specifically for cooperative environments (see App. E).
MI objectives have been used in RL to learn diverse behaviors (Eysenbach et al., 2018; Sharma et al., 2019; Kumar et al., 2020; Osa et al., 2022). In cooperative MARL, MAVEN (Mahajan et al., 2019) optimizes both RL and MI objectives to encourage the agents to explore in a committed manner and discover diverse solutions. Also, Any-play (Lucas & Allen, 2022) uses a similar objective to produce training partners with many solutions for a generalist agent. In contrast, our approach uses the MI objective to regularize each policy to learn local variations of each solution.
6 CONCLUSION
We propose LIPO, a simple and generic method that can create a population of diverse agents in cooperative multi-agent environments. Unlike previous work that uses state-action information from joint trajectories, LIPO utilizes the concept of policy compatibility to create diverse policies. This alternative view of quantifying diversity makes LIPO more robust to the state and action spaces. Also, LIPO uses the MI objective to learn local variations of each solution. Empirically, LIPO consistently produces more diverse populations than the baselines across three multi-goal environments. Finally, in multi-recipe Overcooked, LIPO produces populations of diverse partners that help generalist agents generalize better to unseen agents. We include further discussions and limitations of LIPO in App. F and G.
ACKNOWLEDGEMENT
This work is partially supported by King Mongkut's Institute of Technology Ladkrabang [2566-02-06-002]. We thank Natchaya Sricom for drawing Fig. 1 and 7. We thank Supasorn Suwajanakorn, Sucha Supittayapornpong and Maytus Piriyajitakonkij for their suggestions on early draft versions. We also thank the anonymous reviewers for their constructive feedback.
REPRODUCIBILITY STATEMENT
We have included additional information to reproduce the experimental results in the supplementary text:
• Environment details (App. C)
• Pseudocode and implementation details (App. D)
• Hyperparameters used in all experiments (App. H and I)
The source code is available at https://github.com/51616/marl-lipo.
A PROOF FOR THEOREM 3.1
We prove the relationship of similar policies and compatible policies in Theorem 3.1 under the following assumptions.
Assumption A.1 (All joint trajectories are supported by $\pi_B$). $P(\tau \mid \pi_B^i, \pi_B^j) > 0, \; \forall \tau \in \mathcal{T}$

Assumption A.2 (Shared $\epsilon$). A common $0 \le \epsilon \le 1$ is used for Def. 2.1 and 2.2.

Assumption A.3 (Positive return). $G(\tau) > 0, \; \forall \tau \in \mathcal{T}$

Theorem. If $\pi_A^i$ is similar to $\pi_B^i$, then $\pi_A^i$ is compatible with $\pi_B$.
Proof. Let $r(\tau) = \frac{P(\tau \mid \pi_A^i, \pi_B^j)}{P(\tau \mid \pi_B^i, \pi_B^j)}$. Because $\pi_A^i$ is similar to $\pi_B^i$, we know that $1 - \epsilon \le r(\tau) \le 1 + \epsilon, \; \forall \tau$ from Def. 2.1. Consequently, we can use importance sampling to write $J(\pi_A^i, \pi_B^j)$ in relation to $J(\pi_B^i, \pi_B^j)$:

$$
\begin{aligned}
J(\pi_A^i, \pi_B^j) &= \mathbb{E}_{\tau \sim \rho(\pi_A^i, \pi_B^j)}\left[G(\tau)\right] \\
&= \int_\tau P(\tau \mid \pi_A^i, \pi_B^j)\, G(\tau) \\
&= \int_\tau \frac{P(\tau \mid \pi_A^i, \pi_B^j)}{P(\tau \mid \pi_B^i, \pi_B^j)}\, P(\tau \mid \pi_B^i, \pi_B^j)\, G(\tau) \\
&= \int_\tau r(\tau)\, P(\tau \mid \pi_B^i, \pi_B^j)\, G(\tau) = \mathbb{E}_{\tau \sim \rho(\pi_B^i, \pi_B^j)}\left[r(\tau)\, G(\tau)\right]
\end{aligned}
$$

Because $1 - \epsilon \le r(\tau) \le 1 + \epsilon$ and $G(\tau) > 0, \; \forall \tau \in \mathcal{T}$, the expected return $J(\pi_A^i, \pi_B^j)$ has the following upper and lower bounds:

$$(1 - \epsilon)\, \mathbb{E}_{\tau \sim \rho(\pi_B^i, \pi_B^j)}\left[G(\tau)\right] \le J(\pi_A^i, \pi_B^j) \le (1 + \epsilon)\, \mathbb{E}_{\tau \sim \rho(\pi_B^i, \pi_B^j)}\left[G(\tau)\right]$$
$$(1 - \epsilon)\, J_{\text{SP}}(\pi_B) \le J(\pi_A^i, \pi_B^j) \le (1 + \epsilon)\, J_{\text{SP}}(\pi_B)$$

This means that $\pi_A^i$ is compatible with $\pi_B$ (Def. 2.2).

∴ If $\pi_A^i$ is similar to $\pi_B^i$, then $\pi_A^i$ is compatible with $\pi_B$.
Remark: Assumption A.3 can be satisfied by offsetting the joint return of all trajectories such that minτ G(τ) > 0. However, since we use MAPPO as the base algorithm, the expected return is subtracted by a baseline to compute the advantage during the policy update, which removes the effect of the offset. In practice, even under environments that do not satisfy G(τ) > 0, LIPO can still discover diverse solutions effectively, as shown in the experiments.
B DERIVATION OF THE LOWER BOUND OF THE MI OBJECTIVE
We provide the derivation of Eq. 7 here. Let $p(z^i \mid \{o^i, a^i\})$ be the true posterior of $z^i$ and $q_\phi$ be the approximation of $p$ parameterized by $\phi$. The lower bound of $I(\{o^i, a^i\}; z^i)$ can be derived as follows:

$$
\begin{aligned}
I(\{o^i, a^i\}; z^i) &= H(z^i) - H(z^i \mid \{o^i, a^i\}) \\
&= H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\left[\log p(z^i \mid \{o^i, a^i\})\right] \\
&= H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\left[\log q_\phi(z^i \mid \{o^i, a^i\})\right] + \mathrm{KL}\left(p(z^i \mid \{o^i, a^i\}) \,\|\, q_\phi(z^i \mid \{o^i, a^i\})\right) \\
&\ge H(z^i) + \mathbb{E}_{z^i, (o^i, a^i)}\left[\log q_\phi(z^i \mid \{o^i, a^i\})\right]
\end{aligned}
$$
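For intuition, a minimal PyTorch-style sketch of training with this variational bound (a sketch only, assuming a uniform prior over z so that H(z) is constant and can be dropped from the loss; all sizes and names are hypothetical):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: 14-d observations, 5 action logits, 8 latent values.
obs_dim, act_dim, n_latent = 14, 5, 8
discriminator = nn.Sequential(nn.Linear(obs_dim + act_dim, 64),
                              nn.ReLU(), nn.Linear(64, n_latent))

def mi_lower_bound_loss(obs, act_logits, z_index):
    # q_phi(z | o, a): classify which latent value produced this (o, a) pair.
    logits = discriminator(torch.cat([obs, act_logits], dim=-1))
    # Cross-entropy equals -E[log q_phi(z | o, a)]; minimizing it maximizes
    # the variational lower bound on I({o, a}; z) up to the constant H(z).
    return F.cross_entropy(logits, z_index)

loss = mi_lower_bound_loss(torch.randn(32, obs_dim),
                           torch.randn(32, act_dim),
                           torch.randint(0, n_latent, (32,)))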
C ADDITIONAL ENVIRONMENT DETAILS
C.1 ONE-STEP COOPERATIVE MATRIX GAME
A game of CMG is defined by a tuple (M, {km}, {rm}), where M is the number of solutions. For m ∈ {1, ..., M}, km is the number of compatible actions and rm is the reward of solution m. The game is stateless and terminates immediately after both players simultaneously choose an action. By choosing the same solution, both players get the reward rm associated with the chosen solution. This means that the solutions are not equally optimal if the values in {rm} are not identical. Similarly, if the values in {km} are not identical, then the solutions are not equally likely to be chosen by a uniform joint policy. We consider two setups of CMG: sub-optimal (CMG-S) and hard-to-find (CMG-H).
For CMG-S, we set $(M = 32,\; k_m = 8,\; r_m = 0.5\,(1 + \frac{m-1}{M-1}))$, which causes each solution to have a different reward, ranging from 0.5 to 1. There are 32 solutions, each with 8 compatible actions, giving 32 × 8 = 256 possible actions for each player. For CMG-H, we use $(M = 32,\; k_m = m,\; r_m = 1)$, which makes solutions with a smaller number of compatible actions harder to find by random exploration. There are 32 solutions with different numbers of compatible actions, ranging from 1 to 32. The number of available actions for each player is $\sum_{m=1}^{32} m = 528$.
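A minimal sketch of the CMG payoff structure (assuming the reward is r_m when both players pick actions belonging to the same solution m, and 0 otherwise; names are illustrative):

import numpy as np

def build_cmg_payoff(M, k, r):
    """M solutions; k[m] compatible actions and reward r[m] for solution m.
    Returns an (A, A) payoff matrix over the flat action space."""
    A = sum(k)
    payoff = np.zeros((A, A))
    start = 0
    for m in range(M):
        block = slice(start, start + k[m])
        payoff[block, block] = r[m]   # same-solution action pairs pay r_m
        start += k[m]
    return payoff

M = 32
# CMG-S: 32 solutions, 8 actions each, rewards from 0.5 to 1 (256 actions).
payoff_s = build_cmg_payoff(M, [8] * M,
                            [0.5 * (1 + (m - 1) / (M - 1)) for m in range(1, M + 1)])
# CMG-H: solution m has m compatible actions, reward 1 (528 actions).
payoff_h = build_cmg_payoff(M, list(range(1, M + 1)), [1.0] * M)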
C.2 POINT MASS RENDEZVOUS (PMR)
PMR is based on the Multi-Agent Particle Environment (Lowe et al., 2017; Terry et al., 2020). The observation of each agent includes its absolute position, current velocity, and the relative distances to the landmarks and the other agent. These features are concatenated as a 1-D vector of length 14. The possible actions are: no-op and move {up, down, left, right}. In PMR-C, the start positions of the agents are {(0.3, 0), (-0.3, 0)} and the landmark positions are {(1.59, 1.59), (1.59, -1.59), (-1.59, 1.59), (-1.59, -1.59)}. For PMR-L, the start and landmark positions are {(1,0), (0,1)} and {(0,2.25), (0,0.75), (0,-0.75), (0,-2.25)}. An episode is terminated after 50 timesteps. The agents are incentivized to go to the same landmark and stay close together with the reward function
$$r_t = 1 - d(p^i, c) - \min_{l \in L} d(l, c),$$

where d(·, ·) is the Euclidean distance between two points, $p^i$ is the 2-D coordinate of agent i, c is the average coordinate of all agents, and L is the set of all landmarks.
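A direct translation of this reward into Python (a minimal sketch; the landmark coordinates follow the PMR-C values listed above):

import numpy as np

def pmr_reward(agent_positions, landmarks):
    """agent_positions: (n_agents, 2); landmarks: (n_landmarks, 2)."""
    c = agent_positions.mean(axis=0)                       # average coordinate
    dist_to_center = np.linalg.norm(agent_positions - c, axis=1)
    nearest_landmark = np.linalg.norm(landmarks - c, axis=1).min()
    # Per-agent reward r_t = 1 - d(p_i, c) - min_l d(l, c)
    return 1.0 - dist_to_center - nearest_landmark

landmarks = np.array([[1.59, 1.59], [1.59, -1.59], [-1.59, 1.59], [-1.59, -1.59]])
print(pmr_reward(np.array([[0.3, 0.0], [-0.3, 0.0]]), landmarks))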
C.3 MULTI-RECIPE OVERCOOKED
We implement a multi-recipe version of the game based on the work of Wu et al. (2021). In this version of Overcooked, there are four ingredients: lettuce, onion, tomato, and carrot. The ingredients are randomly placed at pre-defined positions in the layout. Particularly, the lettuce and the onion are randomly placed on the left or the middle counter. The tomato and the carrot are randomly placed on the right or the middle counter. These ingredients can be composed into different recipes, making each ingredient unique: four recipes (LettuceSalad, TomatoSalad, ChoppedCarrot, ChoppedOnion) require only a single ingredient, while the other two (TomatoLettuceSalad, TomatoCarrotSalad) require two ingredients. The ingredients have to be chopped at the chopping station before being placed on the plate. After the required ingredients are put on the plate, they must be delivered to the delivery station.
Both players have the same egocentric observation and action spaces. The observation is a set of hand-crafted features that represent a local view of the environment. Specifically, we use the following features: absolute position and facing direction, relative distance to the objects and the other agent, state of the ingredients, four booleans indicating if the agent is next to a counter in four cardinal positions, currently held items, the state of the held foods, and the type and state of the items in front of the agent. These features are concatenated as a 1-D vector of length 54. At every timestep, each player has to choose one of the six possible actions: no op, move {up, down, left, right}, and interact. An episode lasts at most 200 timesteps and terminates immediately after a successful delivery. An episode without delivery is considered unsuccessful. We incentivize the agents to interact with the
objects and deliver as fast as possible with the following reward function:
$$r_t = r_{\text{interact}} + r_{\text{progress}} + r_{\text{complete}} - p,$$

where $r_{\text{interact}}$ is a shaped reward given when an agent interacts with an object for the first time in an episode, $r_{\text{progress}}$ is given when the players progress toward a recipe completion (i.e., chopping required ingredients or putting chopped ingredients on the plate), $r_{\text{complete}}$ is given upon successful delivery, and p is a penalty. We use $r_{\text{interact}} = 0.5$, $r_{\text{progress}} = 1.0$, $r_{\text{complete}} = 10$, and $p = 0.1$. We note that recipes with more than one ingredient give only slightly higher rewards ($r_{\text{interact}} + r_{\text{progress}}$) but are significantly harder to discover by random exploration than those with one ingredient.
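A minimal sketch of this shaped reward (assuming the penalty p is applied at every timestep; the boolean event flags are hypothetical and would be produced by the game engine):

R_INTERACT, R_PROGRESS, R_COMPLETE, PENALTY = 0.5, 1.0, 10.0, 0.1

def overcooked_reward(first_interaction, progressed, delivered):
    """Event flags for the current timestep -> shaped reward r_t."""
    r = -PENALTY                  # penalty p encourages fast deliveries
    if first_interaction:         # first time touching an object this episode
        r += R_INTERACT
    if progressed:                # chopped a required ingredient / plated it
        r += R_PROGRESS
    if delivered:                 # successful delivery ends the episode
        r += R_COMPLETE
    return r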
Additional experimental details: For specialist agents, the rewards are given when interacting with, progressing toward, or completing a specific recipe. For the held-out populations, we remove incompetent policies, i.e., those with an expected return of less than zero, from the test populations created by TrajeDi and LIPO. We do not remove those in the training populations. We do this because testing with an incompetent policy does not give any meaningful information, as almost all episodes will be unsuccessful.
D IMPLEMENTATION DETAILS
Algorithm 1: Training process of LIPO (on-policy). This pseudocode is based on self-play; in the original listing, text related to the MI objective is shown in blue and LIPO-specific code is highlighted in green.

Input: a population P = {π_A | 1 ≤ A ≤ N}, the number of XP pairs used to approximate J̃_XP(π_A, P) (n_xp), the number of players in an episode (m), and the numbers of SP and XP episodes per iteration (E_SP and E_XP).

while not done do
    for A ∈ {1, ..., N} do
        B_sp ← GetEpisodeRollouts(π_A, π_A, E_SP, m)
        Compute J_SP(π_A) using B_sp
        B_xp ← GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m)
        Compute J̃_XP(π_A, P) using B_xp (Eq. 4)
        Compute L_MI using B_sp and B_xp (Eq. 9)
        θ_A ← θ_A − ∇_{θ_A} [−J_SP + λ_XP J̃_XP + λ_MI L_MI]
        ϕ_A ← ϕ_A − λ_MI ∇_{ϕ_A} L_MI

Algorithm 2: Rollout collection functions

Function GetEpisodeRollouts(π_A, π_B, E, m):
    B ← {}
    for i ∈ {1, ..., m} do
        for episode ∈ {1, ..., E/m} do
            z^1, ..., z^m ∼ p(z^1, ..., z^m)
            z^j ← {z^k}_{k ≠ i}
            π^j(·|·, z^j) = Π_{k ≠ i} π^k(·|·, z^k)
            τ ∼ ρ(π_A^i(·|·, z^i), π_B^j(·|·, z^j))
            B ← B ∪ {τ}
    return B

Function GetCrossPlayRollouts(π_A, P, n_xp, E_XP, m):
    B_xp ← {}
    P′_{−A} ← SampleWithoutReplacement(P_{−A}, n_xp)
    for π_B ∈ P′_{−A} do
        B ← GetEpisodeRollouts(π_A, π_B, E_XP / |P′_{−A}|, m)
        B_xp ← B_xp ∪ B
    return B_xp
The pseudocode for LIPO is shown in Algorithms 1 and 2. If there are more than two players (m > 2), π^j represents the joint policy of all players except player i, $\pi^j(a_t^j \mid \tau_t^j) = \Pi_{k \ne i}\, \pi^k(a_t^k \mid \tau_t^k)$. We note that scaling LIPO to more than two players does not increase the training time; it is the same as the two-player setting as long as the numbers of SP and XP episodes are the same. In practice, however, more XP episodes might be needed to better estimate J̃_XP. We use the parameter-sharing technique for better sample efficiency and faster convergence (Tan, 1993; Foerster et al., 2018; Rashid et al., 2018). Assuming that a policy $\pi_A^i$ is a neural network parameterized by $\theta_A^i$, this means that for a joint policy $(\pi_A^1, \pi_A^2)$ we have $\theta_A^1 = \theta_A^2$. Still, $\pi_A^1$ and $\pi_A^2$ can behave differently as they observe different parts of the environment and have a different player indicator concatenated with their local observations.
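As a concrete illustration, a minimal PyTorch sketch of this parameter-sharing scheme (a sketch only; the one-hot player indicator and the layer sizes are illustrative, chosen to match the two 64-unit hidden layers described below):

import torch
import torch.nn as nn

obs_dim, n_actions, n_players = 54, 6, 2
# One set of weights shared by both players: theta^1_A = theta^2_A.
shared_policy = nn.Sequential(
    nn.Linear(obs_dim + n_players, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, n_actions))

def action_logits(obs, player_id):
    # Behaviors can still differ: each player sees its own local observation
    # plus its player indicator concatenated to the input.
    one_hot = torch.nn.functional.one_hot(
        torch.tensor([player_id]), n_players).float()
    return shared_policy(torch.cat([obs, one_hot], dim=-1))

logits = action_logits(torch.randn(1, obs_dim), player_id=0)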
All methods are implemented on top of MAPPO except MAVEN. The critic, policy, and discriminator are feed-forward neural networks with two hidden layers, each having 64 units. For a fair comparison, we use the same or more environment steps in the policy update of the baselines compared to LIPO. Common hyperparameters of methods based on MAPPO are shown in Tab. 2.
D.1 MAPPO
MAPPO is the base MARL algorithm for all baselines except MAVEN. The policy parameters are shared among all policies. The critic takes a state of the environment and outputs an expected return of a given global state. The global state is provided by the environment and only used during training. For the complete training objectives of MAPPO, we refer the reader to Appendix A of Yu et al. (2021).
D.2 MULTI SP
A simple but effective way to produce diverse agents is to train multiple SP agents with different neural network initializations and random seeds. Specifically, each run produces a joint policy πA that maximizes JSP(πA) using MAPPO.
D.3 SPMI
A single run of an SP agent trained with an added MI objective $I(\{o^i, a^i\}; z^i)$. SPMI uses a shared z for both policies and considers each z as a different joint policy. We train SPMI using the same training procedure as LIPO by setting N = 1, λXP = 0 and $z^1 = z^2$. The discriminator takes a local observation $o^i$ and the action distribution $\pi^i(\cdot \mid o^i)$ as inputs and outputs the discrete probability of the latent variable. The latent variable of all policies is shared during an episode.
D.4 MAVEN
MAVEN (Mahajan et al., 2019) is explicitly designed for learning diverse solutions in cooperative multi-agent environments. A joint policy is represented as π(·|τ, z), and each mode of behavior is represented by the latent variable z. Similar to SPMI, MAVEN uses a shared z for all policies. We use the same network architecture presented in Mahajan et al. (2019) with recurrent neural networks. However, we do not use the hierarchical policy but sample z from the uniform distribution. The latent variable of all policies is shared.
D.5 MULTI SPMI AND MULTI MAVEN
A population containing joint policies from multiple runs of SPMI and MAVEN. Like Multi SP, each run uses different neural network initializations and random seeds. The population size is |P| = nseed|z|, where nseed is the number of runs. Notably, this baseline uses the training data differently from the base algorithms: instead of training one long run, this approach allows the policy to "restart" with a different initialization of the neural networks. For example, training a single run with |z| = N, nseed = 1 might not discover as many solutions as training nseed runs with |z| = N/nseed, even though the population size is the same. Empirically, we find that multiple shorter runs can find more solutions than a single long run of the corresponding algorithm. Thus, we omit the results of the base algorithms in multi-recipe Overcooked.
D.6 TRAJEDI
TrajeDi produces a population of diverse agents that also maximize the expected return in cooperative environments. The diversity measure of this method is based on the Jensen-Shannon divergence (JSD) between the trajectory distributions of the policies. Different from the original implementation, we remove the best-response (BR) policy from the population. Since the BR policy might work well with only a subset of solutions, removing BR potentially increases the number of variations in the population. Our modified loss is:
$$\mathcal{L} = -\left[\sum_{A=1}^{N} J_{\text{SP}}(\pi_A) + \alpha\, \text{JSD}_\gamma(\pi_1, ..., \pi_N)\right], \qquad (11)$$
where JSD is the proposed diversity objective of TrajeDi, and α and γ are the hyperparameters of TrajeDi.
D.7 LIPO
LIPO uses the same implementation as SPMI except that LIPO uses an independent latent variable for each policy and λXP > 0. Additionally, LIPO uses extra critics for the XP trajectories. In total, LIPO has an SP critic $V_{\text{sp}}^{\pi_A}$ and N − 1 XP critics $\{V_{\text{xp}}^{\pi_A, \pi_B} \mid \pi_B \in \mathcal{P}_{-A}\}$. In each training iteration, LIPO collects SP and XP trajectories of all policy combinations. MAPPO is used both for maximizing JSP and for minimizing J̃XP. The critics are trained using the SP trajectories and all of the XP trajectories, while the policy is trained using the SP trajectories and the XP trajectories from the XP pair that has the highest joint return, max(Bxp).
D.8 GENERALIST AGENT
The policy and critic networks of a generalist agent use two 256-unit GRU layers (Cho et al., 2014) followed by a linear layer. The input also includes the reward and action of the previous timestep. We use MAPPO for training a generalist agent. We train both the policy and the critic with a batch size of 320,000 timesteps using truncated backpropagation through time (BPTT). The samples are reused for 15 epochs. Each minibatch contains 1,600 sequences with a maximum length of 50 timesteps. We also use learning rate annealing; specifically, the learning rate of the generalist agent is linearly annealed from 0.005 to 0.003. Other hyperparameters are shared with the other methods (Tab. 2). A training partner for a generalist agent is sampled uniformly from the training population at the beginning of an episode.
E RELATIONSHIP WITH RAHMAN ET AL. (2022)
Rahman et al. (2022) propose to optimize the self-play returns while maximizing the diversity term Div(C), where C is a N ×N cross-play payoff matrix. Specifically, they propose to learn diverse policies via the following objective:
$$\max_{C} \; \mathrm{Tr}(C) + \mathrm{Div}(C) \qquad (12)$$
They propose to use Div(C) = Det(κ(C)), where κ(C) is an N × N matrix with $\kappa_{i,j}(C)$ being the similarity between policies $\pi_i$ and $\pi_j$. The radial basis function (RBF) kernel of the empirical returns is used to measure the similarity between two policies:

$$\kappa_{i,j}(C) = \exp\left(-\frac{\|C_{i,\cdot} - C_{j,\cdot}\|^2}{\sigma^2}\right), \qquad (13)$$
where $C_{i,\cdot}$ is the i-th row of C. In other words, $C_{i,\cdot}$ is the vector containing the empirical returns of $\pi_i$ when matched with the other policies in the population. Intuitively, this objective diversifies the policies via the expected returns, similar to LIPO. Using the same notation for the cross-play matrix, we can write the objective of training a LIPO population as:
$$\max_{C} \; \mathrm{Tr}(C) - \lambda_{\text{XP}} \sum_{i} \max_{\substack{1 \le j \le N \\ j \ne i}} C_{i,j} \qquad (14)$$
This objective maximizes the diagonal of the cross-play matrix (JSP) and minimizes its off-diagonal entries (J̃XP). It is a special case of Eq. 12 where Div(C) is based on policy compatibility.
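For concreteness, a small NumPy sketch that evaluates both population-level objectives on a given cross-play payoff matrix (the RBF bandwidth σ and λ_XP values are illustrative):

import numpy as np

def rahman_objective(C, sigma=1.0):
    # Tr(C) + Det(kappa(C)) with an RBF kernel over rows of C (Eq. 12-13).
    diff = C[:, None, :] - C[None, :, :]
    kappa = np.exp(-np.sum(diff ** 2, axis=-1) / sigma ** 2)
    return np.trace(C) + np.linalg.det(kappa)

def lipo_objective(C, lam_xp=0.3):
    # Tr(C) - lam_xp * sum_i max_{j != i} C_ij (Eq. 14).
    off_diag = C + np.diag(np.full(len(C), -np.inf))   # mask the diagonal
    return np.trace(C) - lam_xp * off_diag.max(axis=1).sum()

C = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.0],
              [0.1, 0.1, 1.0]])
print(rahman_objective(C), lipo_objective(C))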
F DISCUSSIONS
Agents trained with LIPO are incentivized to act adversarially toward agents that behave differently from itself. This behavior might not be desirable for certain downstream tasks. For example, agents produced by LIPO might not be suitable for interacting with humans as they would refuse to conform with the user. However, as shown in Sec. 4.6, training a generalist agent with these agents would have the opposite effect: the generalist agent would try to comply with the current partner’s preference.
Since LIPO produces a population of near-optimal solutions, a generalist agent trained with a LIPO population might not coordinate well with significantly sub-optimal agents. In a prior work, Strouse et al. (2021) show that augmenting the training population with past checkpoints (FCP) helps the trained generalist agent to effectively coordinate with sub-optimal agents. Since LIPO and FCP are orthogonal, populations created by LIPO can also be augmented in the same way as FCP.
Previous works find incompatible policies to be undesirable since they are generally the result of coordinated symmetry breaking (Bard et al., 2020; Hu et al., 2020; 2021); these policies perform poorly when interacting with unseen partners. However, we show that learning incompatible policies can be useful for generating behaviorally diverse agents in various scenarios. The produced agents can then be used as training partners for a generalist agent. Using LIPO in environments where many solutions are equivalent may produce such undesirable symmetry-breaking conventions. We believe that LIPO can be combined with other techniques, e.g., other-play (Hu et al., 2020) and the equivariant coordinator (Muglich et al., 2022), to avoid learning arbitrary symmetry-breaking conventions. We leave the study of the combination of LIPO and these techniques for future work. A concurrent work by Cui et al. (2023) proposes an extension of LIPO, combining insights from off-belief learning (Hu et al., 2021) to avoid the "sabotaging" behavior of LIPO agents.
G LIMITATIONS
LIPO requires an additional hyperparameter λXP. If λXP is too big, the main RL objective, JSP, may be interfered with, resulting in an incompetent joint policy (Sec. 4.2). An adaptive mechanism that selects a suitable value for λXP at different stages of training could help increase training stability.
Although LIPO can be fully parallelized, it requires more computation than the baselines to get an accurate approximation of J̃XP, which makes it harder to scale up to bigger population sizes. Instead of collecting all policy pairs, sampling a portion of policy pairs to approximate J̃XP could reduce computation cost and training time at a potential cost of diversity (Sec. 4.3). Instead of using a uniform sampling, a mechanism that selects the best pair to sample (e.g., bandit algorithm) might help mitigate the diversity loss from using lower nxp.
In this work, we only investigate the effectiveness of LIPO under a new set of cooperative environments with multiple discrete solutions. Investigating LIPO in other well-known environments where the solution space might be continuous, e.g., continuous control (Peng et al., 2021), or environments with only a single goal, e.g., SMAC (Samvelyan et al., 2019), might yield interesting results and pose different challenges. We leave this further investigation as our future work.
H HYPERPARAMETERS (CMG AND PMR)
We provide the searched values of each method in Tab. 3, 4, 5, 6, 7 and 8. The hyperparameters are searched individually for each population size. We use three random seeds for each set of hyperparameters. We do not use any validation method. Instead, we present the results using the best parameters in the main paper. For LIPO, we set λMI as 0.0 and 0.5 in CMG and PMR, respectively.
I HYPERPARAMETERS (OVERCOOKED)
For each method, we use the parameters that give the highest entropy to generate five populations for Sec. 4.5, and Sec. 4.6. The searched values of each method are:
• TrajeDi: We perform a grid search with following hyperparameters: α ∈ {5, 10} and γ ∈ {0, 0.5}. We use α = 5 and γ = 0.5 for the results in the paper.
• Multi SPMI: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• Multi MAVEN: We perform a grid search of λMI ∈ {5, 10}. We use λMI = 5 for the results in the paper.
• LIPO: We perform a grid search with following hyperparameters: λXP ∈ {0.2, 0.3} and λMI ∈ {0.1, 0.5}. We use λXP = 0.3 and λMI = 0.5 for the results in the paper.
J ADDITIONAL RESULTS
J.1 ADDITIONAL ABLATION RESULTS
We provide additional results of numbers of learned solutions with varying N and λXP (Fig. 10), numbers of competent agents in CMG with varying N and λXP (Fig. 11), and numbers of learned solutions with varying nxp in CMG (Fig. 12) here. These results are consistent with the analysis presented in Sec. 4.2 and 4.3.
J.1.1 ADDITIONAL RESULTS WITH VARYING N AND λXP
J.1.2 ADDITIONAL RESULTS WITH VARYING nxp
J.2 RADAR PLOTS OF RECIPE FREQUENCIES
Fig. 13 shows recipe frequencies of the population with highest recipe entropy (out of five runs) from each method. The frequencies are calculated based on completed recipe from 1,000 self-play episodes of each agent as described in Sec. 4.5. Qualitatively, we can see that agents in a LIPO
population have more distinct recipe frequencies. That is, agents are different from each other in terms of recipe preference.
J.3 VISUALIZATION OF BEHAVIORS
We visualize the behaviors joint policies produced by LIPO in PMR and Overcooked at https: //sites.google.com/view/iclr-lipo-2023. Here, we show snapshots of four joint policies that have a distinct recipe preference in Overcooked. | 1. What is the focus and contribution of the paper on multi-agent reinforcement learning?
2. What are the strengths of the proposed approach, particularly in terms of its ability to learn diverse behaviors?
3. What are the weaknesses of the paper regarding its limitations in scalability?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work introduces LIPO (Learning Incompatible Policies), a MARL method that uses a policy-compatibility-based objective and a mutual information regularizer in order to learn a diverse set of behaviours in cooperative tasks. LIPO is evaluated on a wide set of environments and shows a superior capacity to discover policies compared to the baselines. Additionally, the work includes a comprehensive study of the parameter values and their effect on the learned behaviours.
Strengths And Weaknesses
Strength:
Diverse set of benchmarks and baselines
Comprehensive study of the impact of the parameters
Ample discussion of results, as well as limitations of the method
Weakness:
Missing a more detailed discussion of scalability in terms of the number of agents, since the current work always assumes 2 agents.
Clarity, Quality, Novelty And Reproducibility
The paper is clear and well-written. It is novel and original, as it uses the concepts of policy compatibility and mutual information in novel ways to create a diverse set of behaviours for cooperative tasks. The work includes a reproducibility statement and is accompanied by the code (however, I did not run the code myself).
ICLR | Title
Vector Quantized Wasserstein Auto-Encoder
Abstract
Learning deep discrete latent representations offers a promise of better symbolic and summarized abstractions that are more useful to subsequent downstream tasks. Recent work on Vector Quantized Variational Auto-Encoder (VQ-VAE) has made substantial progress in this direction. However, this quantizes latent representations using the online k-means algorithm, which suffers from poor initialization and non-stationary clusters. To strengthen the clustering quality for the latent representations, we propose Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), intuitively developed based on the clustering viewpoint of the Wasserstein (WS) distance. Specifically, we endow a discrete distribution over the codewords and learn a deterministic decoder that transports the codeword distribution to the data distribution via minimizing a WS distance between them. We develop further theories to connect it with the clustering viewpoint of the WS distance, allowing us to have a better and more controllable clustering solution. Finally, we empirically evaluate our method on several well-known benchmarks, where it achieves better qualitative and quantitative performance than the baselines in terms of codebook utilization and image reconstruction/generation.
1 INTRODUCTION
Learning compact yet expressive representations from large-scale and high-dimensional unlabeled data is an important and long-standing task in machine learning (Kingma & Welling, 2013; Chen et al., 2020; Chen & He, 2021; Zoph et al., 2020). Among many different kinds of methods, Variational Auto-Encoder (VAE) (Kingma & Welling, 2013) and its variants (Tolstikhin et al., 2017; Alemi et al., 2016; Higgins et al., 2016; Voloshynovskiy et al., 2019) have shown great success in unsupervised representation learning. Although these continuous representation learning methods have been applied successfully to various problems ranging from images (Pathak et al., 2016; Goodfellow et al., 2014; Kingma et al., 2016), video and audio (Reed et al., 2017; Oord et al., 2016; Kalchbrenner et al., 2017), in some contexts, input data are more naturally modeled and encoded as discrete symbols rather than continuous ones. For example, discrete representations are a natural fit for complex reasoning, planning and predictive learning (Van Den Oord et al., 2017). This motivates the need of learning discrete representations, preserving the insightful characteristics of input data.
Vector Quantization Variational Auto-Encoder (VQ-VAE) (Van Den Oord et al., 2017) is a pioneer generative model, which successfully combines the VAE framework with discrete latent representations. In particular, vector quantized models learn a compact discrete representation using a deterministic encoder-decoder architecture in the first stage, and subsequently apply this highly compressed representation to various downstream tasks, with examples including image generation (Esser et al., 2021), cross-modal translation (Kim et al., 2022), and image recognition (Yu et al., 2021). While VQ-VAE has been widely applied to representation learning in many areas (Henter et al., 2018; Baevski et al., 2020; Razavi et al., 2019; Kumar et al., 2019; Dieleman et al., 2018; Yan et al., 2021), it is known to suffer from codebook collapse, i.e., low codebook usage where most embedded latent vectors are quantized to just a few discrete codewords while the other codewords are rarely used, or dead, due to the poor initialization of the codebook, reducing the information capacity of the bottleneck (Roy et al., 2018; Takida et al., 2022; Yu et al., 2021).
To mitigate this issue, additional training heuristics were proposed, such as the exponential moving average (EMA) update (Van Den Oord et al., 2017; Razavi et al., 2019), soft expectation maximization (EM) update (Roy et al., 2018), codebook reset (Dhariwal et al., 2020; Williams et al., 2020). Notably, soft expectation maximization (EM) update (Roy et al., 2018) connects the EMA update
with an EM algorithm and softens the EM algorithm with a stochastic posterior. Codebook reset randomly reinitializes unused/low-used codewords to one of the encoder outputs (Dhariwal et al., 2020) or those near codewords of high usage Williams et al. (2020). Takida et al. (2022) suspects that deterministic quantization is the cause of codebook collapse and extends the standard VAE with stochastic quantization and trainable posterior categorical distribution, showing that the annealing of the stochasticity of the quantization process significantly improves the codebook utilization.
Additionally, the WS distance has been applied successfully to generative models and continuous representation learning (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2017) owing to its nice properties and rich theory. It is natural to ask: "Can we take advantage of the intuitive properties of the WS distance and its mature theory for learning highly compact yet expressive discrete representations?" Toward this question, in this paper, we develop solid theories by connecting the theory bodies and viewpoints of the WS distance, generative models, and deep discrete representation learning. In particular, a) we first endow a discrete distribution over the codebook and propose learning a "deterministic decoder transporting the codeword to data distributions" via minimizing the WS distance between them; b) to devise a trainable algorithm, we develop Theorem 3.1 to equivalently turn the above WS minimization into push-forwarding the data to codeword distributions via minimizing a WS distance between "the latent representation and codeword distributions"; c) more interestingly, our Corollary 3.1 proves that when minimizing the WS distance between the latent representation and codeword distributions, the codewords tend to flexibly move to the clustering centroids of the latent representations with a control on the proportion of latent representations associated with a centroid. We argue and empirically demonstrate that by using the clustering viewpoint of a WS distance to learn the codewords, we can obtain more controllable and better centroids than using simple k-means as in VQ-VAE (cf. Sections 3.1 and 5.2).
Our method, called Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), applies the WS distance to learn a more controllable codebook, hence leading to an improvement in codebook utilization. We conduct comprehensive experiments to demonstrate our key contributions by comparing with VQ-VAE (Van Den Oord et al., 2017) and SQ-VAE (Takida et al., 2022) (i.e., recent work that improves codebook utilization). The experimental results show that our VQ-WAE achieves better codebook utilization with higher codebook perplexity, hence leading to lower (compared with VQ-VAE) or comparable (compared with SQ-VAE) reconstruction error, with a significantly lower reconstructed Fréchet Inception Distance (FID) score (Heusel et al., 2017). Generally, a better quantizer in stage 1 naturally contributes to stage-2 downstream tasks (Yu et al., 2021; Zheng et al., 2022). To further demonstrate this, we conduct comprehensive experiments on four benchmark datasets for both unconditional and class-conditional generation tasks. The experimental results indicate that from the codebooks of our VQ-WAE, we can generate better images with lower FID scores.
2 VECTOR QUANTIZED VARIATIONAL AUTO-ENCODER
Given a training set D = {x_1, ..., x_N} ⊂ R^V, VQ-VAE (Van Den Oord et al., 2017) aims at learning a codebook formed by a set of codewords $C = \{c_k\}_{k=1}^{K} \in \mathbb{R}^{K \times D}$ on the latent space $\mathcal{Z} \subset \mathbb{R}^D$, an encoder $f_e$ to map the data examples to the codewords, and a decoder $f_d$ (i.e., $q(x \mid z)$) to reconstruct accurately the data examples from the codewords. Given a data example x, the encoder $f_e$ (i.e., $p(z \mid x)$) associates x to the codeword $\bar{f}_e(x) = c$ defined as

$$c = \mathrm{argmin}_k\, d_z(f_e(x), c_k),$$
where dz is a metric on the latent space.
The objective function of VQ-VAE is as follows:

$$\mathbb{E}_{x \sim \mathbb{P}_x}\left[ d_x\left(f_d\left(\bar{f}_e(x)\right), x\right) + d_z\left(\mathrm{sg}(f_e(x)), C\right) + \beta\, d_z\left(f_e(x), \mathrm{sg}(C)\right) \right],$$

where $\mathbb{P}_x = \frac{1}{N}\sum_{n=1}^{N} \delta_{x_n}$ is the empirical data distribution, sg specifies stop gradient, $d_x$ is a cost metric on the data space, β is set between 0.1 and 2.0 (Van Den Oord et al., 2017), and $d_z(f_e(x), C) = \sum_{c \in C} d_z(f_e(x), c)$.
The purpose of VQ-VAE training is to form the latent representations in clusters and adjust the codewords to be the centroids of these clusters.
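For reference, a minimal PyTorch sketch of the nearest-codeword quantization and the straight-through (copy-gradient) estimator commonly used to train such models (a sketch only; sizes are illustrative):

import torch

K, D = 512, 64                                   # codebook size, codeword dim
codebook = torch.randn(K, D, requires_grad=True)

def quantize(z_e):
    """z_e: (B, D) encoder outputs -> nearest codewords with copied gradients."""
    dists = torch.cdist(z_e, codebook)           # (B, K) pairwise distances
    codes = dists.argmin(dim=1)                  # index of the closest codeword
    z_q = codebook[codes]
    # Straight-through: forward pass uses z_q, backward copies gradients to z_e.
    return z_e + (z_q - z_e).detach(), codes

z_q, codes = quantize(torch.randn(8, D))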
3 CONTROLLABLE CODEBOOKS WITH WASSERSTEIN QUANTIZATION
We present the theoretical development of our VQ-WAE framework which connects the viewpoints of the WS distance, generative models, and deep discrete representation learning in Section 3.1. Specifically, we propose to learn a ”deterministic decoder transporting the codeword to data distributions” via minimizing the WS distance between them (Figure 1a (Top)). We then turn the above WS minimization to push-forwarding the data to codeword distribution via minimizing a WS distance between ”the latent representation and codeword distributions” (Figure 1a (Bottom)). We prove that when minimizing the WS distance between the latent representation and codeword distributions, the codewords tend to flexibly move to the clustering centroids of the latent representations with a control on the proportion of latent representations associated with a centroid (Figure 1b). Based on the theoretical development, we devise a practical algorithm for VQ-WAE in Section 3.2.
3.1 THEORETICAL DEVELOPMENT
Given a training set D = {x_1, ..., x_N} ⊂ R^V, we wish to learn a codebook $C = \{c_k\}_{k=1}^{K} \subset \mathbb{R}^{K \times D}$ on a latent space $\mathcal{Z}$ and an encoder to map each data example to a given codeword, preserving insightful characteristics carried in the data. We first endow a discrete distribution over the codewords as $\mathbb{P}_{c,\pi} = \sum_{k=1}^{K} \pi_k \delta_{c_k}$ with the Dirac delta function δ and the weights $\pi \in \Delta_{K-1} = \{\pi' \ge 0 : \|\pi'\|_1 = 1\}$. We aim to learn a decoder function $f_d : \mathcal{Z} \to \mathcal{X}$ (i.e., mapping from the latent space $\mathcal{Z} \subset \mathbb{R}^D$ to the data space $\mathcal{X}$), the codebook C, and the weights π, to minimize:

$$\min_{C,\pi} \min_{f_d} \mathcal{W}_{d_x}\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right), \qquad (1)$$

where $\mathbb{P}_x = \frac{1}{N}\sum_{n=1}^{N} \delta_{x_n}$ is the empirical data distribution and $d_x$ is a cost metric on the data space.
We interpret the optimization problem (OP) in Eq. (1) as follows. Given a discrete distribution $\mathbb{P}_{c,\pi}$ on the codewords, we use the decoder $f_d$ to map the codebook C to the data space and consider $\mathcal{W}_{d_x}(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x)$ as the codebook-data distortion w.r.t. $f_d$. We subsequently learn $f_d$ to minimize the codebook-data distortion given $\mathbb{P}_{c,\pi}$ and finally adjust the codebook C and π to minimize the optimal codebook-data distortion. To offer more intuition for the OP in Eq. (1), we introduce the following lemma.
Lemma 3.1. Let $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ be the optimal solution of the OP in Eq. (1). Assume K < N; then $C^*$, $\pi^*$, and $f_d^*$ are also the optimal solution of the following OP:

$$\min_{f_d} \min_{\pi} \min_{\sigma \in \Sigma_\pi} \sum_{n=1}^{N} d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right), \qquad (2)$$

where $\Sigma_\pi$ is the set of assignment functions σ : {1, ..., N} → {1, ..., K} such that the cardinalities $|\sigma^{-1}(k)|, k = 1, ..., K$ are proportional to $\pi_k, k = 1, ..., K$.¹ Lemma 3.1 states that for the optimal solution $C^* = \{c_k^*\}$, $\pi^*$, and $f_d^*$ of the OP in Eq. (1), $\{f_d^*(c_k^*)\}_{k=1}^{K}$ become the optimal clustering centroids of the optimal clustering solution which minimizes the distortion. Inspired by Wasserstein Auto-Encoder (Tolstikhin et al., 2017), we establish the following theorem to engage the OP in (1) with the latent space. Theorem 3.1. We can equivalently turn the optimization problem in (1) into
$$\min_{C,\pi,f_d} \; \min_{\bar{f}_e :\, \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \; \mathbb{E}_{x \sim \mathbb{P}_x}\left[ d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right], \qquad (3)$$

where $\bar{f}_e$ is a deterministic discrete encoder mapping a data example x directly to the codebook.
First, we learn both the codebook C and the weights π. Second, ours seeks a deterministic discrete encoder f̄e mapping data example x directly to a codeword, concurring with vector quantization and serving our further derivations, whereas Theorem 1 in Tolstikhin et al. (2017) involves a probabilistic/stochastic encoder mapping to a continuous latent distribution (i.e., a larger space to search). More importantly, our proof is totally different from that in Tolstikhin et al. (2017) (all proof details are given in Appendix A).
Additionally, $\bar{f}_e$ is a deterministic discrete encoder mapping a data example x directly to a codeword. To make it trainable, we replace $\bar{f}_e$ by a continuous encoder $f_e : \mathcal{X} \to \mathcal{Z}$ and arrive at the OP:

$$\min_{C,\pi} \min_{f_d, f_e} \left\{ \mathbb{E}_{x \sim \mathbb{P}_x}\left[ d_x\left(f_d\left(Q_C(f_e(x))\right), x\right)\right] + \lambda\, \mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right) \right\}, \qquad (4)$$

where $Q_C(f_e(x)) = \mathrm{argmin}_{c \in C}\, d_z(f_e(x), c)$ is a quantization operator which returns the closest codeword to $f_e(x)$, and the parameter λ > 0.
Particularly, we can rigorously prove that the two optimization problems of interest in (3) and (4) are equivalent under some mild conditions in Theorem 3.2. This rationally explains why we could solve the OP in (4) for our final tractable solution. Theorem 3.2. If we seek fd and fe in a family with infinite capacity (e.g., the family of all measurable functions), the three OPs of interest in (1, 3, and 4) are equivalent.
Moreover, the OP in (4) conveys important meaningful interpretations. Specifically, by minimizing $\mathcal{W}_{d_z}(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi})$ w.r.t. C, π, we aim to learn the codewords that are the clustering centroids of $f_e\#\mathbb{P}_x$ according to the clustering viewpoint of OT, as shown in Corollary 3.1; similar to VQ-VAE, we quantize $f_e(x)$ to the closest codeword using $Q_C(f_e(x)) = \mathrm{argmin}_{c \in C}\, d_z(f_e(x), c)$ and try to reconstruct x from this codeword. Corollary 3.1. Consider minimizing the second term $\min_{f_e, C} \mathcal{W}_{d_z}(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi})$ in (4) given π and assume K < N; its optimal solution $f_e^*$ and $C^*$ are also the optimal solution of the OP:

$$\min_{f_e, C} \min_{\sigma \in \Sigma_\pi} \sum_{n=1}^{N} d_z\left(f_e(x_n), c_{\sigma(n)}\right), \qquad (5)$$

where $\Sigma_\pi$ is the set of assignment functions σ : {1, ..., N} → {1, ..., K} such that the cardinalities $|\sigma^{-1}(k)|, k = 1, ..., K$ are proportional to $\pi_k, k = 1, ..., K$. Corollary 3.1 indicates the aim of minimizing the second term $\mathcal{W}_{d_z}(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi})$ in (4). By this, we adjust the encoder $f_e$ and the codebook C such that the codewords of C become the clustering
¹ E.g., σ is the nearest assignment: $\sigma^{-1}(k) = \{n \mid \bar{f}_e(x_n) = c_k,\; k = \mathrm{argmin}_{k'}\, d_z(f_e(x_n), c_{k'})\}$ is the set of latent representations which are quantized to the k-th codeword.
centroids of the latent representations $\{f_e(x_n)\}_n$ to minimize the codebook-latent distortion (see Figure 1 (Right)). Additionally, at the optimal solution, the optimal assignment function $\sigma^*$, which indicates how latent representations (or data examples) are associated with the clustering centroids (i.e., the codewords), has a valuable property: the cardinalities $|(\sigma^*)^{-1}(k)|, k = 1, ..., K$ are proportional to $\pi_k, k = 1, ..., K$.
Remark: Recall the codebook collapse issue, i.e., most embedded latent vectors are quantized to just a few discrete codewords while the other codewords are rarely used. Corollary 3.1 gives us two important properties: (1) we can control the number of latent representations assigned to each codeword by adjusting π, guaranteeing that all codewords are utilized; (2) the codewords become the clustering centroids of the associated latent representations, minimizing the codebook-latent distortion. These properties allow us to develop our VQ-WAE framework.
3.2 PROPOSED FRAMEWORK
One of the crucial aims of learning meaningful and well-distributed codewords is to make use of each individual codeword efficiently by solving the OP in (4). Specifically, we wish the latent representations to be more uniformly associated with the codewords. Based on Corollary 3.1, pointing out that the number of latent representations associated with the k-th codeword is proportional to $\pi_k$, we hence fix π as a uniform distribution (i.e., $\mathbb{P}_{c,\pi} = \sum_{k=1}^{K} \frac{1}{K}\delta_{c_k}$) to make all the codewords utilized equally by the model, hence boosting the perplexity or the codebook usage.
We now present the practical method based on the OP in (4) with $\mathbb{P}_{c,\pi} = \sum_{k=1}^{K} \frac{1}{K}\delta_{c_k}$. At each iteration, we sample a mini-batch $x_1, ..., x_B$ and then solve the OP in (4) by updating $f_d$, $f_e$ and C based on this mini-batch as follows. Let us denote $\mathbb{P}_b = \frac{1}{B}\sum_{i=1}^{B} \delta_{x_i}$ as the empirical distribution of the current batch, whose push-forward $f_e\#\mathbb{P}_b$ is the distribution of the embedded vectors. Basically, we learn the optimal transportation plan $P^*$ by solving:

$$\mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_b, \mathbb{P}_{c,\pi}\right) = \min_{P \in \Gamma(\mathbf{1}_B, \mathbf{1}_C)} \langle P, D_{c,x} \rangle, \qquad (6)$$

where $\mathbf{1}_B = \left[\frac{1}{B}\right]_B$ is the vector of atom masses of $\mathbb{P}_b$, $\mathbf{1}_C = \left[\frac{1}{C}\right]_C$ is the vector of atom masses of $\mathbb{P}_{c,\pi}$, $\Gamma(\mathbf{1}_B, \mathbf{1}_C)$ is the set of feasible transportation plans, and $D_{c,x} = \left[d_z(f_e(x_i), c_k)\right]_{i,k} \in \mathbb{R}^{B \times K}$ is the cost matrix.
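Eq. (6) is a discrete OT problem with uniform marginals and can be solved exactly. A minimal sketch using the POT library (assuming the package is available as "import ot"; an entropic solver such as ot.sinkhorn could be substituted):

import numpy as np
import ot  # POT: Python Optimal Transport

B, K, D = 8, 512, 64
z = np.random.randn(B, D)            # encoder outputs f_e(x_1..B)
codebook = np.random.randn(K, D)

a = np.full(B, 1.0 / B)              # atom masses of P_b
b = np.full(K, 1.0 / K)              # atom masses of P_{c,pi} (uniform pi)
cost = ot.dist(z, codebook)          # squared-Euclidean cost matrix in R^{BxK}
plan = ot.emd(a, b, cost)            # optimal transportation plan P*
ws_term = np.sum(plan * cost)        # <P*, D> = W_{d_z}(f_e # P_b, P_{c,pi})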
The pseudocode of our VQ-WAE is summarized in Algorithm 1. We use the copy-gradient trick (Van Den Oord et al., 2017) to deal with the back-propagation from the decoder to the encoder for the reconstruction term, while the Wasserstein regularization term $\mathcal{W}_{d_z}(f_e\#\mathbb{P}_b, \mathbb{P}_{c,\pi})$ can be optimized directly without further manipulation. Additionally, the $\mathcal{W}_{d_z}(f_e\#\mathbb{P}_b, \mathbb{P}_{c,\pi})$ term is only utilized in the training phase.
Algorithm 1 VQ-WAE
1: Initialize: encoder $f_e$, decoder $f_d$ and codebook C.
2: for iter in iterations do
3:    Sample a mini-batch of samples $x_1, ..., x_B$ forming the empirical batch distribution $\mathbb{P}_b$
4:    Encode: $z_i = f_e(x_i)$ for $i = 1, ..., B$
5:    Quantize: $\bar{c}_i = \mathrm{argmin}_{c_k}\, d_z(z_i, c_k)$ for $i = 1, ..., B$ // nearest-neighbor assignment
6:    Decode: $\tilde{x}_i = f_d(\bar{c}_i)$ for $i = 1, ..., B$
7:    Optimize $f_e$, $f_d$ and C by minimizing the objective in (4):
$$\frac{1}{B}\sum_{i=1}^{B} d_x\left(\tilde{x}_i, x_i\right) + \lambda \underbrace{\mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_b, \mathbb{P}_{c,\pi}\right)}_{\min_{P \in \Gamma(\mathbf{1}_B, \mathbf{1}_C)} \langle P, D_{c,x} \rangle}$$
8: end for
9: Return: the optimal $f_e$, $f_d$ and C.
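Putting the pieces together, a PyTorch-flavored sketch of one training step of Algorithm 1 (a minimal illustration only: the linear encoder/decoder stand in for the actual convolutional networks, POT is reused for the OT solve, and the plan is treated as a constant within the step so gradients flow through the cost matrix):

import numpy as np
import ot                                # POT, as in the previous sketch
import torch
import torch.nn as nn

B, D, K, lam = 32, 64, 512, 1.0
encoder = nn.Linear(784, D)              # linear stand-ins for the conv networks
decoder = nn.Linear(D, 784)
codebook = nn.Parameter(torch.randn(K, D))
opt = torch.optim.Adam(list(encoder.parameters()) +
                       list(decoder.parameters()) + [codebook], lr=1e-3)

def train_step(x):
    z = encoder(x)                       # f_e(x), shape (B, D)
    dists = torch.cdist(z, codebook)     # cost matrix D_{c,x}, shape (B, K)
    z_q = codebook[dists.argmin(dim=1)]  # nearest-codeword quantization Q_C
    z_st = z + (z_q - z).detach()        # copy-gradient trick for the decoder path
    recon = ((decoder(z_st) - x) ** 2).mean()
    with torch.no_grad():                # exact OT plan P* for Eq. (6)
        plan = ot.emd(np.full(B, 1.0 / B), np.full(K, 1.0 / K),
                      dists.detach().cpu().numpy().astype(np.float64))
    ws = (torch.as_tensor(plan).float() * dists).sum()   # <P*, D_{c,x}>
    loss = recon + lam * ws
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

train_step(torch.randn(B, 784))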
4 RELATED WORK
Variational Auto-Encoder (VAE) was first introduced by Kingma & Welling (Kingma & Welling, 2013) for learning continuous representations. However, learning discrete latent representations has
proved much more challenging because it is nearly impossible to accurately evaluate the gradients which are required to train models. To make the gradients tractable, one possible solution is to apply the Gumbel Softmax reparameterization trick (Jang et al., 2016) to VAE, which allows us to estimate stochastic gradients for updating the models. Although this technique has a low variance, it brings up a high-bias gradient estimator. Another possible solution is to employ the REINFORCE algorithm (Williams, 1992), which is unbiased but has a high variance. Additionally, the two techniques can be complementarily combined (Tucker et al., 2017).
To enable learning of discrete latent codes, VQ-VAE (Van Den Oord et al., 2017) uses a deterministic encoder/decoder and encourages the codewords to become the clustering centroids of latent representations. Additionally, the copy-gradient trick is employed in back-propagating gradients from the decoder to the encoder (Bengio, 2013). Some further works were proposed to extend VQ-VAE, notably (Roy et al., 2018; Wu & Flierl, 2020). Particularly, Roy et al. (2018) use the Expectation Maximization (EM) algorithm in the bottleneck stage to train the VQ-VAE for improving the quality of the generated images. However, to maintain the stability of this approach, a large number of samples on the latent space must be collected. Wu & Flierl (2020) impose noise on the latent codes and use a Bayesian estimator to optimize the quantizer-based representation. The introduced bottleneck Bayesian estimator outputs the posterior mean of the centroids to the decoder and performs soft quantization of the noisy latent codes, whose latent representations preserve the similarity relations of the data space. Recently, Takida et al. (2022) extend the standard VAE with stochastic quantization and a trainable posterior categorical distribution, showing that annealing the stochasticity of the quantization process significantly improves codebook utilization.
Wasserstein (WS) distance has been widely used in generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2017). Arjovsky et al. Arjovsky et al. (2017) uses a dual form of WS distance to develop Wasserstein generative adversarial network (WGAN). Later, Gulrajani et al. (2017) employs the gradient penalty trick to improve the stability of WGAN. In terms of theory development, mostly related to our work is Wasserstein Auto-Encoder (Tolstikhin et al., 2017) which aims to learn continuous latent representation preserving the characteristics of input data.
5 EXPERIMENTS
In this section, we conduct extensive experiments to show the effectiveness of our proposed method compared to other advances.
Datasets: we empirically evaluate the proposed VQ-WAE in comparison with VQ-VAE (Van Den Oord et al., 2017), the baseline method, and the recently proposed SQ-VAE (Takida et al., 2022), the state-of-the-art work on improving codebook usage, on four benchmark datasets, CIFAR10 (Van Den Oord et al., 2017), MNIST (Deng, 2012), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015), as well as the high-resolution image dataset FFHQ (Karras et al., 2019).
Implementation: For a fair comparison, we utilize the same architectures and hyper-parameters for all methods. Additionally, in the primary setting, we use the codeword (discrete latent) dimensionality of 64 and codebook size |C| = 512 for all datasets except FFHQ with codeword dimensionality of 256 and |C| = 1024, while the hyper-parameters {β, τ, λ} are specified as presented in the original papers, i.e., β = 0.25 for VQ-VAE and VQ-GAN (Esser et al., 2021), τ = 1e−5 for SQ-VAE and λ = 1 for our VQ-WAE. The details of the experimental settings are presented in Appendix C.
5.1 RESULTS ON BENCHMARK DATASETS
In order to quantitatively assess the quality of the reconstructed images, we report the results on the most common evaluation metrics, including the pixel-level peak signal-to-noise ratio (PSNR), patch-level structure similarity index (SSIM), feature-level LPIPS (Zhang et al., 2018), and dataset-level Fréchet Inception Distance (FID) (Heusel et al., 2017). We report the test-set reconstruction results on four datasets in Table 1. With regard to codebook utilization, we employ the perplexity score, defined as $e^{-\sum_{k=1}^{K} p_{c_k} \log p_{c_k}}$, where $p_{c_k} = \frac{N_{c_k}}{\sum_{i=1}^{K} N_{c_i}}$ (i.e., $N_{c_i}$ is the number of latent representations associated with the codeword $c_i$) is the probability of the k-th codeword being used. Note that, by this formula, $\text{perplexity}_{\max} = |C|$ when P(c) becomes the uniform distribution, which means that all the codewords are utilized equally by the model.
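The perplexity metric is straightforward to compute from codeword-usage counts; a minimal NumPy sketch:

import numpy as np

def codebook_perplexity(codes, K):
    """codes: integer array of codeword indices assigned to the latents."""
    counts = np.bincount(codes, minlength=K)
    p = counts / counts.sum()
    p = p[p > 0]                                   # skip unused codewords
    return np.exp(-(p * np.log(p)).sum())          # e^{-sum p log p}, max = K

codes = np.random.randint(0, 512, size=10_000)     # near-uniform usage
print(codebook_perplexity(codes, K=512))           # close to 512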
We compare VQ-WAE with VQ-VAE, SQ-VAE and VQ-GAN for image reconstruction in Table 1. All instantiations of our model significantly outperform the baseline VQ-VAE under the same compression ratio, with the same network architecture. While the latest state-of-the-art SQ-VAE holds slightly better scores for traditional pixel- and patch-level metrics, our method achieves much better rFID scores which evaluate the image quality at the dataset level. Note that our VQ-WAE significantly improves the perplexity of the learned codebook. This suggests that the proposed method significantly improves the codebook usage, resulting in better reconstruction quality. Finally, to complete the assessment, the qualitative results are visualized in Figure 4 (Appendix B).
5.2 DETAILED ANALYSIS
We run a number of ablations to analyze the properties of VQ-VAE, SQ-VAE and VQ-WAE, in order to assess if our VQ-WAE can simultaneously achieve (i) efficient codebook usage, (ii) reasonable latent representation.
5.2.1 CODEBOOK USAGE
We observe the codebook utilization of three methods with different codebook sizes {64, 128, 256, 512} on MNIST and CIFAR10 datasets. Particularly, we present the reconstruction performance for different settings in Table 2 and the histogram of latent representations over the codebook in Figure 2.
As discussed in Section 3.1 and Section 3.2, the number of used centroids reflects the capability of the latent representations. In other words, it reflects the amount of information preserved in the latent space. By explicitly defining the numbers of latent representations associated with the codewords to be uniform (i.e., fixing π in (4) as a uniform distribution) in the Wasserstein regularization term, VQ-WAE is able to maximize the information in the codebook, hence improving the reconstruction capacity. It can be seen from Figure 2 that the latent distribution of VQ-WAE over the codebook is nearly uniform and the codebook's perplexity almost reaches the optimal value (i.e., the perplexities reach the corresponding codebook sizes) in different settings. It is also observed that as the size of the codebook increases, the perplexity of the VQ-WAE codebook also increases, leading to better reconstruction performance (Table 2), in line with the analysis in (Wu & Flierl, 2018). SQ-VAE also has good codebook utilization, as its perplexity is proportional to the size of the codebook. However, its codebook utilization becomes less efficient when the codebook size becomes large, especially on a low-texture dataset (i.e., MNIST).
On the contrary, the codebook usage of VQ-VAE is less efficient, i.e., there are many zero entries in its codebook usage histogram, indicating that some codewords have never been used (Figure 2). Furthermore, Table 2 also shows the instability of VQ-VAE’s reconstruction performance with different codebook sizes.
5.2.2 VISUALIZATION OF LATENT REPRESENTATION
To better understand the codebook's representation power, we employ t-SNE (van der Maaten & Hinton, 2008) to visualize the latent representations learned by VQ-VAE, SQ-VAE and VQ-WAE on the MNIST dataset with two codebook sizes of 64 and 512. Figure 3 shows the latent distributions of the different classes in the latent space, in which the samples are colored according to their class labels. Figure 3c shows that the representations from different classes of VQ-WAE are well clustered (i.e., each class concentrates on a single cluster) and clearly separated from the other classes. In contrast, the representations of some classes in VQ-VAE and SQ-VAE are spread over several clusters and/or mixed with each other (Figure 3a,b). Moreover, the class clusters of SQ-VAE are uncondensed and tend to overlap with each other. These results suggest that the representations learned by VQ-WAE preserve the similarity relations of the data space better than those of the other models.
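For reference, a minimal sketch of this visualization protocol (with illustrative stand-in data; in practice the latents are the encoder outputs on the test set and the labels are the ground-truth classes):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Stand-ins for f_e(x) outputs and class labels (random data for illustration).
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 64))      # (N, D) latent vectors
labels = rng.integers(0, 10, size=1000)    # class labels, used only for coloring

embedded = TSNE(n_components=2, init="pca", perplexity=30).fit_transform(latents)
plt.scatter(embedded[:, 0], embedded[:, 1], c=labels, cmap="tab10", s=4)
plt.colorbar(label="class")
plt.show()
```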
5.2.3 IMAGE GENERATION
As discussed in the previous section, VQ-WAE is able to optimally utilize its codebook, leading to meaningful and diverse codewords that naturally improve image generation. To confirm this ability, we perform image generation on the benchmark datasets. Since the decoder reconstructs images directly from the discrete embeddings, we only need to model a prior distribution over the discrete latent space (i.e., the codebook) to generate images.
We employ a conventional autoregressive model, the CNN-based PixelCNN (Van den Oord et al., 2016), to estimate a prior distribution over the discrete latent space of VQ-VAE, SQ-VAE and VQ-WAE on CIFAR10, MNIST, SVHN and CelebA. The details of the generation settings are presented in Appendix C.2. The quantitative results in Table 3 indicate that the codebook of VQ-WAE leads to a better generation ability than those of VQ-VAE and SQ-VAE.
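To make the two-stage pipeline explicit, here is a minimal sketch of it (a unigram prior stands in for PixelCNN so the snippet stays self-contained; all names and shapes are illustrative, using an 8 × 8 index map and a codebook of size 512 as in the setting above):

```python
import numpy as np

# Stage-2 generation sketch: fit a prior over discrete code indices, sample a
# latent index grid, then decode. A real implementation trains PixelCNN; here
# a unigram prior stands in purely for illustration.
rng = np.random.default_rng(0)

train_codes = rng.integers(0, 512, size=(5000, 8, 8))   # stand-in for Q_C(f_e(x))
counts = np.bincount(train_codes.ravel(), minlength=512)
unigram = counts / counts.sum()                          # "prior" over the codebook

sampled_grid = rng.choice(512, size=(8, 8), p=unigram)   # sample an 8x8 index map
# image = decoder(codebook[sampled_grid])                # decode with f_d (omitted)
```

In the actual experiments, the stand-in prior is replaced by PixelCNN trained on the index maps, which captures the spatial dependencies between codes.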
6 CONCLUSION
In this paper, inspired by the nice properties and mature theory of the WS distance, which have allowed it to be applied successfully to generative models and continuous representation learning, we propose the Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), which endows a discrete distribution over the codewords and learns a deterministic decoder that transports the codeword distribution to the data distribution via minimizing a WS distance between them. We then developed a theoretical analysis to show the equivalence of this WS minimization to another OP regarding push-forwarding the data distribution to the codeword distribution, which can be realized by minimizing a WS distance between the latent representation and codeword distributions. We conduct comprehensive experiments to show that our VQ-WAE utilizes the codebook more efficiently than the baselines, hence leading to better quality of the reconstructed and generated images.
7 REPRODUCIBILITY STATEMENT
We provide the implementation of our framework in the supplementary material.
A THEORETICAL DEVELOPMENT
Given a training set $D = \{x_1, ..., x_N\} \subset \mathbb{R}^V$, we wish to learn a codebook $C = \{c_k\}_{k=1}^{K} \subset \mathbb{R}^{K \times D}$ on a latent space $\mathcal{Z}$ and an encoder that maps each data example to a codeword, preserving the insightful characteristics carried in the data. We first endow a discrete distribution over the codewords as $\mathbb{P}_{c,\pi} = \sum_{k=1}^{K} \pi_k \delta_{c_k}$ with the Dirac delta function $\delta$ and the weights $\pi \in \Delta_{K-1}$. We aim to learn a decoder function $f_d: \mathcal{Z} \to \mathcal{X}$ (i.e., mapping from the latent space $\mathcal{Z} \subset \mathbb{R}^D$ to the data space $\mathcal{X}$), the codebook $C$, and the weights $\pi$, to minimize:

$$\min_{C,\pi} \min_{f_d} \mathcal{W}_{d_x}\left(f_d \# \mathbb{P}_{c,\pi}, \mathbb{P}_x\right), \tag{7}$$

where $\mathbb{P}_x = \frac{1}{N}\sum_{n=1}^{N}\delta_{x_n}$ is the empirical data distribution and $d_x$ is a cost metric on the data space.

Lemma A.1. (Lemma 3.1 in the main paper) Let $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ be the optimal solution of the OP in Eq. (7). Assume $K < N$; then $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ are also the optimal solution of the following OP:

$$\min_{f_d} \min_{C,\pi} \min_{\sigma \in \Sigma_\pi} \sum_{n=1}^{N} d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right), \tag{8}$$

where $\Sigma_\pi$ is the set of assignment functions $\sigma: \{1, ..., N\} \to \{1, ..., K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1, ..., K$ are proportional to $\pi_k$, $k = 1, ..., K$.

Proof of Lemma A.1.
It is clear that

$$f_d \# \mathbb{P}_{c,\pi} = \sum_{k=1}^{K} \pi_k \delta_{f_d(c_k)}.$$

Therefore, we reach the following OP:

$$\min_{C,\pi} \min_{f_d} \mathcal{W}_{d_x}\left(\frac{1}{N}\sum_{n=1}^{N}\delta_{x_n}, \sum_{k=1}^{K}\pi_k\delta_{f_d(c_k)}\right). \tag{9}$$

By using the Monge definition, we have

$$\mathcal{W}_{d_x}\left(\frac{1}{N}\sum_{n=1}^{N}\delta_{x_n}, \sum_{k=1}^{K}\pi_k\delta_{f_d(c_k)}\right) = \min_{T: T\#\mathbb{P}_x = f_d\#\mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(x, T(x)\right)\right] = \frac{1}{N} \min_{T: T\#\mathbb{P}_x = f_d\#\mathbb{P}_{c,\pi}} \sum_{n=1}^{N} d_x\left(x_n, T(x_n)\right).$$

Since $T\#\mathbb{P}_x = f_d\#\mathbb{P}_{c,\pi}$, $T(x_n) = f_d(c_k)$ for some $k$. Additionally, the cardinalities $\left|T^{-1}\left(f_d(c_k)\right)\right|$, $k = 1, ..., K$ are proportional to $\pi_k$, $k = 1, ..., K$. Denoting by $\sigma: \{1, ..., N\} \to \{1, ..., K\}$ the function such that $T(x_n) = f_d\left(c_{\sigma(n)}\right)$, $\forall n = 1, ..., N$, we have $\sigma \in \Sigma_\pi$. It follows that

$$\mathcal{W}_{d_x}\left(\frac{1}{N}\sum_{n=1}^{N}\delta_{x_n}, \sum_{k=1}^{K}\pi_k\delta_{f_d(c_k)}\right) = \frac{1}{N} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^{N} d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right).$$

Finally, the optimal solution of the OP in Eq. (7) is equivalent to

$$\min_{f_d} \min_{C,\pi} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^{N} d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right),$$

which directly implies the conclusion.

Theorem A.1. (Theorem 3.1 in the main paper) We can equivalently turn the optimization problem in (7) into

$$\min_{C,\pi,f_d} \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right], \tag{10}$$

where $\bar{f}_e$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword.
Proof of Theorem A.1
We first prove that the OP of interest in (7) is equivalent to

$$\min_{C,\pi,f_d} \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right], \tag{11}$$

where $\bar{f}_e$ is a stochastic discrete encoder mapping a data example $x$ directly to a codeword. To this end, we prove that

$$\mathcal{W}_{d_x}\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right) = \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right], \tag{12}$$

where $\bar{f}_e$ is a stochastic discrete encoder mapping a data example $x$ directly to a codeword.

Let $\bar{f}_e$ be a stochastic discrete encoder such that $\bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}$ (i.e., $x \sim \mathbb{P}_x$ and $c \sim \bar{f}_e(x)$ imply $c \sim \mathbb{P}_{c,\pi}$). We consider $\gamma_{d,c}$ as the joint distribution of $(x, c)$ with $x \sim \mathbb{P}_x$ and $c \sim \bar{f}_e(x)$. We also consider $\gamma_{fc,d}$ as the joint distribution of $(x, x')$ where $x \sim \mathbb{P}_x$, $c \sim \bar{f}_e(x)$, and $x' = f_d(c)$. It follows that $\gamma_{fc,d} \in \Gamma\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right)$, i.e., it admits $f_d\#\mathbb{P}_{c,\pi}$ and $\mathbb{P}_x$ as its marginal distributions. We also have:

$$\mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right] = \mathbb{E}_{(x,c)\sim\gamma_{d,c}}\left[d_x\left(f_d(c), x\right)\right] \overset{(1)}{=} \mathbb{E}_{(x,x')\sim\gamma_{fc,d}}\left[d_x\left(x, x'\right)\right] \geq \min_{\gamma_{fc,d}\in\Gamma\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right)} \mathbb{E}_{(x,x')\sim\gamma_{fc,d}}\left[d_x\left(x, x'\right)\right] = \mathcal{W}_{d_x}\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right).$$

Note that the equality (1) holds because $(\mathrm{id}, f_d)\#\gamma_{d,c} = \gamma_{fc,d}$. Therefore, we reach

$$\min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right] \geq \mathcal{W}_{d_x}\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right).$$

Conversely, let $\gamma_{fc,d} \in \Gamma\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right)$ and let $\gamma_{fc,c} \in \Gamma\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_{c,\pi}\right)$ be the deterministic coupling such that $c \sim \mathbb{P}_{c,\pi}$ and $x' = f_d(c)$ imply $(x', c) \sim \gamma_{fc,c}$. Using the gluing lemma (see Lemma 5.5 in Santambrogio (2015)), there exists a joint distribution $\alpha \in \Gamma\left(\mathbb{P}_{c,\pi}, f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right)$ which admits $\gamma_{fc,d}$ and $\gamma_{fc,c}$ as the corresponding joint (marginal) distributions. Denoting by $\gamma_{d,c} \in \Gamma\left(\mathbb{P}_x, \mathbb{P}_{c,\pi}\right)$ the marginal distribution of $\alpha$ over $\mathbb{P}_x, \mathbb{P}_{c,\pi}$, we then have

$$\mathbb{E}_{(x,x')\sim\gamma_{fc,d}}\left[d_x\left(x, x'\right)\right] = \mathbb{E}_{(c,x',x)\sim\alpha}\left[d_x\left(x, x'\right)\right] = \mathbb{E}_{(c,x)\sim\gamma_{d,c},\, x'=f_d(c)}\left[d_x\left(x, x'\right)\right] = \mathbb{E}_{(c,x)\sim\gamma_{d,c}}\left[d_x\left(f_d(c), x\right)\right] = \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right] \geq \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right],$$

where $\bar{f}_e(x) = \gamma_{d,c}(\cdot \mid x)$. It follows that

$$\mathcal{W}_{d_x}\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right) = \min_{\gamma_{fc,d}\in\Gamma\left(f_d\#\mathbb{P}_{c,\pi}, \mathbb{P}_x\right)} \mathbb{E}_{(x,x')\sim\gamma_{fc,d}}\left[d_x\left(x, x'\right)\right] \geq \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right].$$

This completes the proof of the equality in Eq. (12), which means that the OP of interest in (7) is equivalent to

$$\min_{C,\pi,f_d} \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e(x)}\left[d_x\left(f_d(c), x\right)\right]. \tag{13}$$

We now further prove that the above OP is equivalent to

$$\min_{C,\pi,f_d} \min_{\bar{f}_e: \bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}} \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right], \tag{14}$$

where $\bar{f}_e$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword.

It is obvious that the OP in (14) is a special case of that in (13) when we restrict the search to deterministic discrete encoders. Given the optimal solution $C^{*1}$, $\pi^{*1}$, $f_d^{*1}$, and $\bar{f}_e^{*1}$ of the OP in (13), we show how to construct an optimal solution for the OP in (14). Let us construct $C^{*2} = C^{*1}$ and $f_d^{*2} = f_d^{*1}$. Given $x \sim \mathbb{P}_x$, let us denote $\bar{f}_e^{*2}(x) = \mathrm{argmin}_{c}\, d_x\left(f_d^{*2}(c), x\right)$. Thus, $\bar{f}_e^{*2}$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword. We define $\pi_k^{*2} = \Pr\left(\bar{f}_e^{*2}(x) = c_k : x \sim \mathbb{P}_x\right)$, $k = 1, ..., K$, meaning that $\bar{f}_e^{*2}\#\mathbb{P}_x = \mathbb{P}_{c^{*2},\pi^{*2}}$. From the construction of $\bar{f}_e^{*2}$, we have

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] \leq \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right].$$

Furthermore, because $C^{*2}$, $\pi^{*2}$, $f_d^{*2}$, and $\bar{f}_e^{*2}$ are also a feasible solution of the OP in (13), we have

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] \geq \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right].$$

This means that

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] = \mathbb{E}_{x\sim\mathbb{P}_x,\, c\sim\bar{f}_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right],$$

and $C^{*2}$, $\pi^{*2}$, $f_d^{*2}$, and $\bar{f}_e^{*2}$ are also an optimal solution of the OP in (14).
Recall that $\bar{f}_e$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword. To make it trainable, we replace $\bar{f}_e$ by a continuous encoder $f_e: \mathcal{X} \to \mathcal{Z}$ and arrive at the following OP:

$$\min_{C,\pi} \min_{f_d, f_e} \left\{\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(Q_C\left(f_e(x)\right)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right)\right\}, \tag{15}$$

where $Q_C\left(f_e(x)\right) = \mathrm{argmin}_{c\in C}\, d_z\left(f_e(x), c\right)$ is a quantization operator which returns the codeword closest to $f_e(x)$, and $\lambda > 0$ is a trade-off parameter.
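In code, the quantization operator $Q_C$ amounts to a nearest-neighbor lookup; a minimal PyTorch sketch (variable names are illustrative; Euclidean distance is assumed for $d_z$, and the argmin is unchanged if it is squared):

```python
import torch

def quantize(z_e, codebook):
    """Nearest-codeword quantization Q_C(f_e(x)).

    z_e: (B, D) encoder outputs; codebook: (K, D) codewords.
    Returns the quantized latents and their codeword indices.
    """
    dists = torch.cdist(z_e, codebook)  # (B, K) pairwise Euclidean distances
    indices = dists.argmin(dim=1)       # argmin_{c in C} d_z(f_e(x), c)
    return codebook[indices], indices
```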
We now propose and prove the following lemma, which is necessary for the proof of Theorem A.2.

Lemma A.2. Consider $C$, $\pi$, $f_d$, and $f_e$ as a feasible solution of the OP in (15). Let us denote $\bar{f}_e(x) = \mathrm{argmin}_{c}\, d_z\left(f_e(x), c\right) = Q_C\left(f_e(x)\right)$; then $\bar{f}_e$ is a Borel measurable function.
Proof of Lemma A.2.
We denote the set $A_k$ on the latent space as

$$A_k = \left\{z : d_z(z, c_k) < d_z(z, c_j),\ \forall j \neq k\right\} = \left\{z : Q_C(z) = c_k\right\}.$$

$A_k$ is known as a Voronoi cell w.r.t. the metric $d_z$. If we consider a continuous metric $d_z$, $A_k$ is a measurable set. Given a Borel measurable set $B$, we prove that $\bar{f}_e^{-1}(B)$ is a Borel measurable set on the data space.
Let $B \cap \{c_1, ..., c_K\} = \{c_{i_1}, ..., c_{i_m}\}$; we prove that $\bar{f}_e^{-1}(B) = \cup_{j=1}^{m} f_e^{-1}\left(A_{i_j}\right)$. Indeed, take $x \in \bar{f}_e^{-1}(B)$; then $\bar{f}_e(x) \in B$, implying that $\bar{f}_e(x) = Q_C\left(f_e(x)\right) = c_{i_j}$ for some $j = 1, ..., m$. This means that $f_e(x) \in A_{i_j}$ for some $j = 1, ..., m$. Therefore, we reach $\bar{f}_e^{-1}(B) \subset \cup_{j=1}^{m} f_e^{-1}\left(A_{i_j}\right)$.

We now take $x \in \cup_{j=1}^{m} f_e^{-1}\left(A_{i_j}\right)$. Then $f_e(x) \in A_{i_j}$ for some $j = 1, ..., m$, hence $\bar{f}_e(x) = Q_C\left(f_e(x)\right) = c_{i_j}$ for some $j = 1, ..., m$. Thus, $\bar{f}_e(x) \in B$, or equivalently $x \in \bar{f}_e^{-1}(B)$, implying $\bar{f}_e^{-1}(B) \supset \cup_{j=1}^{m} f_e^{-1}\left(A_{i_j}\right)$.

Finally, we reach $\bar{f}_e^{-1}(B) = \cup_{j=1}^{m} f_e^{-1}\left(A_{i_j}\right)$, which concludes our proof because $f_e$ is a measurable function and the $A_{i_j}$ are measurable sets.
Theorem A.2. (Theorem 3.2 in the main paper) If we seek $f_d$ and $f_e$ in a family with infinite capacity (e.g., the space of all measurable functions), the three OPs of interest in (7), (10), and (15) are equivalent.
Proof of Theorem A.2.
Given the optimal solution $C^{*1}$, $\pi^{*1}$, $f_d^{*1}$, and $f_e^{*1}$ of the OP in (15), we construct the optimal solution for the OP in (10). Let us set $C^{*2} = C^{*1}$ and $f_d^{*2} = f_d^{*1}$. We next define $\bar{f}_e^{*2}(x) = \mathrm{argmin}_{c}\, d_z\left(f_e^{*1}(x), c\right) = Q_{C^{*1}}\left(f_e^{*1}(x)\right) = Q_{C^{*2}}\left(f_e^{*1}(x)\right)$, and let $\pi^{*2}$ be the corresponding codeword proportions, i.e., $\pi_k^{*2} = \Pr\left(\bar{f}_e^{*2}(x) = c_k : x \sim \mathbb{P}_x\right)$. We prove that $C^{*2}$, $\pi^{*2}$, $f_d^{*2}$, and $\bar{f}_e^{*2}$ are an optimal solution of the OP in (10). By this definition, we have $\bar{f}_e^{*2}\#\mathbb{P}_x = \mathbb{P}_{c^{*2},\pi^{*2}}$ and hence $\mathcal{W}_{d_z}\left(\bar{f}_e^{*2}\#\mathbb{P}_x, \mathbb{P}_{c^{*2},\pi^{*2}}\right) = 0$. Therefore, we need to verify the following:

(i) $\bar{f}_e^{*2}$ is a Borel measurable function.

(ii) Given a feasible solution $C$, $\pi$, $f_d$, and $\bar{f}_e$ of (10), we have

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] \leq \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right]. \tag{16}$$

We first prove (i). It is a direct conclusion from the application of Lemma A.2 to $C^{*1}$, $\pi^{*1}$, $f_d^{*1}$, and $f_e^{*1}$.

We next prove (ii). We further derive

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(\bar{f}_e^{*2}\#\mathbb{P}_x, \mathbb{P}_{c^{*2},\pi^{*2}}\right) = \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] = \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*2}}\left(f_e^{*1}(x)\right)\right), x\right)\right] = \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] \leq \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(f_e^{*1}\#\mathbb{P}_x, \mathbb{P}_{c^{*1},\pi^{*1}}\right). \tag{17}$$

Moreover, because $\bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}$, which is a discrete distribution over the set of codewords $C$, we obtain $Q_C\left(\bar{f}_e(x)\right) = \bar{f}_e(x)$. Noting that $C$, $\pi$, $f_d$, and $\bar{f}_e$ are also a feasible solution of (15), because $\bar{f}_e$ is also a specific encoder mapping from the data space to the latent space, we achieve

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(Q_C\left(\bar{f}_e(x)\right)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(\bar{f}_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right) \geq \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(f_e^{*1}\#\mathbb{P}_x, \mathbb{P}_{c^{*1},\pi^{*1}}\right).$$

Noting that $\bar{f}_e\#\mathbb{P}_x = \mathbb{P}_{c,\pi}$ and $Q_C\left(\bar{f}_e(x)\right) = \bar{f}_e(x)$, we arrive at

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right] \geq \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda \mathcal{W}_{d_z}\left(f_e^{*1}\#\mathbb{P}_x, \mathbb{P}_{c^{*1},\pi^{*1}}\right). \tag{18}$$

Combining the inequalities in (17) and (18), we obtain Inequality (16):

$$\mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d^{*2}\left(\bar{f}_e^{*2}(x)\right), x\right)\right] \leq \mathbb{E}_{x\sim\mathbb{P}_x}\left[d_x\left(f_d\left(\bar{f}_e(x)\right), x\right)\right]. \tag{19}$$

This concludes our proof.
Corollary A.1. (Corollary 3.1 in the main paper) Consider minimizing the second term of (15), $\min_{f_e, C} \mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right)$, given $\pi$, and assume $K < N$; its optimal solution $f_e^*$ and $C^*$ are also the optimal solution of the following OP:

$$\min_{f_e, C} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^{N} d_z\left(f_e(x_n), c_{\sigma(n)}\right), \tag{20}$$

where $\Sigma_\pi$ is the set of assignment functions $\sigma: \{1, ..., N\} \to \{1, ..., K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1, ..., K$ are proportional to $\pi_k$, $k = 1, ..., K$.
Proof of Corollary A.1.
By the Monge definition, we have

$$\mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right) = \mathcal{W}_{d_z}\left(\frac{1}{N}\sum_{n=1}^{N}\delta_{f_e(x_n)}, \sum_{k=1}^{K}\pi_k\delta_{c_k}\right) = \min_{T: T\#\left(f_e\#\mathbb{P}_x\right) = \mathbb{P}_{c,\pi}} \mathbb{E}_{z\sim f_e\#\mathbb{P}_x}\left[d_z\left(z, T(z)\right)\right] = \frac{1}{N} \min_{T: T\#\left(f_e\#\mathbb{P}_x\right) = \mathbb{P}_{c,\pi}} \sum_{n=1}^{N} d_z\left(f_e(x_n), T\left(f_e(x_n)\right)\right).$$

Since $T\#\left(f_e\#\mathbb{P}_x\right) = \mathbb{P}_{c,\pi}$, $T\left(f_e(x_n)\right) = c_k$ for some $k$. Additionally, the cardinalities $\left|T^{-1}(c_k)\right|$, $k = 1, ..., K$ are proportional to $\pi_k$, $k = 1, ..., K$. Denoting by $\sigma: \{1, ..., N\} \to \{1, ..., K\}$ the function such that $T\left(f_e(x_n)\right) = c_{\sigma(n)}$, $\forall n = 1, ..., N$, we have $\sigma \in \Sigma_\pi$. It follows that

$$\mathcal{W}_{d_z}\left(\frac{1}{N}\sum_{n=1}^{N}\delta_{f_e(x_n)}, \sum_{k=1}^{K}\pi_k\delta_{c_k}\right) = \frac{1}{N} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^{N} d_z\left(f_e(x_n), c_{\sigma(n)}\right),$$

which directly implies the conclusion.
B VISUALIZATION OF RECONSTRUCTION RESULTS
Qualitative assessment: We present reconstructed samples from FFHQ (high-resolution images) for qualitative evaluation. It can be clearly seen that the high-level semantic features and colors of the input image are better preserved with VQ-WAE than with the baseline. In particular, we notice that VQ-GAN often produces repeated artifact patterns in image synthesis (see the man's hair in the second column of Figure 4) while VQ-WAE does not. This is because VQ-GAN lacks diversity in the codebook, which is further analyzed in Section 5.2.1. Consequently, the quantization operator embeds similar patches into the same quantization index and ignores the variance in these patches (e.g., VQ-GAN reconstructs the background in the third column of Figure 4 as the woman's hair).
C EXPERIMENTAL SETTINGS
C.1 VQ-MODEL
Implementation: For a fair comparison, we utilize the same framework architecture and hyper-parameters for both VQ-VAE and VQ-WAE. Specifically, we construct the VQ-VAE and VQ-WAE models as follows:
• For the CIFAR10, MNIST and SVHN datasets, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 2 residual blocks, each of which contains a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers (a PyTorch sketch of this encoder structure is given after this list).

• For the CelebA dataset, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 6 residual blocks, each of which contains a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers.
• For the high-resolution image dataset FFHQ, we utilize the well-known VQ-GAN framework (Esser et al., 2021) as the baseline.
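The encoder described above could be sketched in PyTorch as follows (a minimal illustration consistent with the description; the channel width, padding, and input channels are assumptions, not the exact released code):

```python
import torch.nn as nn

class ResBlock(nn.Module):
    # A 3x3, stride-1 convolution with ReLU followed by a 1x1 convolution,
    # wrapped in a skip connection.
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, x):
        return x + self.block(x)

def make_encoder(in_ch=3, dim=64, n_res=2):
    # Two stride-2, 4x4 convolutions with ReLU, followed by n_res residual blocks.
    layers = [
        nn.Conv2d(in_ch, dim, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(dim, dim, kernel_size=4, stride=2, padding=1), nn.ReLU(),
    ]
    layers += [ResBlock(dim) for _ in range(n_res)]
    return nn.Sequential(*layers)
```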
For VQ-WAE, we only replace the regularization module of VQ-VAE, i.e., the last two terms of its objective function, $d_z\left(\mathrm{sg}\left(f_e(x)\right), C\right) + \beta d_z\left(f_e(x), \mathrm{sg}(C)\right)$, by our proposed Wasserstein regularization $\lambda \mathcal{W}_{d_z}\left(f_e\#\mathbb{P}_x, \mathbb{P}_{c,\pi}\right)$ in Eq. (4). Additionally, we employ the POT library (Flamary et al., 2021) to compute the WS distance for simplicity. However, our VQ-WAE does not require the optimal transport map from the WS distance in (6) to update the model. Therefore, we can employ a wide range of speed-up algorithms to solve the optimization problem (OP) in (6), such as the Sinkhorn algorithm (Cuturi, 2013) or the entropic regularized dual form (Genevay et al., 2016).
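As an illustration of this replacement, the mini-batch regularizer could be computed with POT roughly as follows (a sketch under the assumption that the installed POT version accepts torch tensors in ot.emd2 and differentiates through the cost matrix; names are illustrative):

```python
import ot      # the POT library (Flamary et al., 2021)
import torch

def ws_regularizer(z_e, codebook):
    """Mini-batch term W_{d_z}(f_e # P_b, P_{c,pi}) with uniform pi = 1/K.

    z_e: (B, D) encoder outputs; codebook: (K, D) codewords. A sketch assuming
    a recent POT version whose ot.emd2 accepts torch tensors and backpropagates
    through the cost matrix M.
    """
    B, K = z_e.shape[0], codebook.shape[0]
    a = torch.full((B,), 1.0 / B, device=z_e.device)   # batch atom masses 1_B
    b = torch.full((K,), 1.0 / K, device=z_e.device)   # codeword masses 1_C
    M = torch.cdist(z_e, codebook) ** 2                # cost matrix D_{c,x}
    return ot.emd2(a, b, M)                            # exact OT cost <P*, M>
```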
Hyper-parameters: following (Takida et al., 2022), we adopt the Adam optimizer for training with a learning rate of 1e−4, a batch size of 32, an embedding dimension of 64 and a codebook size |C| = 512 for all datasets except FFHQ, which uses an embedding dimension of 256 and |C| = 1024. Finally, we train the models on CIFAR10, MNIST, SVHN and FFHQ for 100 epochs, and on CelebA for 70 epochs.
C.2 GENERATION MODEL
Implementation: It is worth noting that we employ the codebooks learned from the reported VQ models to extract codeword indices, and we employ PixelCNN (Van den Oord et al., 2016) with the same generation settings for all of VQ-VAE, SQ-VAE and VQ-WAE. In particular, we feed PixelCNN over the "pixel" values of the 8 × 8 1-channel latent space for CIFAR10, MNIST and SVHN, and over the 16 × 16 1-channel latent space for CelebA. Hyper-parameters: we adopt the Adam optimizer for training with a learning rate of 3e−4 and a batch size of 32. Finally, we train PixelCNN over the "pixel" values of the latent space for 100 epochs. | 1. What is the main contribution of the paper on extending WAE to incorporate discrete representations?
2. What are the strengths and weaknesses of the proposed approach, particularly in its optimization problems and algorithm?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What is the significance of the added constraint in the modified problem solved by Algorithm 1, and how does it relate to codebook collapse?
5. How does the proof of Theorem 3.2 fail to establish the claimed equivalence between the optimization problems (1), (2), and (4)? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper extends the Wasserstein autoencoder (WAE) framework due to Tolstikhin et al. (2017) to incorporate discrete representations as considered by the vector quantized variational autoencoder (VQ-VAE) due to van den Oord et al. (2017). It begins with a theoretical development leading to the optimization problem (4) and then proposes an approximate solution procedure in Algorithm 1. The performance of the proposed algorithm is tested on the standard benchmarks, namely CIFAR10, MNIST, SVHN, and CelebA. Comparisons are made with VQ-VAE and SQ-VAE (Takida et al., 2022), a stochastic variant of the former. While SQ-VAE exhibits better performance on the traditional pixel- and patch-level metrics, VQ-WAE has merits in dataset-level metrics such as rFID, and in the Shannon entropy of the latent codeword distribution. The latter phenomenon can be understood as VQ-WAE yielding better codebook utilization and thus preventing the codebook collapse problem.
Strengths And Weaknesses
Strengths
Application of the WAE framework to discrete representation learning is reasonably demonstrated with various evaluation metrics. The derivation of the objective function (4) from the Wasserstein distance minimization problem (1) is natural, at least intuitively.
Weaknesses
The major weakness is the gap between the theory and the algorithm. The optimization problems (1), (2) and (4), which are claimed to be equivalent to each other in Theorem 3.2, are basically about fitting a $K$-support discrete latent distribution $\sum_{k=1}^{K}\pi_k\delta_{c_k}$ to the data distribution by minimizing the Wasserstein distance between them. In particular, the mass $\pi_k$ assigned to codeword $c_k$ is an optimization variable to be fitted. However, in the actual algorithm (Algorithm 1), it is fixed as $1/K$ and only the $c_k$'s are fitted. So Algorithm 1 in fact solves a different problem from (4). This distinction is important since the so-called "codebook collapse" occurs precisely because some $\pi_k$ is zero, but it is destined to be prevented if the proportion of codewords is forced to be uniform. In fact, that the optimal transport problem (1) is equivalent to $K$-means clustering has been known at least since 1982 (Pollard, D., 1982. Quantization and the method of k-means. IEEE Transactions on Information Theory, 28(2), pp. 199-205). In light of this result, the modified problem that Algorithm 1 solves is the $K$-means clustering problem under the constraint that each (latent) cluster contains the same number of sample (latent) points. I do not think that this modified problem is necessarily better or more desirable than the original $K$-means problem (1). Suppose (1) is solved exactly and there is a codebook collapse. Then this means that strictly fewer than $K$ codewords best represent the data. Is this bad? So the case where codebook collapse becomes problematic is when the algorithm is trapped in a bad local minimum. This is a problem of the algorithm, rather than the sin of codebook underutilization. In this sense, the role of the added constraint is to regularize the algorithm to avoid bad local minima. On the other hand, Takida et al. (2022) avoids the same problem by using stochastic quantization, not by constraining cluster sizes.
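To make the contrast concrete, the following sketch (my own illustration with the POT library, not code from the paper) shows that nearest-neighbor assignment can leave codewords unused, while a transport plan with uniform marginals forces every codeword to receive mass:

```python
import numpy as np
import ot

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))            # latent points near the origin
c = rng.normal(size=(8, 2)) + 5.0        # poorly initialized codewords

# Plain nearest-neighbor assignment: several codewords may receive nothing.
nearest = np.linalg.norm(z[:, None] - c[None], axis=-1).argmin(1)
print(np.bincount(nearest, minlength=8))  # typically contains zeros

# OT plan with uniform marginals: every codeword column carries mass 1/8.
P = ot.emd(np.full(200, 1 / 200), np.full(8, 1 / 8), ot.dist(z, c))
print((P > 0).sum(0))                     # every codeword is used
```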
The proof of Theorem 3.2 is incomplete. The proof only states that (3) $\leq$ (4). The possibility of strict inequality cannot be ruled out, because the constructed variables $\left(C^{*2}, \pi^{*2}, f_d^{*2}, \bar{f}_e^{*2}\right)$ are only feasible for (3) but not necessarily optimal. Further, the measurability of $\bar{f}_e^{*2}$ needs to be proved.
The objective of Algorithm 1 is not differentiable because of the $\arg\min$ operator in Line 5 and the Wasserstein distance penalty (6). How this is put into the gradient learning pipeline is not stated. Is the gradient passing trick due to van den Oord et al. (2017) used to deal with the $\arg\min$ operator? Or the Gumbel-softmax trick? Is (6) smoothed with entropy regularization? If the answers to the first and the third questions are yes (which I presume), then the real contribution of this paper appears to be replacing the last two terms in the VQ-VAE objective with (a smoothed version of) (6); see Sect. B.1.
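For reference, the gradient passing (straight-through) trick in question is typically implemented like this (a generic sketch, not code from the paper):

```python
import torch

z_e = torch.randn(4, 64, requires_grad=True)  # encoder output f_e(x)
z_q = torch.randn(4, 64)                      # nearest codewords (no grad)

# Forward value equals z_q; in backward, gradients are copied to z_e unchanged.
z_q_st = z_e + (z_q - z_e).detach()
z_q_st.sum().backward()
print(z_e.grad.unique())                      # all ones: identity gradient
```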
Finally, I wonder what the authors' opinion is about some peaks of codewords in Fig. 2a when $|C|$ is large in the MNIST dataset.
Clarity, Quality, Novelty And Reproducibility
Clarity
As mentioned, it is unclear how Algorithm 1 is implemented with backpropagation.
Quality
This paper is written in plain English and is easy to follow most of the time. In Table 2, MSE is mentioned in the title but not included in the table.
Novelty
Theorem 3.1 is a classical result in $K$-means clustering and vector quantization. Theorem 3.2 is incorrect.
Reproducibility
Code is attached for reproducibility. |
5.1 RESULTS ON BENCHMARK DATASETS
In order to quantitatively assess the quality of the reconstructed images, we report the results on most common evaluation metrics, including the pixel-level peak signal-to-noise ratio (PSNR), patchlevel structure similarity index (SSIM), feature-level LPIPS (Zhang et al., 2018), and dataset-level Fréchlet Inception Distance (FID) (Heusel et al., 2017). We report the test-set reconstruction results on four datasets in Table 1. With regard to the codebook utilization, we employ perplexity score which is defined as e− ∑K k=1 pck log pck where pck =
Nck∑K i=1 Nci (i.e., Nci is the number of latent
representations associated with the codeword ci) is the probability of the ith codeword being used. Note that by formula, perplexitymax = |C| as P (c) becomes to the uniform distribution, which means that all the codewords are utilized equally by the model.
We compare VQ-WAE with VQ-VAE, SQ-VAE and VQ-GAN for image reconstruction in Table 1. All instantiations of our model significantly outperform the baseline VQ-VAE under the same compression ratio, with the same network architecture. While the latest state-of-the-art SQ-VAE holds slightly better scores for traditional pixel- and patch-level metrics, our method achieves much better rFID scores which evaluate the image quality at the dataset level. Note that our VQ-WAE significantly improves the perplexity of the learned codebook. This suggests that the proposed method significantly improves the codebook usage, resulting in better reconstruction quality. Finally, to complete the assessment, the qualitative results are visualized in Figure 4 (Appendix B).
5.2 DETAILED ANALYSIS
We run a number of ablations to analyze the properties of VQ-VAE, SQ-VAE and VQ-WAE, in order to assess if our VQ-WAE can simultaneously achieve (i) efficient codebook usage, (ii) reasonable latent representation.
5.2.1 CODEBOOK USAGE
We observe the codebook utilization of three methods with different codebook sizes {64, 128, 256, 512} on MNIST and CIFAR10 datasets. Particularly, we present the reconstruction performance for different settings in Table 2 and the histogram of latent representations over the codebook in Figure 2.
As discussed in Section 3.1 and Section 3.2, the number of used centroids reflects the capability of the latent representations. In other words, it represents the certain amount of information is pre-
served in the latent space. By explicitly defining the numbers of latent representations associated with the codebooks to be uniform (i.e., fixing π in (4) as a uniform distribution) in the Wasserstein regularization term, VQ-WAE is able to maximize the information in the codebooks, hence improving the reconstruction capacity. It can be seen from Figure 2 that the latent distribution of VQ-WAE over the codebook is nearly uniform and the codebook’s perplexity almost reaches the optimal value (i.e., the value of perplexities reach to corresponding codebook sizes) in different settings. It is also observed that as the size of the codebook increases, the perplexity of codebook of VQ-WAE also increases, leading to the better reconstruction performance (Table 2), in line with the analysis in (Wu & Flierl, 2018). SQ-VAE also has good codebook utilization as its perplexity is proportional to the size of the codebook. However, its codebook utilization becomes less efficient when the codebook size becomes large, especially in low texture dataset (i.e., MNIST).
On the contrary, the codebook usage of VQ-VAE is less efficient, i.e., there are many zero entries in its codebook usage histogram, indicating that some codewords have never been used (Figure 2). Furthermore, Table 2 also shows the instability of VQ-VAE’s reconstruction performance with different codebook sizes.
5.2.2 VISUALIZATION OF LATENT REPRESENTATION
To better understand the codebook’s representation power, we employ t-SNE (van der Maaten & Hinton, 2008) to visualize the latent representations that have been learned by VQ-VAE, SQ-VAE and VQ-WAE on the MNIST dataset with two codebook sizes of 64 and 512. Figure 3 shows the latent distributions of different classes in the latent space, in which the samples are colored accordingly to their class labels. Figure 3c shows that representations from different classes of VQWAE are well clustered (i.e., each class focuses on only one cluster) and clearly separated to other classes. In contrast, the representations of some classes in VQ-VAE and SQ-VAE are distributed to several clusters and or mixed to each other (Figure 3a,b). Moreover, the class-clusters of SQ-VAE are uncondensed and tend to overlap with each other. These results suggest that the representations learned by VQ-WAE can better preserve the similarity relations of the data space better than the other models.
5.2.3 IMAGE GENERATION
As discussed in the previous section, VQ-WAE is able to optimally utilize its codebook, leading to meaningful and diverse codewords that naturally improve the image generation. To confirm this ability, we perform the image generation on the benchmark datasets. Since the decoder reconstructs images directly from the discrete embeddings, we only need to model a prior distribution over the discrete latent space (i.e., codebook) to generate images.
We employ a conventional autoregressive model, the CNN-based PixelCNN (Van den Oord et al., 2016), to estimate a prior distribution over the discrete latent space of VQ-VAE, SQ-VAE and VQWAE on CIFAR10, MNIST, SVHN and CelebA. The details of generation settings are presented
in Section 3.2 of the supplementary material. The quantitative results in Table 3 indicate that the codebook of VQ-WAE leads to a better generation ability than VQ-VAE and SQ-VAE.
6 CONCLUSION
In this paper, inspired by the nice properties and mature theory of the WS distance allowing it to be applied successfully to generative models and continous representation learning, we propose Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), which endows a discrete distribution over the codewords and learns a deterministic decoder that transports the codeword distribution to the data distribution via minimizing a WS distance between them. We then developed theoretical analysis to show the equivalence of this WS minimization to another OP regarding push-forwarding the data distribution to the codeword distribution, which can be realized by minimizing a WS distance between the latent representation and codeword distributions. We conduct comprehensive experiments to show that our VQ-WAE utilizes the codebooks more efficiently than the baselines, hence leading to better reconstructed and generated image quality.
7 REPRODUCIBILITY STATEMENT
We provide the implementation of our framework in the supplementary material.
A THEORETICAL DEVELOPMENT
Given a training set D = {x1, ..., xN} ⊂ RV , we wish to learn a set of codebooks C = {ck}Kk=1 ∈ RK×D on a latent space Z and an encoder to map each data example to a given codebook, preserving insightful characteristics carried in data. We first endow a discrete distribution over the codebooks as Pc,π = ∑K k=1 πkδck with the Dirac delta function δ and the weights π ∈ ∆K . We aim to learn a decoder function fd : Z → X (i.e., mapping from the latent space Z ⊂ RD to the data space X ), the codebooks C, and the weights π, to minimize:
min C,π min fd Wdx (fd#Pc,π,Px) , (7)
where Px = 1N ∑N
n=1 δxn is the empirical data distribution and dx is a cost metric on the data space. Lemma A.1. (Lemma 3.1 in the main paper) Let C∗ = {c∗k}k , π
∗, and f∗d be the optimal solution of the OP in Eq. (7). Assume K < N , then C∗ = {c∗k}k , π
∗, and f∗d are also the optimal solution of the following OP:
min fd min C,π min σ∈Σπ N∑ n=1 dx ( xn, fd ( cσ(n) )) , (8)
where Σπ is the set of assignment functions σ : {1, ..., N} → {1, ...,K} such that the cardinalities∣∣σ−1 (k)∣∣ , k = 1, ...,K are proportional to πk, k = 1, ...,K. Proof of Lemma A.1
It is clear that
fd#Pc,π = K∑
k=1
πkδfd(ck).
Therefore, we reach the following OP:
min C,π min fd Wdx
( 1
N N∑ n=1 δxn , K∑ k=1 πkδfd(ck)
) . (9)
By using the Monge definition, we have
Wdx
( 1
N N∑ n=1 δxn , K∑ k=1 πkδfd(ck)
) = min
T :T#Px=fd#Pc,π Ex∼Px [dx (x, T (x))]
= 1
N min T :T#Px=fd#Pc,π N∑ n=1 dx (xn, T (xn)) .
Since T#Px = fd#Pc,π , T (xn) = fd (ck) for some k. Additionally, ∣∣T−1 (fd(ck))∣∣ , k = 1, ...,K are proportional to πk, k = 1, ...,K. Denote σ : {1, ..., N} → {1, ...,K} such that T (xn) = fd(cσ(n)),∀i = 1, ..., N , we have σ ∈ Σπ . It follows that
Wdx
( 1
N N∑ n=1 δxn , K∑ k=1 πkδfd(ck)
) = 1
N min σ∈Σπ N∑ n=1 dx ( xn, fd ( cσ(n) )) .
Finally, the the optimal solution of the OP in Eq. (7) is equivalent to
min fd min C,π min σ∈Σπ N∑ n=1 dx ( xn, fd ( cσ(n) )) ,
which directly implies the conclusion. Theorem A.1. (Theorem 3.1 in the main paper) We can equivalently turn the optimization problem in (7) to
min C,π,fd min f̄e:f̄e#Px=Pc,π
Ex∼Px [ dx ( fd ( f̄e (x) ) , x )] , (10)
where f̄e is a deterministic discrete encoder mapping data example x directly to the codebooks.
Proof of Theorem A.1
We first prove that the OP of interest in (7) is equivalent to
min C,π,fd min f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] , (11)
where f̄e is a stochastic discrete encoder mapping data example x directly to the codebooks. To this end, we prove that
Wdx (fd#Pc,π,Px) = min f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] , (12)
where f̄e is a stochastic discrete encoder mapping data example x directly to the codebooks.
Let f̄e be a stochastic discrete encoder such that f̄e#Px = Pc,π (i.e., x ∼ Px and c ∼ f̄e (x) implies c ∼ Pc,π). We consider γd,c as the joint distribution of (x, c) with x ∼ Px and c ∼ f̄e (x). We also consider γfc,d as the joint distribution including (x, x′) ∼ γfc,d where x ∼ Px,c ∼ f̄e (x), and x′ = fd (c). This follows that γfc,d ∈ Γ (fd#Pc,π,Px) which admits fd#Pc,π and Px as its marginal distributions. We also have:
Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] = E(x,c)∼γd,c [dx (fd (c) , x)] (1) = E(x,x′)∼γfc,d [dx (x, x ′)]
≥ min γfc,d∈Γ(fd#Pc,π,Px) E(x,x′)∼γfc,d [dx (x, x ′)]
= Wdx (fd#Pc,π,Px) .
Note that we have the equality in (1) due to (id, fd)#γd,c = γfc,d.
Therefore, we reach
min f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] ≥ Wdx (fd#Pc,π,Px) .
Let γfc,d ∈ Γ (fd#Pc,π,Px). Let γfc,c ∈ Γ (fd#Pc,π,Pc,π) be a deterministic coupling such that c ∼ Pc,π and x = fd (c) imply (c, x) ∼ γc,fc. Using the gluing lemma (see Lemma 5.5 in Santambrogio (2015)), there exists a joint distribution α ∈ Γ (Pc,π, fd#Pc,π,Px) which admits γfc,d and γfc,c as the corresponding joint distributions. By denoting γd,c ∈ Γ (Px,Pc,π) as the marginal distribution of α over Px,Pc,π , we then have
E(x,x′)∼γfc,d [dx (x, x ′)] = E(c,x′,x)∼α [dx (x, x′)] = E(c,x)∼γd,c,x′=fd(c) [dx (x, x ′)]
= E(c,x)∼γd,c [dx (fd (c) , x)] = Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] . ≥ min
f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] ,
where f̄e(x) = γd,c(· | x). This follows that
Wdx (fd#Pc,π,Px) = min γfc,d∈Γ(fd#Pc,π,Px) E(x,x′)∼γfc,d [dx (x, x ′)]
≥ min f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] .
This completes the proof for the equality in Eq. (12), which means that the OP of interest in (7) is equivalent to
min C,π,fd min f̄e:f̄e#Px=Pc,π Ex∼Px,c∼f̄e(x) [dx (fd (c) , x)] , (13)
We now further prove the above OP is equivalent to
min C,π,fd min f̄e:f̄e#Px=Pc,π
Ex∼Px [ dx ( fd ( f̄e (x) ) , x )] , (14)
where f̄e is a deterministic discrete encoder mapping data example x directly to the codebooks.
It is obvious that the OP in (14) is special case of that in (13) when we limit to search for deterministic discrete encoders. Given the optimal solution C∗1, π∗1, f∗1d , and f̄ ∗1 e of the OP in (13), we show how to construct the optimal solution for the OP in (14). Let us construct C∗2 = C∗1, f∗2d = f ∗1 d . Given x ∼ Px, let us denote f̄∗2e (x) = argmincdx ( f∗2d (c) , x ) . Thus, f̄∗2e is a deterministic discrete encoder mapping data example x directly to the codebooks. We define π∗2k = Pr ( f̄∗2e (x) = ck : x ∼ Px ) , k = 1, ...,K, meaning that f̄∗2e #Px = Pc∗2,π∗2 . From the construction of f̄∗2e , we have
Ex∼Px [ dx ( f∗2d ( f̄∗2e (x) ) , x )] ≤ Ex∼Px,c∼f̄∗1e (x) [ dx ( f∗1d (c) , x )] .
Furthermore, because C∗2, π∗2, f∗2d , andf̄ ∗2 e are also a feasible solution of the OP in (14), we have Ex∼Px [ dx ( f∗2d ( f̄∗2e (x) ) , x )] ≥ Ex∼Px,c∼f̄∗1e (x) [ dx ( f∗1d (c) , x )] .
This means that Ex∼Px [ dx ( f∗2d ( f̄∗2e (x) ) , x )] = Ex∼Px,c∼f̄∗1e (x) [ dx ( f∗1d (c) , x )] ,
and C∗2, π∗2, f∗2d , andf̄ ∗2 e are also the optimal solution of the OP in (14).
Additionally, $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to the codewords. To make it trainable, we replace $\bar f_e$ by a continuous encoder $f_e : \mathcal{X} \to \mathcal{Z}$ and arrive at the following OP:
$$\min_{C,\pi}\; \min_{f_d, f_e} \left\{ \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(Q_C\left(f_e(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right) \right\}, \tag{15}$$
where $Q_C\left(f_e(x)\right) = \operatorname{argmin}_{c\in C}\, d_z\left(f_e(x), c\right)$ is a quantization operator which returns the codeword closest to $f_e(x)$, and $\lambda > 0$ is a trade-off parameter.
We now propose and prove the following lemma, which is necessary for the proof of Theorem A.2.
Lemma A.2. Consider $C, \pi, f_d$, and $f_e$ as a feasible solution of the OP in (15). Let us denote $\bar f_e(x) = \operatorname{argmin}_{c}\, d_z\left(f_e(x), c\right) = Q_C\left(f_e(x)\right)$; then $\bar f_e$ is a Borel measurable function.
Proof of Lemma A.2.
We denote the set $A_k$ on the latent space as
$$A_k = \left\{z : d_z(z, c_k) < d_z(z, c_j),\; \forall j \ne k\right\} = \left\{z : Q_C(z) = c_k\right\}.$$
$A_k$ is known as a Voronoi cell w.r.t. the metric $d_z$. If we consider a continuous metric $d_z$, $A_k$ is a measurable set. Given a Borel measurable set $B$, we prove that $\bar f_e^{-1}(B)$ is a Borel measurable set on the data space.
Let $B \cap \{c_1, \dots, c_K\} = \{c_{i_1}, \dots, c_{i_m}\}$; we prove that $\bar f_e^{-1}(B) = \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$. Indeed, take $x \in \bar f_e^{-1}(B)$; then $\bar f_e(x) \in B$, implying that $\bar f_e(x) = Q_C(f_e(x)) = c_{i_j}$ for some $j = 1,\dots,m$. This means that $f_e(x) \in A_{i_j}$ for some $j = 1,\dots,m$. Therefore, we reach $\bar f_e^{-1}(B) \subset \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$.
We now take $x \in \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$. Then $f_e(x) \in A_{i_j}$ for some $j = 1,\dots,m$, hence $\bar f_e(x) = Q_C(f_e(x)) = c_{i_j}$ for some $j = 1,\dots,m$. Thus, $\bar f_e(x) \in B$, or equivalently $x \in \bar f_e^{-1}(B)$, implying $\bar f_e^{-1}(B) \supset \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$.
Finally, we reach $\bar f_e^{-1}(B) = \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$, which concludes our proof because $f_e$ is a measurable function and the $A_{i_j}$ are measurable sets.
Theorem A.2. (Theorem 3.2 in the main paper) If we seek fd and fe in a family with infinite capacity (e.g., the space of all measurable functions), the three OPs of interest in (7, 10, and 15) are equivalent.
Proof of Theorem A.2.
Given the optimal solution $C^{*1}, \pi^{*1}, f_d^{*1}$, and $f_e^{*1}$ of the OP in (15), we construct the optimal solution for the OP in (10). Let us construct $C^{*2} = C^{*1}$ and $f_d^{*2} = f_d^{*1}$. We next define $\bar f_e^{*2}(x) = \operatorname{argmin}_{c}\, d_z\left(f_e^{*1}(x), c\right) = Q_{C^{*1}}\left(f_e^{*1}(x)\right) = Q_{C^{*2}}\left(f_e^{*1}(x)\right)$, and let $\pi^{*2}$ be the weights induced by this encoder, i.e., $\pi_k^{*2} = \Pr\left(\bar f_e^{*2}(x) = c_k^{*2} : x\sim P_x\right)$. We prove that $C^{*2}, \pi^{*2}, f_d^{*2}$, and $\bar f_e^{*2}$ are an optimal solution of the OP in (10). By this definition, we have $\bar f_e^{*2}\#P_x = P_{c^{*2},\pi^{*2}}$ and hence $W_{d_z}\left(\bar f_e^{*2}\#P_x, P_{c^{*2},\pi^{*2}}\right) = 0$. Therefore, we need to verify the following:
(i) $\bar f_e^{*2}$ is a Borel-measurable function.
(ii) Given a feasible solution $C, \pi, f_d$, and $\bar f_e$ of (10), we have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right]. \tag{16}$$
We first prove (i). It follows directly from applying Lemma A.2 to $C^{*1}, \pi^{*1}, f_d^{*1}$, and $f_e^{*1}$.
We next prove (ii). We further derive
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] + \lambda W_{d_z}\left(\bar f_e^{*2}\#P_x, P_{c^{*2},\pi^{*2}}\right) = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*2}}\left(f_e^{*1}(x)\right)\right), x\right)\right] = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right). \tag{17}$$
Moreover, because $\bar f_e\#P_x = P_{c,\pi}$, which is a discrete distribution over the set of codewords $C$, we obtain $Q_C(\bar f_e(x)) = \bar f_e(x)$. Note that $C, \pi, f_d$, and $\bar f_e$ are also a feasible solution of (15), because $\bar f_e$ is also a specific encoder mapping from the data space to the latent space; we thus have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(Q_C\left(\bar f_e(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(\bar f_e\#P_x, P_{c,\pi}\right) \ge \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right).$$
Noting that $\bar f_e\#P_x = P_{c,\pi}$ and $Q_C(\bar f_e(x)) = \bar f_e(x)$, we arrive at
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right] \ge \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right). \tag{18}$$
Combining the inequalities in (17) and (18), we obtain Inequality (16):
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right]. \tag{19}$$
This concludes our proof.
Corollary A.1. (Corollary 3.1 in the main paper) Consider minimizing the second term, $\min_{f_e, C} W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$, in (15) given $\pi$, and assume $K < N$. Its optimal solution $f_e^*$ and $C^*$ are also the optimal solution of the following OP:
$$\min_{f_e, C}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_z\left(f_e(x_n), c_{\sigma(n)}\right), \tag{20}$$
where $\Sigma_\pi$ is the set of assignment functions $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$.
Proof of Corollary A.1.
By the Monge definition, we have
$$W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right) = W_{d_z}\left(\frac{1}{N}\sum_{n=1}^N \delta_{f_e(x_n)}, \sum_{k=1}^K \pi_k \delta_{c_k}\right) = \min_{T:\, T\#(f_e\#P_x) = P_{c,\pi}} \mathbb{E}_{z\sim f_e\#P_x}\left[d_z(z, T(z))\right] = \frac{1}{N} \min_{T:\, T\#(f_e\#P_x) = P_{c,\pi}} \sum_{n=1}^N d_z\left(f_e(x_n), T(f_e(x_n))\right).$$
Since $T\#(f_e\#P_x) = P_{c,\pi}$, $T(f_e(x_n)) = c_k$ for some $k$. Additionally, $\left|T^{-1}(c_k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$. Denoting $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that $T(f_e(x_n)) = c_{\sigma(n)}$, $\forall n = 1,\dots,N$, we have $\sigma \in \Sigma_\pi$. It also follows that
$$W_{d_z}\left(\frac{1}{N}\sum_{n=1}^N \delta_{f_e(x_n)}, \sum_{k=1}^K \pi_k \delta_{c_k}\right) = \frac{1}{N} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_z\left(f_e(x_n), c_{\sigma(n)}\right).$$
B VISUALIZATION OF RECONSTRUCTION RESULTS
Qualitative assessment: We present reconstructed samples from FFHQ (high-resolution images) for qualitative evaluation. It can be clearly seen that the high-level semantic features and colors of the input image are better preserved with VQ-WAE than with the baseline. Particularly, we notice that VQ-GAN often produces repeated artifact patterns in image synthesis (see the man's hair in the second column of Figure 4) while VQ-WAE does not. This is because VQ-GAN lacks diversity in the codebook, which is further analyzed in Section 5.2.1. Consequently, its quantization operator embeds similar patches into the same quantization index and ignores the variance in these patches (e.g., VQ-GAN reconstructs the background in the third column of Figure 4 as the woman's hair).
C EXPERIMENTAL SETTINGS
C.1 VQ-MODEL
Implementation: For fair comparison, we utilize the same framework architecture and hyperparameters for both VQ-VAE and VQ-WAE. Specifically, we construct the VQ-VAE and VQ-WAE models as follows:
• For the CIFAR10, MNIST and SVHN datasets, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 2 residual blocks, each containing a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers (a PyTorch sketch of this layout follows the list).
• For the CelebA dataset, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 6 residual blocks, each containing a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers.
• For the high-quality image dataset FFHQ, we utilize the well-known VQGAN framework (Esser et al., 2021) as the baseline.
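A minimal PyTorch sketch of the encoder/decoder layout described above is given below. The hidden width (256), the paddings, and the 1 × 1 projections to and from the codeword dimension are our assumptions for illustration; they are not specified in the text above. For CelebA, n_res=6 would be used instead of 2.

```python
import torch.nn as nn

class ResBlock(nn.Module):
    """3x3 stride-1 conv with ReLU, followed by a 1x1 conv, with a skip connection."""
    def __init__(self, dim):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=3, stride=1, padding=1),
            nn.ReLU(),
            nn.Conv2d(dim, dim, kernel_size=1),
        )

    def forward(self, x):
        return x + self.block(x)

def make_encoder(in_ch=3, hidden=256, z_dim=64, n_res=2):
    # Two stride-2, 4x4 convolutions with ReLU, then n_res residual blocks,
    # then an (assumed) 1x1 projection to the codeword dimension z_dim.
    return nn.Sequential(
        nn.Conv2d(in_ch, hidden, kernel_size=4, stride=2, padding=1),
        nn.ReLU(),
        nn.Conv2d(hidden, hidden, kernel_size=4, stride=2, padding=1),
        *[ResBlock(hidden) for _ in range(n_res)],
        nn.Conv2d(hidden, z_dim, kernel_size=1),
    )

def make_decoder(out_ch=3, hidden=256, z_dim=64, n_res=2):
    # Mirror of the encoder: residual blocks, then two deconvolutional layers.
    return nn.Sequential(
        nn.Conv2d(z_dim, hidden, kernel_size=1),
        *[ResBlock(hidden) for _ in range(n_res)],
        nn.ConvTranspose2d(hidden, hidden, kernel_size=4, stride=2, padding=1),
        nn.ReLU(),
        nn.ConvTranspose2d(hidden, out_ch, kernel_size=4, stride=2, padding=1),
    )
```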
We only replace the regularization module of VQ-VAE, i.e., the last two terms of the objective function, $d_z\left(\mathrm{sg}\left(f_e(x)\right), C\right) + \beta\, d_z\left(f_e(x), \mathrm{sg}(C)\right)$, by our proposed Wasserstein regularization $\lambda W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$ in Eq. (4) for VQ-WAE. Additionally, we employ the POT library (Flamary et al., 2021) to compute the WS distance for simplicity. However, our VQ-WAE does not require the optimal transport map from the WS distance in (6) to update the model. Therefore, we can employ a wide range of speed-up algorithms to solve the optimization problem (OP) in (6), such as the Sinkhorn algorithm (Cuturi, 2013) or the entropic regularized dual form (Genevay et al., 2016).
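A minimal sketch of this regularizer with POT is shown below. The squared-Euclidean ground cost and the entropic regularization strength `reg` are illustrative assumptions, and a recent POT release whose backend accepts PyTorch tensors is assumed so that the returned cost is differentiable with respect to the cost matrix; otherwise, inputs would need converting to NumPy.

```python
import torch
import ot  # POT: Python Optimal Transport (Flamary et al., 2021)

def ws_regularizer(z, codebook, use_sinkhorn=False, reg=0.05):
    """W_{d_z}(f_e#P_b, P_{c,pi}) between batch latents z (B, D) and codewords (K, D)."""
    B, K = z.shape[0], codebook.shape[0]
    a = torch.full((B,), 1.0 / B, device=z.device)   # uniform mass on batch latents
    b = torch.full((K,), 1.0 / K, device=z.device)   # uniform pi over codewords
    M = ot.dist(z, codebook)                         # pairwise cost (assumed squared L2)
    if use_sinkhorn:
        return ot.sinkhorn2(a, b, M, reg)            # entropic speed-up (Cuturi, 2013)
    return ot.emd2(a, b, M)                          # exact WS cost
```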
Hyper-parameters: Following (Takida et al., 2022), we adopt the Adam optimizer for training, with a learning rate of 1e−4, a batch size of 32, an embedding dimension of 64, and a codebook size $|C| = 512$ for all datasets except FFHQ, for which we use an embedding dimension of 256 and $|C| = 1024$. Finally, we train the model for 100 epochs on CIFAR10, MNIST, SVHN and FFHQ, and for 70 epochs on CelebA.
C.2 GENERATION MODEL
Implementation: It is worth noting that we employ the codebooks learned from the reported VQ-models to extract codeword indices, and we employ PixelCNN (Van den Oord et al., 2016) with the same setting for generation for all of VQ-VAE, SQ-VAE and VQ-WAE. In particular, we feed PixelCNN over the "pixel" values of the 8 × 8 1-channel latent space for CIFAR10, MNIST and SVHN, and the 16 × 16 1-channel latent space for CelebA. Hyper-parameters: We adopt the Adam optimizer for training, with a learning rate of 3e−4 and a batch size of 32. Finally, we train PixelCNN over the "pixel" values of the latent space for 100 epochs. | 1. What is the main contribution of the paper regarding vector quantization?
2. What are the strengths and weaknesses of the proposed approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions regarding the paper's organization, writing style, and technical details? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a vector quantized autoencoder that minimizes the Wasserstein distance between the encoded samples and the code word. In short, the loss function is simply the quantized autoencoding loss plus a penalty term that measures the Wasserstein distance between the empirical distribution of the encoded samples and the empirical distribution of the code words. The authors then evaluate their approach on the generative modeling for CIFAR10, MNIST, SVHN, and CelebA datasets showing competitive performance compared to VQ-VAE and SQ-VAEs.
Strengths And Weaknesses
Strengths
The idea of using Wasserstein distance for vector quantization is interesting.
Weaknesses
The paper is not well-written, and its flow would benefit significantly from rewriting.
The paper's novelty is limited to applying the Wasserstein distance in the latent space of an autoencoder with a moving sparse target distribution (i.e., the codeword distribution).
Given the deterministic nature of both the encoder and decoder and applying the Wasserstein distance in the embedding space, the paper is more closely related to the Sliced-Wasserstein Auto-Encoders (SWAE), Kolouri, et al. 2018, as opposed to the WAE that uses probabilistic encoder/decoders.
The choice of uniform π's is not well justified in the paper. Also, the authors may want to use a different symbol for π, because π is often used to denote the `transport plan' in the optimal transport-related literature and in calculation of the Wasserstein distance.
The stop gradient used in VQ-VAE could also be used in your formulation. In particular, if you freeze the encoder, the optimal codeword could be directly obtained from the barycentric projection of the transportation plan. Some discussions on these aspects of the training could improve the quality of the paper.
Eq (3) is missing a $f_d$ for the inner minimization.
The paragraphs following Eq (3) state $\bar{f}_e$ is deterministic repeatedly. There are several instances of repeated reworded statements throughout the paper, which flags that the paper could use significant polishing to improve the flow and communicate the concepts more concisely.
Clarity, Quality, Novelty And Reproducibility
The paper could be reorganized and rewritten to increase clarity and conciseness.
The proposed approach lacks novelty and originality. Moreover, the various choices, like dropping the stop gradient or assuming that the codewords have uniform weights, are not very well justified in the paper. |
ICLR | Title
Vector Quantized Wasserstein Auto-Encoder
Abstract
Learning deep discrete latent representations offers the promise of better symbolic and summarized abstractions that are more useful to subsequent downstream tasks. Recent work on Vector Quantized Variational Auto-Encoder (VQ-VAE) has made substantial progress in this direction. However, VQ-VAE quantizes latent representations using the online k-means algorithm, which suffers from poor initialization and non-stationary clusters. To strengthen the clustering quality of the latent representations, we propose Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), intuitively developed based on the clustering viewpoint of the Wasserstein (WS) distance. Specifically, we endow a discrete distribution over the codewords and learn a deterministic decoder that transports the codeword distribution to the data distribution via minimizing a WS distance between them. We develop further theories to connect this with the clustering viewpoint of the WS distance, allowing us to obtain a better and more controllable clustering solution. Finally, we empirically evaluate our method on several well-known benchmarks, where it achieves better qualitative and quantitative performance than the baselines in terms of codebook utilization and image reconstruction/generation.
1 INTRODUCTION
Learning compact yet expressive representations from large-scale and high-dimensional unlabeled data is an important and long-standing task in machine learning (Kingma & Welling, 2013; Chen et al., 2020; Chen & He, 2021; Zoph et al., 2020). Among many different kinds of methods, Variational Auto-Encoder (VAE) (Kingma & Welling, 2013) and its variants (Tolstikhin et al., 2017; Alemi et al., 2016; Higgins et al., 2016; Voloshynovskiy et al., 2019) have shown great success in unsupervised representation learning. Although these continuous representation learning methods have been applied successfully to various problems ranging from images (Pathak et al., 2016; Goodfellow et al., 2014; Kingma et al., 2016) to video and audio (Reed et al., 2017; Oord et al., 2016; Kalchbrenner et al., 2017), in some contexts input data are more naturally modeled and encoded as discrete symbols rather than continuous ones. For example, discrete representations are a natural fit for complex reasoning, planning and predictive learning (Van Den Oord et al., 2017). This motivates the need for learning discrete representations, preserving the insightful characteristics of input data.
Vector Quantization Variational Auto-Encoder (VQ-VAE) (Van Den Oord et al., 2017) is a pioneering generative model which successfully combines the VAE framework with discrete latent representations. In particular, vector quantized models learn a compact discrete representation using a deterministic encoder-decoder architecture in the first stage, and subsequently apply this highly compressed representation to various downstream tasks, with examples including image generation (Esser et al., 2021), cross-modal translation (Kim et al., 2022), and image recognition (Yu et al., 2021). While VQ-VAE has been widely applied to representation learning in many areas (Henter et al., 2018; Baevski et al., 2020; Razavi et al., 2019; Kumar et al., 2019; Dieleman et al., 2018; Yan et al., 2021), it is known to suffer from codebook collapse, i.e., low codebook usage: most of the embedded latent vectors are quantized to just a few discrete codewords, while the other codewords are rarely used, or dead, due to the poor initialization of the codebook, reducing the information capacity of the bottleneck (Roy et al., 2018; Takida et al., 2022; Yu et al., 2021).
To mitigate this issue, additional training heuristics were proposed, such as the exponential moving average (EMA) update (Van Den Oord et al., 2017; Razavi et al., 2019), soft expectation maximization (EM) update (Roy et al., 2018), codebook reset (Dhariwal et al., 2020; Williams et al., 2020). Notably, soft expectation maximization (EM) update (Roy et al., 2018) connects the EMA update
with an EM algorithm and softens the EM algorithm with a stochastic posterior. Codebook reset randomly reinitializes unused/low-used codewords to one of the encoder outputs (Dhariwal et al., 2020) or to those near codewords of high usage (Williams et al., 2020). Takida et al. (2022) suspects that deterministic quantization is the cause of codebook collapse and extends the standard VAE with stochastic quantization and a trainable posterior categorical distribution, showing that annealing the stochasticity of the quantization process significantly improves the codebook utilization.
Additionally, the WS distance has been applied successfully to generative models and continuous representation learning (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2017), owing to its nice properties and rich theory. It is natural to ask: "Can we take advantage of the intuitive properties of the WS distance and its mature theory for learning highly compact yet expressive discrete representations?" Toward this question, in this paper we develop solid theories connecting the theory bodies and viewpoints of the WS distance, generative models, and deep discrete representation learning. In particular, (a) we first endow a discrete distribution over the codebook and propose learning a deterministic decoder transporting the codeword distribution to the data distribution via minimizing the WS distance between them; (b) to devise a trainable algorithm, we develop Theorem 3.1 to equivalently turn the above WS minimization into push-forwarding the data distribution to the codeword distribution, via minimizing a WS distance between the latent representation and codeword distributions; (c) more interestingly, our Corollary 3.1 proves that, when minimizing the WS distance between the latent representation and codeword distributions, the codewords tend to flexibly move to the clustering centroids of the latent representations, with a control on the proportion of latent representations associated with a centroid. We argue and empirically demonstrate that, by using the clustering viewpoint of a WS distance to learn the codewords, we can obtain more controllable and better centroids than using a simple k-means as in VQ-VAE (cf. Sections 3.1 and 5.2).
Our method, called Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), applies the WS distance to learn a more controllable codebook, hence leading to an improvement in codebook utilization. We conduct comprehensive experiments to demonstrate our key contributions by comparing with VQ-VAE (Van Den Oord et al., 2017) and SQ-VAE (Takida et al., 2022) (i.e., recent work that improves codebook utilization). The experimental results show that our VQ-WAE achieves better codebook utilization with higher codebook perplexity, hence leading to lower (compared with VQ-VAE) or comparable (compared with SQ-VAE) reconstruction error, with a significantly lower reconstruction Fréchet Inception Distance (FID) score (Heusel et al., 2017). Generally, a better quantizer in stage 1 naturally contributes to stage-2 downstream tasks (Yu et al., 2021; Zheng et al., 2022). To further demonstrate this, we conduct comprehensive experiments on four benchmark datasets for both unconditional and class-conditional generation tasks. The experimental results indicate that, from the codebooks of our VQ-WAE, we can generate better images with lower FID scores.
2 VECTOR QUANTIZED VARIATIONAL AUTO-ENCODER
Given a training set $D = \{x_1, \dots, x_N\} \subset \mathbb{R}^V$, VQ-VAE (Van Den Oord et al., 2017) aims at learning a codebook, formed by a set of codewords $C = [c_k]_{k=1}^K \in \mathbb{R}^{K\times D}$ on the latent space $\mathcal{Z} \subset \mathbb{R}^D$, an encoder $f_e$ to map the data examples to the codewords, and a decoder $f_d$ (i.e., $q(x\mid z)$) to accurately reconstruct the data examples from the codewords. Given a data example $x$, the encoder $f_e$ (i.e., $p(z\mid x)$) associates $x$ with the codeword $\bar f_e(x) = c$ defined as
$$c = \operatorname{argmin}_{c_k}\, d_z\left(f_e(x), c_k\right),$$
where dz is a metric on the latent space.
The objective function of VQ-VAE is as follows:
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right) + d_z\left(\mathrm{sg}\left(f_e(x)\right), C\right) + \beta\, d_z\left(f_e(x), \mathrm{sg}(C)\right)\right],$$
where $P_x = \frac{1}{N}\sum_{n=1}^N \delta_{x_n}$ is the empirical data distribution, $\mathrm{sg}$ specifies the stop-gradient operator, $d_x$ is a cost metric on the data space, $\beta$ is set between 0.1 and 2.0 (Van Den Oord et al., 2017), and $d_z(f_e(x), C) = \sum_{c\in C} d_z(f_e(x), c)$.
The purpose of VQ-VAE training is to form the latent representations in clusters and adjust the codewords to be the centroids of these clusters.
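As a point of reference for the next section, the following is a minimal PyTorch sketch of this objective in its standard VQ-VAE form, where the codebook and commitment terms use only the selected codeword (the summed-over-C definition of $d_z(f_e(x), C)$ stated above would replace the two `mse_loss` targets accordingly). The flattened $(B, D)$ latent shape and squared-Euclidean $d_z$ are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vq_vae_loss(x, f_e, f_d, codebook, beta=0.25):
    """codebook: (K, D) tensor of codewords; latents flattened to (B, D) for clarity."""
    z_e = f_e(x)                                    # encoder output f_e(x)
    idx = torch.cdist(z_e, codebook).argmin(dim=1)  # nearest-codeword assignment
    z_q = codebook[idx]                             # quantized latents \bar{f}_e(x)
    z_st = z_e + (z_q - z_e).detach()               # straight-through (copy-gradient) trick
    recon = F.mse_loss(f_d(z_st), x)                # d_x(f_d(\bar{f}_e(x)), x)
    codebook_loss = F.mse_loss(z_q, z_e.detach())   # d_z(sg(f_e(x)), c)
    commit_loss = F.mse_loss(z_e, z_q.detach())     # beta * d_z(f_e(x), sg(c))
    return recon + codebook_loss + beta * commit_loss
```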
3 CONTROLLABLE CODEBOOKS WITH WASSERSTEIN QUANTIZATION
We present the theoretical development of our VQ-WAE framework which connects the viewpoints of the WS distance, generative models, and deep discrete representation learning in Section 3.1. Specifically, we propose to learn a ”deterministic decoder transporting the codeword to data distributions” via minimizing the WS distance between them (Figure 1a (Top)). We then turn the above WS minimization to push-forwarding the data to codeword distribution via minimizing a WS distance between ”the latent representation and codeword distributions” (Figure 1a (Bottom)). We prove that when minimizing the WS distance between the latent representation and codeword distributions, the codewords tend to flexibly move to the clustering centroids of the latent representations with a control on the proportion of latent representations associated with a centroid (Figure 1b). Based on the theoretical development, we devise a practical algorithm for VQ-WAE in Section 3.2.
3.1 THEORETICAL DEVELOPMENT
Given a training set $D = \{x_1, \dots, x_N\} \subset \mathbb{R}^V$, we wish to learn a codebook $C = \{c_k\}_{k=1}^K \subset \mathbb{R}^{K\times D}$ on a latent space $\mathcal{Z}$ and an encoder to map each data example to a given codebook, preserving insightful characteristics carried in the data. We first endow a discrete distribution over the codewords as $P_{c,\pi} = \sum_{k=1}^K \pi_k \delta_{c_k}$, with the Dirac delta function $\delta$ and the weights $\pi \in \Delta_{K-1} = \{\pi' \ge 0 : \|\pi'\|_1 = 1\}$. We aim to learn a decoder function $f_d : \mathcal{Z} \to \mathcal{X}$ (i.e., mapping from the latent space $\mathcal{Z} \subset \mathbb{R}^D$ to the data space $\mathcal{X}$), the codebook $C$, and the weights $\pi$, to minimize:
$$\min_{C,\pi}\; \min_{f_d} W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right), \tag{1}$$
where $P_x = \frac{1}{N}\sum_{n=1}^N \delta_{x_n}$ is the empirical data distribution and $d_x$ is a cost metric on the data space.
We interpret the optimization problem (OP) in Eq. (1) as follows. Given a discrete distribution $P_{c,\pi}$ on the codewords, we use the decoder $f_d$ to map the codebook $C$ to the data space and consider $W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right)$ as the codebook-data distortion w.r.t. $f_d$. We subsequently learn $f_d$ to minimize the codebook-data distortion given $P_{c,\pi}$, and finally adjust the codebook $C$ and $\pi$ to minimize the optimal codebook-data distortion. To offer more intuition for the OP in Eq. (1), we introduce the following lemma.
Lemma 3.1. Let $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ be the optimal solution of the OP in Eq. (1). Assume $K < N$; then $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ are also the optimal solution of the following OP:
$$\min_{f_d}\; \min_{\pi}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right), \tag{2}$$
where $\Sigma_\pi$ is the set of assignment functions $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$.¹
Lemma 3.1 states that, for the optimal solution $C^* = \{c_k^*\}$, $\pi^*$, and $f_d^*$ of the OP in Eq. (1), $\{f_d^*(c_k^*)\}_{k=1}^K$ become the optimal clustering centroids of the optimal clustering solution which minimizes the distortion. Inspired by Wasserstein Auto-Encoder (Tolstikhin et al., 2017), we establish the following theorem to engage the OP in (1) with the latent space.
Theorem 3.1. We can equivalently turn the optimization problem in (1) into
$$\min_{C,\pi,f_d}\; \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right], \tag{3}$$
where $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword.
First, we learn both the codebook $C$ and the weights $\pi$. Second, our formulation seeks a deterministic discrete encoder $\bar f_e$ mapping a data example $x$ directly to a codeword, concurring with vector quantization and serving our further derivations, whereas Theorem 1 in Tolstikhin et al. (2017) involves a probabilistic/stochastic encoder mapping to a continuous latent distribution (i.e., a larger space to search over). More importantly, our proof is totally different from that in Tolstikhin et al. (2017) (all proof details are given in Appendix A).
Additionally, $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to a codeword. To make it trainable, we replace $\bar f_e$ by a continuous encoder $f_e : \mathcal{X} \to \mathcal{Z}$ and arrive at the OP:
$$\min_{C,\pi}\; \min_{f_d, f_e} \left\{ \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(Q_C\left(f_e(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right) \right\}, \tag{4}$$
where $Q_C\left(f_e(x)\right) = \operatorname{argmin}_{c\in C}\, d_z\left(f_e(x), c\right)$ is a quantization operator which returns the codeword closest to $f_e(x)$, and $\lambda > 0$ is a trade-off parameter.
Particularly, we can rigorously prove in Theorem 3.2 that the optimization problems of interest in (3) and (4) are equivalent under some mild conditions. This rationally explains why we can solve the OP in (4) for our final tractable solution.
Theorem 3.2. If we seek $f_d$ and $f_e$ in a family with infinite capacity (e.g., the family of all measurable functions), the three OPs of interest in (1), (3), and (4) are equivalent.
Moreover, the OP in (4) conveys meaningful interpretations. Specifically, by minimizing $W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$ w.r.t. $C, \pi$, we aim to learn codewords that are clustering centroids of $f_e\#P_x$, according to the clustering viewpoint of OT as shown in Corollary 3.1; and, similar to VQ-VAE, we quantize $f_e(x)$ to the closest codeword using $Q_C\left(f_e(x)\right) = \operatorname{argmin}_{c\in C}\, d_z\left(f_e(x), c\right)$ and try to reconstruct $x$ from this codeword.
Corollary 3.1. Consider minimizing the second term, $\min_{f_e, C} W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$, in (4) given $\pi$, and assume $K < N$. Its optimal solution $f_e^*$ and $C^*$ are also the optimal solution of the OP:
$$\min_{f_e, C}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_z\left(f_e(x_n), c_{\sigma(n)}\right), \tag{5}$$
where $\Sigma_\pi$ is the set of assignment functions $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$. Corollary 3.1 indicates the aim of minimizing the second term $W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$ in (4): we adjust the encoder $f_e$ and the codebook $C$ such that the codewords of $C$ become the clustering
¹E.g., $\sigma$ is the nearest assignment: $\sigma^{-1}(k) = \{n : \bar f_e(x_n) = c_k,\ \text{with}\ k = \operatorname{argmin}_{k'} d_z(f_e(x_n), c_{k'})\}$ is the set of latent representations which are quantized to the $k$-th codeword.
centroids of the latent representations $\{f_e(x_n)\}_n$, minimizing the codebook-latent distortion (see Figure 1 (Right)). Additionally, at the optimal solution, the optimal assignment function $\sigma^*$, which indicates how latent representations (or data examples) are associated with the clustering centroids (i.e., the codewords), has a valuable property: the cardinalities $\left|(\sigma^*)^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$.
Remark: Recall the codebook collapse issue, i.e., most of the embedded latent vectors are quantized to just a few discrete codewords while the other codewords are rarely used. Corollary 3.1 gives us two important properties that we use to develop our VQ-WAE framework: (1) we can control the number of latent representations assigned to each codeword by adjusting $\pi$, guaranteeing that all codewords are utilized; (2) the codewords become the clustering centroids of the associated latent representations, minimizing the codebook-latent distortion.
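As a toy illustration of property (1) (our example, not from the paper): with $N = 4$ latent representations, $K = 2$ codewords, and uniform $\pi = (\frac{1}{2}, \frac{1}{2})$, every admissible assignment must send exactly two latents to each codeword:

```latex
% N = 4, K = 2, pi = (1/2, 1/2): each codeword receives N*pi_k = 2 latents,
% e.g. sigma(1) = sigma(3) = 1 and sigma(2) = sigma(4) = 2.
% An assignment collapsing all four latents onto one codeword is excluded from
% Sigma_pi, which is exactly the codebook-collapse failure mode being ruled out.
\left|\sigma^{-1}(k)\right| = N\pi_k = 2, \qquad k \in \{1, 2\}.
```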
3.2 PROPOSED FRAMEWORK
One of the crucial aims of learning meaningful and well-distributed codewords is to use each individual codeword efficiently, by solving the OP in (4). Specifically, we wish the latent representations to be more uniformly associated with the codewords. Based on Corollary 3.1, which points out that the number of latent representations associated with the $k$-th codeword is proportional to $\pi_k$, we hence fix $\pi$ as a uniform distribution (i.e., $P_{c,\pi} = \sum_{k=1}^K \frac{1}{K}\delta_{c_k}$) to make all the codewords utilized equally by the model, hence boosting the perplexity, i.e., the codebook usage.
We now present the practical method based on the OP in (4) with $P_{c,\pi} = \sum_{k=1}^K \frac{1}{K}\delta_{c_k}$. At each iteration, we sample a mini-batch $x_1, \dots, x_B$ and then solve the OP in (4) by updating $f_d$, $f_e$ and $C$ based on this mini-batch as follows. Let us denote $P_b = \frac{1}{B}\sum_{i=1}^B \delta_{x_i}$ as the empirical distribution over the current batch. Basically, we learn the optimal transportation plan $P^*$ by solving:
$$W_{d_z}\left(f_e\#P_b, P_{c,\pi}\right) = \min_{P\in\Gamma\left(\mathbf{1}_B, \mathbf{1}_C\right)} \langle P, D_{c,x}\rangle, \tag{6}$$
where $\mathbf{1}_B = \left[\frac{1}{B}\right]_B$ is the vector of atom masses of $P_b$, $\mathbf{1}_C = \left[\frac{1}{K}\right]_K$ is the vector of atom masses of $P_{c,\pi}$, $\Gamma\left(\mathbf{1}_B, \mathbf{1}_C\right)$ is the set of feasible transportation plans, and $D_{c,x} = \left[d_z\left(f_e(x_i), c_k\right)\right]_{i,k} \in \mathbb{R}^{B\times K}$ is the cost matrix.
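The OP in (6) is a small discrete OT problem (B × K) and can be solved exactly with an off-the-shelf solver. Below is a minimal sketch using POT's `ot.emd`, where the squared-Euclidean $d_z$ is an illustrative assumption and a POT release with PyTorch-backend support is assumed. As noted in Appendix C.1, the plan itself is not required to update the model, so entropic solvers can be substituted.

```python
import torch
import ot  # POT: Python Optimal Transport

def solve_batch_ot(z, codebook):
    """Optimal plan P* of (6) between batch latents z (B, D) and codewords (K, D)."""
    B, K = z.shape[0], codebook.shape[0]
    a = torch.full((B,), 1.0 / B)       # 1_B: atom masses of P_b
    b = torch.full((K,), 1.0 / K)       # 1_C: atom masses of P_{c,pi} (uniform pi)
    D = torch.cdist(z, codebook) ** 2   # cost matrix D_{c,x} (assumed squared L2)
    P = ot.emd(a, b, D)                 # exact transportation plan
    return P, (P * D).sum()             # plan and Wasserstein cost <P, D>
```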
The pseudocode of our VQ-WAE is summarized in Algorithm 1. We use the copy-gradient trick (Van Den Oord et al., 2017) to deal with the back-propagation from the decoder to the encoder for the reconstruction term, while the Wasserstein regularization term $W_{d_z}\left(f_e\#P_b, P_{c,\pi}\right)$ can be optimized directly without further manipulation. Additionally, the $W_{d_z}\left(f_e\#P_b, P_{c,\pi}\right)$ term is only utilized in the training phase.
Algorithm 1 VQ-WAE
1: Initialize: encoder $f_e$, decoder $f_d$ and codebook $C$.
2: for iter in iterations do
3:   Sample a mini-batch of samples $x_1, \dots, x_B$ forming the empirical batch distribution $P_b$
4:   Encode: $z_i = f_e(x_i)$ for $i = 1, \dots, B$
5:   Quantize: $c_i = \operatorname{argmin}_{k}\, d_z(z_i, c_k)$  // nearest-neighbor assignment
6:   Decode: $\tilde{x}_i = f_d(c_i)$
7:   Optimize $f_e$, $f_d$ and $C$ by minimizing the objective in (4): $\frac{1}{B}\sum_{i=1}^{B} d_x(\tilde{x}_i, x_i) + \lambda\, \underbrace{W_{d_z}(f_e\#P_b, P_{c,\pi})}_{\min_{P\in\Gamma(\mathbf{1}_B,\mathbf{1}_C)} \langle P, D_{c,x}\rangle}$
8: end for
9: Return: the optimal $f_e$, $f_d$ and $C$.
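Putting the pieces together, one training iteration of Algorithm 1 might look like the following sketch, reusing `solve_batch_ot` from the previous snippet; the mean-squared $d_x$ and flattened $(B, D)$ latents are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vq_wae_step(x, f_e, f_d, codebook, lam=1.0):
    z = f_e(x)                                      # line 4: encode
    idx = torch.cdist(z, codebook).argmin(dim=1)    # line 5: nearest-neighbor quantize
    c = codebook[idx]
    c_st = z + (c - z).detach()                     # copy-gradient trick for the recon term
    recon = F.mse_loss(f_d(c_st), x)                # lines 6-7: decode and the d_x term
    _, ws = solve_batch_ot(z, codebook)             # Wasserstein regularizer (training only)
    return recon + lam * ws                         # line 7: objective in (4)
```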
4 RELATED WORK
Variational Auto-Encoder (VAE) was first introduced by Kingma & Welling (Kingma & Welling, 2013) for learning continuous representations. However, learning discrete latent representations has
proved much more challenging, because it is nearly impossible to accurately evaluate the gradients required to train the models. To make the gradients tractable, one possible solution is to apply the Gumbel-Softmax reparameterization trick (Jang et al., 2016) to the VAE, which allows us to estimate stochastic gradients for updating the models. Although this technique has a low variance, it yields a high-bias gradient estimator. Another possible solution is to employ the REINFORCE algorithm (Williams, 1992), which is unbiased but has a high variance. Additionally, the two techniques can be complementarily combined (Tucker et al., 2017).
To enable learning the discrete latent codes, VQ-VAE (Van Den Oord et al., 2017) uses deterministic encoder/decoder and encourages the codebooks to become the clustering centroids of latent representations. Additionally, the copy gradient trick is employed in back-propagating gradients from the decoder to the encoder (Bengio, 2013). Some further works were proposed to extend VQ-VAE, notably (Roy et al., 2018; Wu & Flierl, 2020). Particularly, Roy et al. (2018) uses the Expectation Maximization (EM) algorithm in the bottleneck stage to train the VQ-VAE for improving the quality of the generated images. However, to maintain the stability of this approach, we need to collect a large number of samples on the latent space. Wu & Flierl (2020) imposes noises on the latent codes and uses a Bayesian estimator to optimize the quantizer-based representation. The introduced bottleneck Bayesian estimator outputs the posterior mean of the centroids to the decoder and performs soft quantization of the noisy latent codes which have latent representations preserving the similarity relations of the data space. Recently, Takida et al. (2022) extends the standard VAE with stochastic quantization and trainable posterior categorical distribution, showing that the annealing of the stochasticity of the quantization process significantly improves the codebook utilization.
Wasserstein (WS) distance has been widely used in generative models (Arjovsky et al., 2017; Gulrajani et al., 2017; Tolstikhin et al., 2017). Arjovsky et al. (2017) use a dual form of the WS distance to develop the Wasserstein generative adversarial network (WGAN). Later, Gulrajani et al. (2017) employ the gradient-penalty trick to improve the stability of WGAN. In terms of theory development, most closely related to our work is the Wasserstein Auto-Encoder (Tolstikhin et al., 2017), which aims to learn continuous latent representations preserving the characteristics of the input data.
5 EXPERIMENTS
In this section, we conduct extensive experiments to show the effectiveness of our proposed method compared to other advances.
Datasets: We empirically evaluate the proposed VQ-WAE in comparison with VQ-VAE (Van Den Oord et al., 2017), which is the baseline method, and the recently proposed SQ-VAE (Takida et al., 2022), which is the state-of-the-art work on improving codebook usage, on four different benchmark datasets: CIFAR10 (Van Den Oord et al., 2017), MNIST (Deng, 2012), SVHN (Netzer et al., 2011) and CelebA (Liu et al., 2015), as well as the high-resolution image dataset FFHQ (Karras et al., 2019).
Implementation: For a fair comparison, we utilize the same architectures and hyper-parameters for all methods. Additionally, in the primary setting, we use a codeword (discrete latent) dimensionality of 64 and a codebook size $|C| = 512$ for all datasets except FFHQ, which uses a codeword dimensionality of 256 and $|C| = 1024$, while the hyper-parameters $\{\beta, \tau, \lambda\}$ are specified as presented in the original papers, i.e., $\beta = 0.25$ for VQ-VAE and VQ-GAN (Esser et al., 2021), $\tau = 1e{-5}$ for SQ-VAE and $\lambda = 1$ for our VQ-WAE. The details of the experimental settings are presented in Appendix C.
5.1 RESULTS ON BENCHMARK DATASETS
In order to quantitatively assess the quality of the reconstructed images, we report results on the most common evaluation metrics, including the pixel-level peak signal-to-noise ratio (PSNR), patch-level structure similarity index (SSIM), feature-level LPIPS (Zhang et al., 2018), and dataset-level Fréchet Inception Distance (FID) (Heusel et al., 2017). We report the test-set reconstruction results on four datasets in Table 1. With regard to codebook utilization, we employ the perplexity score, defined as $\exp\left(-\sum_{k=1}^K p_{c_k}\log p_{c_k}\right)$, where $p_{c_k} = \frac{N_{c_k}}{\sum_{i=1}^K N_{c_i}}$ (with $N_{c_i}$ the number of latent representations associated with codeword $c_i$) is the probability of the $i$-th codeword being used. Note that, by this formula, $\text{perplexity}_{\max} = |C|$ when $P(c)$ becomes the uniform distribution, which means that all the codewords are utilized equally by the model.
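For reference, a minimal sketch of this perplexity computation (the epsilon guard against log 0 is an implementation assumption):

```python
import torch

def codebook_perplexity(indices, K):
    """indices: LongTensor of codeword assignments; K: codebook size |C|."""
    counts = torch.bincount(indices.flatten(), minlength=K).float()
    p = counts / counts.sum()                    # p_{c_k}: usage probabilities
    entropy = -(p * torch.log(p + 1e-10)).sum()  # -sum_k p_{c_k} log p_{c_k}
    return torch.exp(entropy)                    # equals K when usage is uniform
```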
We compare VQ-WAE with VQ-VAE, SQ-VAE and VQ-GAN for image reconstruction in Table 1. All instantiations of our model significantly outperform the baseline VQ-VAE under the same compression ratio, with the same network architecture. While the latest state-of-the-art SQ-VAE holds slightly better scores for traditional pixel- and patch-level metrics, our method achieves much better rFID scores which evaluate the image quality at the dataset level. Note that our VQ-WAE significantly improves the perplexity of the learned codebook. This suggests that the proposed method significantly improves the codebook usage, resulting in better reconstruction quality. Finally, to complete the assessment, the qualitative results are visualized in Figure 4 (Appendix B).
5.2 DETAILED ANALYSIS
We run a number of ablations to analyze the properties of VQ-VAE, SQ-VAE and VQ-WAE, in order to assess if our VQ-WAE can simultaneously achieve (i) efficient codebook usage, (ii) reasonable latent representation.
5.2.1 CODEBOOK USAGE
We observe the codebook utilization of three methods with different codebook sizes {64, 128, 256, 512} on MNIST and CIFAR10 datasets. Particularly, we present the reconstruction performance for different settings in Table 2 and the histogram of latent representations over the codebook in Figure 2.
As discussed in Section 3.1 and Section 3.2, the number of used centroids reflects the capability of the latent representations. In other words, it represents the amount of information preserved in the latent space. By explicitly constraining the numbers of latent representations associated with the codewords to be uniform (i.e., fixing $\pi$ in (4) as a uniform distribution) in the Wasserstein regularization term, VQ-WAE is able to maximize the information in the codebook, hence improving the reconstruction capacity. It can be seen from Figure 2 that the latent distribution of VQ-WAE over the codebook is nearly uniform and the codebook's perplexity almost reaches the optimal value (i.e., the perplexity values reach the corresponding codebook sizes) in different settings. It is also observed that, as the size of the codebook increases, the perplexity of the VQ-WAE codebook also increases, leading to better reconstruction performance (Table 2), in line with the analysis in (Wu & Flierl, 2018). SQ-VAE also has good codebook utilization, as its perplexity is proportional to the size of the codebook. However, its codebook utilization becomes less efficient when the codebook size becomes large, especially on a low-texture dataset (i.e., MNIST).
On the contrary, the codebook usage of VQ-VAE is less efficient, i.e., there are many zero entries in its codebook usage histogram, indicating that some codewords have never been used (Figure 2). Furthermore, Table 2 also shows the instability of VQ-VAE’s reconstruction performance with different codebook sizes.
5.2.2 VISUALIZATION OF LATENT REPRESENTATION
To better understand the codebook's representation power, we employ t-SNE (van der Maaten & Hinton, 2008) to visualize the latent representations that have been learned by VQ-VAE, SQ-VAE and VQ-WAE on the MNIST dataset with two codebook sizes of 64 and 512. Figure 3 shows the latent distributions of different classes in the latent space, in which the samples are colored according to their class labels. Figure 3c shows that representations from different classes of VQ-WAE are well clustered (i.e., each class focuses on only one cluster) and clearly separated from other classes. In contrast, the representations of some classes in VQ-VAE and SQ-VAE are distributed across several clusters and/or mixed with each other (Figure 3a,b). Moreover, the class clusters of SQ-VAE are uncondensed and tend to overlap with each other. These results suggest that the representations learned by VQ-WAE preserve the similarity relations of the data space better than the other models.
5.2.3 IMAGE GENERATION
As discussed in the previous section, VQ-WAE is able to optimally utilize its codebook, leading to meaningful and diverse codewords that naturally improve the image generation. To confirm this ability, we perform the image generation on the benchmark datasets. Since the decoder reconstructs images directly from the discrete embeddings, we only need to model a prior distribution over the discrete latent space (i.e., codebook) to generate images.
We employ a conventional autoregressive model, the CNN-based PixelCNN (Van den Oord et al., 2016), to estimate a prior distribution over the discrete latent space of VQ-VAE, SQ-VAE and VQ-WAE on CIFAR10, MNIST, SVHN and CelebA. The details of the generation settings are presented
in Section 3.2 of the supplementary material. The quantitative results in Table 3 indicate that the codebook of VQ-WAE leads to a better generation ability than VQ-VAE and SQ-VAE.
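A minimal sketch of the index-extraction step that precedes PixelCNN training (the latent tensor layout and loader interface are illustrative assumptions):

```python
import torch

@torch.no_grad()
def extract_index_maps(loader, f_e, codebook):
    """Map each image to its grid of codeword indices (e.g. 8x8) for the PixelCNN prior."""
    maps = []
    for x, _ in loader:
        z = f_e(x)                                       # (B, D, H, W) latents
        B, D, H, W = z.shape
        flat = z.permute(0, 2, 3, 1).reshape(-1, D)      # (B*H*W, D)
        idx = torch.cdist(flat, codebook).argmin(dim=1)  # nearest codeword per position
        maps.append(idx.view(B, H, W))
    return torch.cat(maps)                               # discrete "images" for the prior
```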
6 CONCLUSION
In this paper, inspired by the nice properties and mature theory of the WS distance, which have allowed it to be applied successfully to generative models and continuous representation learning, we propose Vector Quantized Wasserstein Auto-Encoder (VQ-WAE), which endows a discrete distribution over the codewords and learns a deterministic decoder that transports the codeword distribution to the data distribution via minimizing a WS distance between them. We then developed a theoretical analysis to show the equivalence of this WS minimization to another OP regarding push-forwarding the data distribution to the codeword distribution, which can be realized by minimizing a WS distance between the latent representation and codeword distributions. We conduct comprehensive experiments to show that our VQ-WAE utilizes the codebooks more efficiently than the baselines, hence leading to better reconstructed and generated image quality.
7 REPRODUCIBILITY STATEMENT
We provide the implementation of our framework in the supplementary material.
A THEORETICAL DEVELOPMENT
Given a training set $D = \{x_1, \dots, x_N\} \subset \mathbb{R}^V$, we wish to learn a codebook, formed by a set of codewords $C = \{c_k\}_{k=1}^K \in \mathbb{R}^{K\times D}$, on a latent space $\mathcal{Z}$, and an encoder to map each data example to a codeword, preserving insightful characteristics carried in the data. We first endow a discrete distribution over the codewords as $P_{c,\pi} = \sum_{k=1}^K \pi_k \delta_{c_k}$, with the Dirac delta function $\delta$ and the weights $\pi \in \Delta_K$. We aim to learn a decoder function $f_d : \mathcal{Z} \to \mathcal{X}$ (i.e., mapping from the latent space $\mathcal{Z} \subset \mathbb{R}^D$ to the data space $\mathcal{X}$), the codebook $C$, and the weights $\pi$, to minimize:
$$\min_{C,\pi}\; \min_{f_d} W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right), \tag{7}$$
where $P_x = \frac{1}{N}\sum_{n=1}^N \delta_{x_n}$ is the empirical data distribution and $d_x$ is a cost metric on the data space.
Lemma A.1. (Lemma 3.1 in the main paper) Let $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ be the optimal solution of the OP in Eq. (7). Assume $K < N$; then $C^* = \{c_k^*\}_k$, $\pi^*$, and $f_d^*$ are also the optimal solution of the following OP:
$$\min_{f_d}\; \min_{C,\pi}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right), \tag{8}$$
where $\Sigma_\pi$ is the set of assignment functions $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$.
Proof of Lemma A.1
It is clear that
$$f_d\#P_{c,\pi} = \sum_{k=1}^K \pi_k \delta_{f_d(c_k)}.$$
Therefore, we reach the following OP:
$$\min_{C,\pi}\; \min_{f_d} W_{d_x}\left(\frac{1}{N}\sum_{n=1}^N \delta_{x_n}, \sum_{k=1}^K \pi_k \delta_{f_d(c_k)}\right). \tag{9}$$
By using the Monge definition, we have
$$W_{d_x}\left(\frac{1}{N}\sum_{n=1}^N \delta_{x_n}, \sum_{k=1}^K \pi_k \delta_{f_d(c_k)}\right) = \min_{T:\, T\#P_x = f_d\#P_{c,\pi}} \mathbb{E}_{x\sim P_x}\left[d_x(x, T(x))\right] = \frac{1}{N}\min_{T:\, T\#P_x = f_d\#P_{c,\pi}} \sum_{n=1}^N d_x\left(x_n, T(x_n)\right).$$
Since $T\#P_x = f_d\#P_{c,\pi}$, $T(x_n) = f_d(c_k)$ for some $k$. Additionally, $\left|T^{-1}(f_d(c_k))\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$. Denoting $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that $T(x_n) = f_d(c_{\sigma(n)})$, $\forall n = 1,\dots,N$, we have $\sigma \in \Sigma_\pi$. It follows that
$$W_{d_x}\left(\frac{1}{N}\sum_{n=1}^N \delta_{x_n}, \sum_{k=1}^K \pi_k \delta_{f_d(c_k)}\right) = \frac{1}{N}\min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right).$$
Finally, the OP in Eq. (7) is equivalent to
$$\min_{f_d}\; \min_{C,\pi}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_x\left(x_n, f_d\left(c_{\sigma(n)}\right)\right),$$
which directly implies the conclusion.
Theorem A.1. (Theorem 3.1 in the main paper) We can equivalently turn the optimization problem in (7) into
$$\min_{C,\pi,f_d}\; \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right], \tag{10}$$
where $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to the codewords.
Proof of Theorem A.1
We first prove that the OP of interest in (7) is equivalent to
$$\min_{C,\pi,f_d}\; \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x\left(f_d(c), x\right)\right], \tag{11}$$
where $\bar f_e$ is a stochastic discrete encoder mapping a data example $x$ directly to the codewords. To this end, we prove that
$$W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right) = \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x\left(f_d(c), x\right)\right], \tag{12}$$
where $\bar f_e$ is a stochastic discrete encoder mapping a data example $x$ directly to the codewords.
Let $\bar f_e$ be a stochastic discrete encoder such that $\bar f_e\#P_x = P_{c,\pi}$ (i.e., $x\sim P_x$ and $c\sim \bar f_e(x)$ imply $c\sim P_{c,\pi}$). We consider $\gamma_{d,c}$ as the joint distribution of $(x,c)$ with $x\sim P_x$ and $c\sim \bar f_e(x)$. We also consider $\gamma_{f_c,d}$ as the joint distribution of $(x,x')$ where $x\sim P_x$, $c\sim \bar f_e(x)$, and $x' = f_d(c)$. It follows that $\gamma_{f_c,d} \in \Gamma\left(f_d\#P_{c,\pi}, P_x\right)$, i.e., it admits $f_d\#P_{c,\pi}$ and $P_x$ as its marginal distributions. We also have:
$$\mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right] = \mathbb{E}_{(x,c)\sim \gamma_{d,c}}\left[d_x(f_d(c), x)\right] \overset{(1)}{=} \mathbb{E}_{(x,x')\sim \gamma_{f_c,d}}\left[d_x(x, x')\right] \ge \min_{\gamma_{f_c,d}\in \Gamma(f_d\#P_{c,\pi},\, P_x)} \mathbb{E}_{(x,x')\sim \gamma_{f_c,d}}\left[d_x(x, x')\right] = W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right).$$
Note that we have the equality in (1) because $(\mathrm{id}, f_d)\#\gamma_{d,c} = \gamma_{f_c,d}$.
Therefore, we reach
$$\min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right] \ge W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right).$$
Let $\gamma_{f_c,d} \in \Gamma\left(f_d\#P_{c,\pi}, P_x\right)$. Let $\gamma_{c,f_c} \in \Gamma\left(P_{c,\pi}, f_d\#P_{c,\pi}\right)$ be the deterministic coupling such that $c\sim P_{c,\pi}$ and $x' = f_d(c)$ imply $(c, x')\sim \gamma_{c,f_c}$. Using the gluing lemma (see Lemma 5.5 in Santambrogio (2015)), there exists a joint distribution $\alpha \in \Gamma\left(P_{c,\pi}, f_d\#P_{c,\pi}, P_x\right)$ which admits $\gamma_{c,f_c}$ and $\gamma_{f_c,d}$ as the corresponding pairwise marginals. By denoting $\gamma_{d,c} \in \Gamma\left(P_x, P_{c,\pi}\right)$ the marginal distribution of $\alpha$ over $P_x, P_{c,\pi}$, we then have
$$\mathbb{E}_{(x,x')\sim \gamma_{f_c,d}}\left[d_x(x,x')\right] = \mathbb{E}_{(c,x',x)\sim \alpha}\left[d_x(x,x')\right] = \mathbb{E}_{(c,x)\sim \gamma_{d,c},\, x'=f_d(c)}\left[d_x(x,x')\right] = \mathbb{E}_{(c,x)\sim \gamma_{d,c}}\left[d_x(f_d(c), x)\right] = \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right] \ge \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right],$$
where $\bar f_e(x) = \gamma_{d,c}(\cdot \mid x)$. It follows that
$$W_{d_x}\left(f_d\#P_{c,\pi}, P_x\right) = \min_{\gamma_{f_c,d}\in \Gamma(f_d\#P_{c,\pi},\, P_x)} \mathbb{E}_{(x,x')\sim \gamma_{f_c,d}}\left[d_x(x,x')\right] \ge \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right].$$
This completes the proof of the equality in Eq. (12), which means that the OP of interest in (7) is equivalent to
$$\min_{C,\pi,f_d}\; \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e(x)}\left[d_x(f_d(c), x)\right]. \tag{13}$$
We now further prove that the above OP is equivalent to
$$\min_{C,\pi,f_d}\; \min_{\bar f_e:\, \bar f_e\#P_x = P_{c,\pi}} \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right], \tag{14}$$
where $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to the codewords.
It is obvious that the OP in (14) is a special case of that in (13), obtained by restricting the search to deterministic discrete encoders. Given the optimal solution $C^{*1}, \pi^{*1}, f_d^{*1}$, and $\bar f_e^{*1}$ of the OP in (13), we show how to construct the optimal solution for the OP in (14). Let us construct $C^{*2} = C^{*1}$ and $f_d^{*2} = f_d^{*1}$. Given $x\sim P_x$, let us denote $\bar f_e^{*2}(x) = \operatorname{argmin}_{c}\, d_x\left(f_d^{*2}(c), x\right)$. Thus, $\bar f_e^{*2}$ is a deterministic discrete encoder mapping a data example $x$ directly to the codewords. We define $\pi_k^{*2} = \Pr\left(\bar f_e^{*2}(x) = c_k : x\sim P_x\right)$, $k = 1,\dots,K$, meaning that $\bar f_e^{*2}\#P_x = P_{c^{*2},\pi^{*2}}$. From the construction of $\bar f_e^{*2}$, we have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right].$$
Furthermore, because $C^{*2}, \pi^{*2}, f_d^{*2}$, and $\bar f_e^{*2}$ are also a feasible solution of the OP in (13), we have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \ge \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right].$$
This means that
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] = \mathbb{E}_{x\sim P_x,\, c\sim \bar f_e^{*1}(x)}\left[d_x\left(f_d^{*1}(c), x\right)\right],$$
and $C^{*2}, \pi^{*2}, f_d^{*2}$, and $\bar f_e^{*2}$ are also the optimal solution of the OP in (14).
Additionally, $\bar f_e$ is a deterministic discrete encoder mapping a data example $x$ directly to the codewords. To make it trainable, we replace $\bar f_e$ by a continuous encoder $f_e : \mathcal{X} \to \mathcal{Z}$ and arrive at the following OP:
$$\min_{C,\pi}\; \min_{f_d, f_e} \left\{ \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(Q_C\left(f_e(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right) \right\}, \tag{15}$$
where $Q_C\left(f_e(x)\right) = \operatorname{argmin}_{c\in C}\, d_z\left(f_e(x), c\right)$ is a quantization operator which returns the codeword closest to $f_e(x)$, and $\lambda > 0$ is a trade-off parameter.
We now propose and prove the following lemma, which is necessary for the proof of Theorem A.2.
Lemma A.2. Consider $C, \pi, f_d$, and $f_e$ as a feasible solution of the OP in (15). Let us denote $\bar f_e(x) = \operatorname{argmin}_{c}\, d_z\left(f_e(x), c\right) = Q_C\left(f_e(x)\right)$; then $\bar f_e$ is a Borel measurable function.
Proof of Lemma A.2.
We denote the set $A_k$ on the latent space as
$$A_k = \left\{z : d_z(z, c_k) < d_z(z, c_j),\; \forall j \ne k\right\} = \left\{z : Q_C(z) = c_k\right\}.$$
$A_k$ is known as a Voronoi cell w.r.t. the metric $d_z$. If we consider a continuous metric $d_z$, $A_k$ is a measurable set. Given a Borel measurable set $B$, we prove that $\bar f_e^{-1}(B)$ is a Borel measurable set on the data space.
Let $B \cap \{c_1, \dots, c_K\} = \{c_{i_1}, \dots, c_{i_m}\}$; we prove that $\bar f_e^{-1}(B) = \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$. Indeed, take $x \in \bar f_e^{-1}(B)$; then $\bar f_e(x) \in B$, implying that $\bar f_e(x) = Q_C(f_e(x)) = c_{i_j}$ for some $j = 1,\dots,m$. This means that $f_e(x) \in A_{i_j}$ for some $j = 1,\dots,m$. Therefore, we reach $\bar f_e^{-1}(B) \subset \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$.
We now take $x \in \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$. Then $f_e(x) \in A_{i_j}$ for some $j = 1,\dots,m$, hence $\bar f_e(x) = Q_C(f_e(x)) = c_{i_j}$ for some $j = 1,\dots,m$. Thus, $\bar f_e(x) \in B$, or equivalently $x \in \bar f_e^{-1}(B)$, implying $\bar f_e^{-1}(B) \supset \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$.
Finally, we reach $\bar f_e^{-1}(B) = \cup_{j=1}^m f_e^{-1}\left(A_{i_j}\right)$, which concludes our proof because $f_e$ is a measurable function and the $A_{i_j}$ are measurable sets.
Theorem A.2. (Theorem 3.2 in the main paper) If we seek fd and fe in a family with infinite capacity (e.g., the space of all measurable functions), the three OPs of interest in (7, 10, and 15) are equivalent.
Proof of Theorem A.2.
Given the optimal solution $C^{*1}, \pi^{*1}, f_d^{*1}$, and $f_e^{*1}$ of the OP in (15), we construct the optimal solution for the OP in (10). Let us construct $C^{*2} = C^{*1}$ and $f_d^{*2} = f_d^{*1}$. We next define $\bar f_e^{*2}(x) = \operatorname{argmin}_{c}\, d_z\left(f_e^{*1}(x), c\right) = Q_{C^{*1}}\left(f_e^{*1}(x)\right) = Q_{C^{*2}}\left(f_e^{*1}(x)\right)$, and let $\pi^{*2}$ be the weights induced by this encoder, i.e., $\pi_k^{*2} = \Pr\left(\bar f_e^{*2}(x) = c_k^{*2} : x\sim P_x\right)$. We prove that $C^{*2}, \pi^{*2}, f_d^{*2}$, and $\bar f_e^{*2}$ are an optimal solution of the OP in (10). By this definition, we have $\bar f_e^{*2}\#P_x = P_{c^{*2},\pi^{*2}}$ and hence $W_{d_z}\left(\bar f_e^{*2}\#P_x, P_{c^{*2},\pi^{*2}}\right) = 0$. Therefore, we need to verify the following:
(i) $\bar f_e^{*2}$ is a Borel-measurable function.
(ii) Given a feasible solution $C, \pi, f_d$, and $\bar f_e$ of (10), we have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right]. \tag{16}$$
We first prove (i). It follows directly from applying Lemma A.2 to $C^{*1}, \pi^{*1}, f_d^{*1}$, and $f_e^{*1}$.
We next prove (ii). We further derive
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] + \lambda W_{d_z}\left(\bar f_e^{*2}\#P_x, P_{c^{*2},\pi^{*2}}\right) = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*2}}\left(f_e^{*1}(x)\right)\right), x\right)\right] = \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right). \tag{17}$$
Moreover, because $\bar f_e\#P_x = P_{c,\pi}$, which is a discrete distribution over the set of codewords $C$, we obtain $Q_C(\bar f_e(x)) = \bar f_e(x)$. Note that $C, \pi, f_d$, and $\bar f_e$ are also a feasible solution of (15), because $\bar f_e$ is also a specific encoder mapping from the data space to the latent space; we thus have
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(Q_C\left(\bar f_e(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(\bar f_e\#P_x, P_{c,\pi}\right) \ge \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right).$$
Noting that $\bar f_e\#P_x = P_{c,\pi}$ and $Q_C(\bar f_e(x)) = \bar f_e(x)$, we arrive at
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right] \ge \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*1}\left(Q_{C^{*1}}\left(f_e^{*1}(x)\right)\right), x\right)\right] + \lambda W_{d_z}\left(f_e^{*1}\#P_x, P_{c^{*1},\pi^{*1}}\right). \tag{18}$$
Combining the inequalities in (17) and (18), we obtain Inequality (16):
$$\mathbb{E}_{x\sim P_x}\left[d_x\left(f_d^{*2}\left(\bar f_e^{*2}(x)\right), x\right)\right] \le \mathbb{E}_{x\sim P_x}\left[d_x\left(f_d\left(\bar f_e(x)\right), x\right)\right]. \tag{19}$$
This concludes our proof.
Corollary A.1. (Corollary 3.1 in the main paper) Consider minimizing the second term, $\min_{f_e, C} W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$, in (15) given $\pi$, and assume $K < N$. Its optimal solution $f_e^*$ and $C^*$ are also the optimal solution of the following OP:
$$\min_{f_e, C}\; \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_z\left(f_e(x_n), c_{\sigma(n)}\right), \tag{20}$$
where $\Sigma_\pi$ is the set of assignment functions $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that the cardinalities $\left|\sigma^{-1}(k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$.
Proof of Corollary A.1.
By the Monge definition, we have
$$W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right) = W_{d_z}\left(\frac{1}{N}\sum_{n=1}^N \delta_{f_e(x_n)}, \sum_{k=1}^K \pi_k \delta_{c_k}\right) = \min_{T:\, T\#(f_e\#P_x) = P_{c,\pi}} \mathbb{E}_{z\sim f_e\#P_x}\left[d_z(z, T(z))\right] = \frac{1}{N} \min_{T:\, T\#(f_e\#P_x) = P_{c,\pi}} \sum_{n=1}^N d_z\left(f_e(x_n), T(f_e(x_n))\right).$$
Since $T\#(f_e\#P_x) = P_{c,\pi}$, $T(f_e(x_n)) = c_k$ for some $k$. Additionally, $\left|T^{-1}(c_k)\right|$, $k = 1,\dots,K$, are proportional to $\pi_k$, $k = 1,\dots,K$. Denoting $\sigma : \{1,\dots,N\} \to \{1,\dots,K\}$ such that $T(f_e(x_n)) = c_{\sigma(n)}$, $\forall n = 1,\dots,N$, we have $\sigma \in \Sigma_\pi$. It also follows that
$$W_{d_z}\left(\frac{1}{N}\sum_{n=1}^N \delta_{f_e(x_n)}, \sum_{k=1}^K \pi_k \delta_{c_k}\right) = \frac{1}{N} \min_{\sigma\in\Sigma_\pi} \sum_{n=1}^N d_z\left(f_e(x_n), c_{\sigma(n)}\right).$$
B VISUALIZATION OF RECONSTRUCTION RESULTS
Qualitative assessment: We present reconstructed samples from FFHQ (high-resolution images) for qualitative evaluation. It can be clearly seen that the high-level semantic features and colors of the input image are better preserved with VQ-WAE than with the baseline. Particularly, we notice that VQ-GAN often produces repeated artifact patterns in image synthesis (see the man's hair in the second column of Figure 4) while VQ-WAE does not. This is because VQ-GAN lacks diversity in the codebook, which is further analyzed in Section 5.2.1. Consequently, its quantization operator embeds similar patches into the same quantization index and ignores the variance in these patches (e.g., VQ-GAN reconstructs the background in the third column of Figure 4 as the woman's hair).
C EXPERIMENTAL SETTINGS
C.1 VQ-MODEL
Implementation: For fair comparison, we utilize the same framework architecture and hyperparameters for both VQ-VAE and VQ-WAE. Specifically, we construct the VQ-VAE and VQ-WAE models as follows:
• For the CIFAR10, MNIST and SVHN datasets, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 2 residual blocks, each containing a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers.
• For the CelebA dataset, the models have an encoder with two convolutional layers of stride 2 and filter size 4 × 4 with ReLU activation, followed by 6 residual blocks, each containing a 3 × 3, stride-1 convolutional layer with ReLU activation followed by a 1 × 1 convolution. The decoder is similar, with two of these residual blocks followed by two deconvolutional layers.
• For the high-quality image dataset FFHQ, we utilize the well-known VQGAN framework (Esser et al., 2021) as the baseline.
We only replace the regularization module of VQ-VAE, i.e., the last two terms of the objective function, $d_z\left(\mathrm{sg}\left(f_e(x)\right), C\right) + \beta\, d_z\left(f_e(x), \mathrm{sg}(C)\right)$, by our proposed Wasserstein regularization $\lambda W_{d_z}\left(f_e\#P_x, P_{c,\pi}\right)$ in Eq. (4) for VQ-WAE. Additionally, we employ the POT library (Flamary et al., 2021) to compute the WS distance for simplicity. However, our VQ-WAE does not require the optimal transport map from the WS distance in (6) to update the model. Therefore, we can employ a wide range of speed-up algorithms to solve the optimization problem (OP) in (6), such as the Sinkhorn algorithm (Cuturi, 2013) or the entropic regularized dual form (Genevay et al., 2016).
Hyper-parameters: Following (Takida et al., 2022), we adopt the Adam optimizer for training, with a learning rate of 1e−4, a batch size of 32, an embedding dimension of 64, and a codebook size $|C| = 512$ for all datasets except FFHQ, for which we use an embedding dimension of 256 and $|C| = 1024$. Finally, we train the model for 100 epochs on CIFAR10, MNIST, SVHN and FFHQ, and for 70 epochs on CelebA.
C.2 GENERATION MODEL
Implementation: It is worth noting that we employ the codebooks learned from the reported VQ-models to extract codeword indices, and we employ PixelCNN (Van den Oord et al., 2016) with the same setting for generation for all of VQ-VAE, SQ-VAE and VQ-WAE. In particular, we feed PixelCNN over the "pixel" values of the 8 × 8 1-channel latent space for CIFAR10, MNIST and SVHN, and the 16 × 16 1-channel latent space for CelebA. Hyper-parameters: We adopt the Adam optimizer for training, with a learning rate of 3e−4 and a batch size of 32. Finally, we train PixelCNN over the "pixel" values of the latent space for 100 epochs. | 1. What is the focus and contribution of the paper on vector quantization?
2. What are the strengths of the proposed approach, particularly in terms of its performance in experiments?
3. What are the weaknesses of the paper, especially regarding the theoretical analysis and lacking demonstrations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
A new vector quantization method is proposed, named Vector Quantized Wasserstein Auto-Encoder (VQ-WAE). The presented VQ-WAE employs the Wasserstein (WS) distance in both the observation x space and the latent z space to encourage matching, mimicking the existing Wasserstein Auto-Encoder. Experiments on MNIST, CIFAR10, SVHN, and CelebA datasets are conducted to demonstrate the superiority of the VQ-WAE to the existing VQ-VAE and SQ-VAE methods.
Strengths And Weaknesses
Strength:
(1) The paper is well-written in general.
(2) The empirical experimental results seem to be convincing.
Weaknesses:
(1) The theoretical derivations are not sound, because, in both the VQ-VAE and the presented VQ-WAE, many (like 8×8) latent codes correspond to one input image x, whereas the presented theorems are derived based on one latent code per x. In other words, there is a mismatch between theory and implementation.
(2) No explicit demonstrations like the reconstructed images are shown, which makes it difficult to evaluate the quality of the vector quantization. For example, is the reconstruction quality better than that of the VQ-GAN?
Clarity, Quality, Novelty And Reproducibility
The paper is well-written in general. The novelty is kind of incremental, even if the above-mentioned weakness (1) is corrected. The reproducibility is satisfactory as the code is available. |
ICLR | Title
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Abstract
Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric normballs, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
1 INTRODUCTION
Research in adversarial examples continues to contribute to the development of robust (semi-)supervised learning (Miyato et al., 2018), data augmentation (Goodfellow et al., 2015; Sun et al., 2018), and machine learning understanding (Kanbak et al., 2018). One important caveat of the approach pursued by much of the literature in adversarial machine learning, as discussed recently (Goodfellow,
2018; Gilmer et al., 2018), is the reliance on overly simplified attack metrics: namely, the use of pixel value differences between an adversary and an input image, also referred to as the pixel norm-balls.
The pixel norm-balls game considers pixel perturbations of norm-constrained magnitude (Goodfellow et al., 2015), and is used to develop adversarial attackers, defenders and training strategies. The pixel norm-ball game is attractive from a research perspective due to its simplicity and well-posedness: no knowledge of image formation is required and any arbitrary pixel perturbation remains eligible (so long as it is “small”, in the perceptual sense). Although the pixel norm-ball is useful for research purposes, it only captures limited real-world security scenarios.
Despite the ability to devise effective adversarial methods through the direct employment of optimizations using the pixel norm-balls measure, the pixel manipulations they promote are divorced from the types of variations present in the real world, limiting their usefulness "in the wild". Moreover, this methodology leads to defenders that are only effective when defending against unrealistic images/attacks, not generalizing outside of the space constrained by pixel norm-balls. In order to consider conditions that enable adversarial attacks in the real world, we advocate for a new measurement norm that is rooted in the physical processes that underlie realistic image synthesis, moving away from overly simplified metrics, e.g., pixel norm-balls.
Our proposed solution – parametric norm-balls – relies on perturbations of the physical parameters of a synthetic image formation model, instead of pixel color perturbations (Figure 2). To achieve this, we use a physically-based differentiable renderer which allows us to perturb the underlying parameters of the image formation process. Since these parameters indirectly control pixel colors, perturbations in this parametric space implicitly span the space of natural images. We will demonstrate two advantages that follow from considering perturbations in this parametric space: (1) they enable adversarial approaches that more readily apply to real-world applications, and (2) they permit the use of much more significant perturbations (compared to pixel norms), without invalidating the realism of the resulting image (Figure 1). We validate that parametric norm-balls game playing is critical for a variety of important adversarial tasks, such as building defenders robust to perturbations that can occur naturally in the real world.
We perform perturbations in the underlying image formation parameter space using a novel physically-based differentiable renderer. Our renderer analytically computes the derivatives of pixel color with respect to these physical parameters, allowing us to extend traditional pixel norm-balls to physically-valid parametric norm-balls. Notably, we demonstrate perturbations on an environment's lighting and on the shape of the 3D geometry it shades. Our differentiable renderer achieves state-of-the-art performance in speed and scalability (Section 3) and is fast enough for rendered adversarial data augmentation (Section 5): training augmented with adversarial images generated with a renderer.
Existing differentiable renderers are slow and do not scale to the volume of high-quality, high-resolution images needed to make adversarial data augmentation tractable (Section 2). Given our analytically-differentiable renderer (Section 3), we are able to demonstrate the efficacy of parametric space perturbations for generating adversarial examples. These adversaries are based on a substantially different phenomenology than their pixel norm-balls counterparts (Section 4). Ours is among the first steps towards the deployment of rendered adversarial data augmentation in real-world applications: we train a classifier with computer-generated adversarial images, evaluating the performance of the training against real photographs (i.e., captured using cameras; Section 5). We test on real photos to show that the parametric adversarial data augmentation increases the classifier's robustness to "deformations" that happen in the real world. Our evaluation differs from the majority of existing literature, which evaluates against computer-generated adversarial images, since our parametric space perturbation is no longer a wholly idealized representation of the image formation model but, instead, is modeled after a theory of realistic image generation.
2 RELATED WORK
Our work is built upon the fact that simulated or rendered images can participate in computer vision and machine learning on real-world tasks. Many previous works use rendered (simulated) data to train deep networks, and those networks can be deployed to the real world and even outperform state-of-the-art networks trained on real photos (Movshovitz-Attias et al., 2016; Chen et al., 2016; Varol et al., 2017; Su et al., 2015; Johnson-Roberson et al., 2017; Veeravasarapu et al., 2017b; Sadeghi & Levine, 2016; James & Johns, 2016). For instance, Veeravasarapu et al. (2017a) show that training with 10% real-world data and 90% simulation data can reach the level of training with full real data. Tremblay et al. (2018) even demonstrate that a network trained on synthetic data yields better performance than using real data alone. As rendering can cheaply provide a theoretically infinite supply of annotated input data, it can generate data which is orders of magnitude larger than existing datasets. This emerging trend of training on synthetic data provides an exciting direction for future machine learning development. Our work complements these works: we demonstrate that rendering can be used to study the potential danger lurking in misclassifications caused by subtle changes to geometry and lighting. This suggests a future direction of combining our method with synthetic data generation pipelines to perform physically based adversarial training on synthetic data.
Adversarial Examples Szegedy et al. (2014) expose the vulnerability of modern deep neural nets using purposefully-manipulated images with human-imperceptible, misclassification-inducing noise. Goodfellow et al. (2015) introduce a fast method to harness adversarial examples, leading to the idea of pixel norm-balls for evaluating adversarial attackers/defenders. Since then, many significant developments in adversarial techniques have been proposed (Akhtar & Mian, 2018; Szegedy et al., 2014; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018; Papernot et al., 2017; Moosavi-Dezfooli et al., 2017; Chen et al., 2017; Su et al., 2017). Our work extends this progression in constructing adversarial examples, a problem that lies at the foundation of adversarial machine learning. Kurakin et al. (2016) study the transferability of attacks to the physical world by printing then photographing adversarial images. Athalye et al. (2017) and Eykholt et al. (2018) propose extensions to non-planar (yet, still fixed) geometry and multiple viewing angles. These works still rely fundamentally on direct pixel or texture manipulation on physical objects. Since these methods assume independence between pixels in the image or texture space, they remain variants of pixel norm-balls. This leads to unrealistic attack images that cannot model real-world scenarios (Goodfellow, 2018; Hendrycks & Dietterich, 2018; Gilmer et al., 2018). Zeng et al. (2017) generate adversarial examples by altering physical parameters using a rendering network (Liu et al., 2017) trained to approximate the physics of realistic image formation. This data-driven approach leads to an image formation model biased towards the rendering style present in the training data. This method also relies on differentiation through the rendering network in order to compute adversaries, which requires high-quality training on a large amount of data. Even with perfect training, their reported performance still requires 12 minutes on average to find new adversaries, whereas we only take a few seconds (Section 4.1). Our approach is based on a differentiable physically-based renderer that directly (and, so, more convincingly) models the image formation process, allowing us to alter physical parameters – like geometry and lighting – and compute derivatives (and adversarial examples) much more rapidly compared to (Zeng et al., 2017). We summarize the difference between our approach and previous non-image adversarial attacks in Table 1.
Differentiable Renderer Applying parametric norm-balls requires that we differentiate the image formation model with respect to its physical parameters. Modern realistic computer graphics models do not expose facilities to directly accommodate the computation of derivatives or automatic differentiation of pixel colors with respect to geometry and lighting variables. A physically-based differentiable renderer is fundamental to computing derivatives of pixel colors with respect to scene parameters and can benefit machine learning in several ways, including promoting the development of novel network architectures (Liu et al., 2017), computing adversarial examples (Athalye et al., 2017; Zeng et al., 2017), and generalizing neural style transfer to a 3D context (Kato et al., 2018; Liu et al., 2018). Recently, various techniques have been proposed to obtain these derivatives: Wu et al. (2017); Liu et al. (2017); Eslami et al. (2016) use neural networks to learn the image formation process given a large amount of input/output pairs. This introduces unnecessary bias in favor of the training data distribution, leading to inaccurate derivatives due to imperfect learning. Kato et al. (2018) propose a differentiable renderer based on a simplified image formation model and an underlying linear approximation. Their approach requires no training and is unbiased, but their approximations of the image formation and the derivatives introduce more errors. Loper & Black (2014); Genova et al. (2018) use automatic differentiation to build fully differentiable renderers. These renderers, however, are expensive to evaluate, requiring orders of magnitude more computation and much larger memory footprints compared to our method.
Our novel differentiable renderer overcomes these limitations by efficiently computing analytical derivatives of a physically-based image formation model. The key idea is that the non-differentiable visibility change can be ignored when considering infinitesimal perturbations. We model image variations by changing geometry and realistic lighting conditions in an analytically differentiable manner, relying on an accurate model of diffuse image formation that extends spherical harmonics-based shading methods (Appendix C). Our analytic derivatives are efficient to evaluate, have scalable memory consumption, are unbiased, and are accurate by construction (Table 2). Our renderer explicitly models the physics of the image formation process, and so the images it generates are realistic enough to elicit correct classifications from networks trained on real-world photographs.
3 ADVERSARIAL ATTACKS IN PARAMETRIC SPACES
Adversarial attacks based on pixel norm-balls typically generate adversarial examples by defining a cost function over the space of images C : I → R that enforces some intuition of what failure should look like, typically using variants of gradient descent where the gradient ∂C/∂I is accessible by differentiating through networks (Szegedy et al., 2014; Goodfellow et al., 2015; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018).
The choices for C include increasing the cross-entropy loss of the correct class (Goodfellow et al., 2015), decreasing the cross-entropy loss of the least-likely class (Kurakin et al., 2017), using a combination of cross-entropies (Moosavi Dezfooli et al., 2016), and more (Szegedy et al., 2014; Rozsa et al., 2016; Dong et al., 2018; Tramèr et al., 2017). We use a combination of cross-entropies to provide flexibility for choosing untargeted and targeted attacks by specifying different sets of labels:

$$C(I(U, V)) = -\mathrm{CrossEntropy}(f(I(U, V)), L_d) + \mathrm{CrossEntropy}(f(I(U, V)), L_i), \quad (1)$$
where I is the image, f(I) is the output of the classifier, and $L_d$, $L_i$ are labels whose predicted confidences the user wants to decrease and increase, respectively. In our experiments, $L_d$ is the correct class and $L_i$ is either ignored or chosen according to user preference. Our adversarial attacks in the parametric space consider an image I(U, V) as a function of the physical parameters of the image formation model, including the lighting U and the geometry V. Adversarial examples constructed by perturbing physical parameters can then be computed via the chain rule

$$\frac{\partial C}{\partial U} = \frac{\partial C}{\partial I}\frac{\partial I}{\partial U}, \qquad \frac{\partial C}{\partial V} = \frac{\partial C}{\partial I}\frac{\partial I}{\partial V}, \quad (2)$$

where ∂I/∂U, ∂I/∂V are derivatives with respect to the physical parameters, which we evaluate using our physically based differentiable renderer. In our experiments, we use gradient descent to find parametric adversarial examples, stepping along the directions of ∂C/∂U and ∂C/∂V.
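To make the procedure concrete, the following is a minimal PyTorch sketch of this gradient-descent attack. The names `render` and `classifier` are placeholders: `render(U, V)` stands in for a differentiable renderer returning a 1 × C × H × W image tensor, and the sketch relies on automatic differentiation, whereas our actual implementation evaluates ∂I/∂U and ∂I/∂V analytically.

```python
import torch
import torch.nn.functional as F

def parametric_attack(render, classifier, U, V, label_dec, label_inc=None,
                      step=0.05, max_iters=30):
    # Gradient descent on the cost C of Equation 1 w.r.t. lighting U and
    # geometry V (Equation 2).
    U = U.clone().requires_grad_(True)
    V = V.clone().requires_grad_(True)
    for _ in range(max_iters):
        logits = classifier(render(U, V))
        if logits.argmax(dim=1).item() != label_dec.item():
            break  # misclassified: adversary found
        # Decrease confidence of label_dec; optionally increase label_inc.
        cost = -F.cross_entropy(logits, label_dec)
        if label_inc is not None:
            cost = cost + F.cross_entropy(logits, label_inc)
        grad_U, grad_V = torch.autograd.grad(cost, [U, V])
        with torch.no_grad():  # descent steps of Equations 3 and 4
            U -= step * grad_U
            V -= step * grad_V
    return U.detach(), V.detach()
```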
3.1 PHYSICALLY BASED DIFFERENTIABLE RENDERER
Rendering is the process of generating a 2D image from a 3D scene by simulating the physics of light. Light sources in the scene emit photons that then interact with objects in the scene. At each interaction, photons are either reflected, transmitted or absorbed, changing trajectory and repeating until arriving at a sensor such as a camera. A physically based renderer models the interactions mathematically (Pharr et al., 2016), and our task is to analytically differentiate the physical process.
We develop our differentiable renderer with common assumptions in real-time rendering (Akenine-Möller et al., 2008) – diffuse material, local illumination, and distant light sources. Our diffuse material assumption considers materials which reflect light uniformly in all directions, equivalent to considering non-specular objects. We assume that variations in the material (texture) are piece-wise constant with respect to our triangle mesh discretization. The local illumination assumption only considers lights that bounce directly from the light source to the camera. Lastly, we assume light sources are far away from the scene, allowing us to represent lighting with one spherical function. For a more detailed rationale of our assumptions, we refer readers to Appendix B.
These assumptions simplify the complicated integral required for rendering (Kajiya, 1986) and allow us to represent lighting in terms of spherical harmonics, an orthonormal basis for spherical functions analogous to Fourier transformation. Thus, we can analytically differentiate the rendering equation to acquire derivatives with respect to lighting, geometry, and texture (derivations found in Appendix C).
Using analytical derivatives avoids pitfalls of previous differentiable renderers (see Section 2) and makes our differentiable renderer orders of magnitude faster than the previous fully differentiable renderer OPENDR (Loper & Black, 2014) (see Figure 3). Our approach scales to problems with more than 100,000 variables, while OPENDR runs out of memory for problems with more than 3,500 variables.
3.2 ADVERSARIAL LIGHTING AND GEOMETRY
Adversarial lighting denotes adversarial examples generated by changing the spherical harmonics lighting coefficients U (Green, 2003). As our differentiable renderer allows us to compute ∂I/∂U analytically (derivation is provided in Appendix C.4), we can simply apply the chain rule:
$$U \leftarrow U - \gamma \frac{\partial C}{\partial I}\frac{\partial I}{\partial U}, \quad (3)$$
where ∂C/∂I is the derivative of the cost function with respect to pixel colors and can be obtained by differentiating through the network. Spherical harmonics act as an implicit constraint that prevents unrealistic lighting, because natural lighting environments in everyday life are dominated by low-frequency signals. For instance, the rendering of diffuse materials can be approximated with only 1% pixel intensity error by the first 2 orders of spherical harmonics (Ramamoorthi & Hanrahan, 2001). As computers can only represent a finite number of coefficients, using spherical harmonics for lighting implicitly filters out high-frequency, unrealistic lightings. Thus, perturbing the parametric space of spherical harmonics lighting gives us more realistic perturbations compared to image-pixel perturbations (Figure 1).
Adversarial geometry is an adversarial example computed by changing the positions of the shape's surface points. The shape is encoded as a triangle mesh with |V| vertices and |F| faces; surface points are vertex positions $V \in \mathbb{R}^{|V|\times 3}$, which determine per-face normals $N \in \mathbb{R}^{|F|\times 3}$, which in turn determine the shading of the surface. We can compute adversarial shapes by applying the chain rule:

$$V \leftarrow V - \gamma \frac{\partial C}{\partial I}\frac{\partial I}{\partial N}\frac{\partial N}{\partial V}, \quad (4)$$

where ∂I/∂N is computed via a derivation in Appendix E. Each triangle only has one normal on its face, making ∂N/∂V computable analytically. In particular, the 3×3 Jacobian of a unit face normal vector $n_i \in \mathbb{R}^3$ of the i-th face of the triangle mesh V with respect to one of its corner vertices $v_j \in \mathbb{R}^3$ is

$$\frac{\partial n_i}{\partial v_j} = \frac{h_{ij}\, n_i^{\mathsf{T}}}{\|h_{ij}\|^2},$$

where $h_{ij} \in \mathbb{R}^3$ is the height vector: the shortest vector to the corner $v_j$ from the opposite edge.
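As a sanity check of this formula, a small NumPy sketch computing the Jacobian for one corner of a triangle might look as follows. The vertex names are illustrative, and a non-degenerate triangle is assumed.

```python
import numpy as np

def face_normal_jacobian(v_a, v_b, v_c):
    # Jacobian dn/dv_a of the unit face normal of triangle (v_a, v_b, v_c)
    # with respect to the corner v_a, via the height-vector formula above.
    e = v_c - v_b                          # opposite edge
    n = np.cross(v_b - v_a, v_c - v_a)
    n = n / np.linalg.norm(n)              # unit face normal n_i
    # Height vector h: shortest vector from the opposite edge to v_a.
    proj = v_b + np.dot(v_a - v_b, e) / np.dot(e, e) * e
    h = v_a - proj
    return np.outer(h, n) / np.dot(h, h)   # (h n^T) / ||h||^2
```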
4 RESULTS AND EVALUATION
We have described how to compute adversarial examples by parametric perturbations, including lighting and geometry. In this section, we show that adversarial examples exist in the parametric spaces, then we analyze the characteristics of those adversaries and parametric norm-balls.
We use 49× 3 spherical harmonics coefficients to represent environment lighting, with an initial real-world lighting condition (Ramamoorthi & Hanrahan, 2001). Camera parameters and the background images are empirically chosen to yield correct initial classifications and avoid synonym sets. In Figure 4 we show that a single-view adversarial lighting attack can fool the classifier (a pre-trained ResNet-101 on ImageNet (He et al., 2016)). Figure 5 shows multi-view adversarial lighting, which optimizes the summation of the cost functions for each view; thus the gradient is computed as the summation over all camera views:

$$U \leftarrow U - \sum_{i \in \text{cameras}} \gamma \frac{\partial C}{\partial I_i}\frac{\partial I_i}{\partial U}. \quad (5)$$
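A minimal sketch of one such multi-view update follows; `render_from(cam, U)` and `cost_fn` are placeholders for per-camera rendering and the cost of Equation (1).

```python
import torch

def multiview_lighting_step(render_from, classifier, cost_fn, cameras, U,
                            step=0.05):
    # One update of Equation 5: accumulate the cost over all camera views
    # and take a single gradient step on the lighting coefficients U.
    U = U.clone().requires_grad_(True)
    cost = sum(cost_fn(classifier(render_from(cam, U))) for cam in cameras)
    (grad_U,) = torch.autograd.grad(cost, U)
    return (U - step * grad_U).detach()
```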
If one is interested in a more specific subspace, such as outdoor lighting conditions governed by sunlight and weather, our adversarial lighting can adapt to it. In Figure 7, we compute adversarial lights over the space of skylights by applying one more chain rule to the Preetham skylight parameters (Preetham et al., 1999; Habel et al., 2008). Details about taking these derivatives are provided in Appendix D. Although adversarial skylights exist, their low degrees of freedom (only three parameters) make them more difficult to find.
In Figure 8 and Figure 9 we show the existence of adversarial geometry in both single-view and multi-view cases. Note that we upsample meshes to have >10K vertices as a preprocessing step to increase the degrees of freedom available for perturbations. Multi-view adversarial geometry enables us to perturb the same 3D shape from different viewing directions, which enables us to construct a deep optical illusion: the same 3D shape is classified differently from different angles. To create the optical illusion in Figure 6, we only need to specify the $L_i$ in Equation (1) to be a dog and a cat for the two different views.
4.1 PROPERTIES OF PARAMETRIC NORM-BALLS AND ADVERSARIES
To further understand parametric adversaries, we analyze how parametric adversarial examples generalize to black-box models. In Table 3, we test 5,000 ResNet parametric adversaries on unseen networks including AlexNet (Krizhevsky et al., 2012), DenseNet (Huang et al., 2017), SqueezeNet (Iandola et al., 2016), and VGG (Simonyan & Zisserman, 2014). Our result shows that parametric adversarial examples also transfer across models.
In addition to different models, we evaluate parametric adversaries on black-box viewing directions. This evaluation mimics the real-world scenario in which a self-driving car would "see" a stop sign from different angles while driving. In Table 4, we randomly sample 500 correctly classified views for a given shape and perform our adversarial lighting and geometry algorithms only on a subset of views, then evaluate the resulting adversarial lights/shapes on all the views. The results show that adversarial lights are more generalizable to fooling unseen views; adversarial shapes, however, are less generalizable.
Switching from pixel norm-balls to parametric norm-balls only requires changing the norm-constraint from the pixel color space to the parametric space. For instance, we can perform a quantitative comparison between parametric adversarial and random perturbations in Figure 10. We use an L∞-norm of 0.1 to constrain the perturbed magnitude of each lighting coefficient, and an L∞-norm of 0.002 to constrain the maximum displacement of surface points along each axis. The results show how many parametric adversaries fool the classifier out of 10,000 adversarial lights and shapes respectively. Not only do the parametric norm-balls show the effectiveness of adversarial perturbations; evaluating robustness using parametric norm-balls also has real-world implications.
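A projection step enforcing these parametric L∞ norm-balls could be sketched as follows; the eps values match the constraints above, and the tensor names are illustrative.

```python
import torch

def project_parametric(U, U0, V, V0, eps_light=0.1, eps_geom=0.002):
    # Clamp each lighting coefficient and each vertex coordinate back into
    # its L-infinity ball around the unperturbed parameters U0, V0.
    U = U0 + torch.clamp(U - U0, -eps_light, eps_light)
    V = V0 + torch.clamp(V - V0, -eps_geom, eps_geom)
    return U, V
```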
Table 3: We evaluate ResNet adversaries on unseen models and show that parametric adversarial examples also transfer across models. The table shows the success rate of attacks (%).

           AlexNet   VGG     SqueezeNet   DenseNet
Lighting   81.2%     65.0%   78.6%        43.5%
Geometry   70.3%     58.9%   71.1%        40.1%
Table 4: We compute parametric adversaries using a subset of views (#Views) and evaluate the success rates (%) of attacks on unseen views.

#Views     0       1       5
Lighting   0.0%    29.4%   64.2%
Geometry   0.0%    0.6%    3.6%
[Inset figure: per-iteration runtime (seconds, log scale) versus number of pixels requiring derivatives, for adversarial lighting and adversarial geometry on meshes with 15K, 25K, and 54K vertices.]

Runtime The inset presents our runtime per iteration for computing derivatives. An adversary normally requires fewer than 10 iterations and thus takes a few seconds. We evaluate our CPU PYTHON implementation and the OPENGL rendering on an Intel Xeon 3.5GHz CPU with 64GB of RAM and an NVIDIA GeForce GTX 1080. Our runtime depends on the number of pixels requiring derivatives.
5 RENDERED ADVERSARIAL DATA AUGMENTATION AGAINST REAL PHOTOS
We inject adversarial examples, generated using our differentiable renderer, into the training process of modern image classifiers. Our goal is to increase the robustness of these classifiers to real-world perturbations. Traditionally, adversarial training is evaluated against computer-generated adversarial images (Kurakin et al., 2017; Madry et al., 2018; Tramèr et al., 2017). In contrast, our evaluation differs from the majority of the literature, as we evaluate performance against real photos (i.e., images captured using a camera), and not computer-generated images. This evaluation method is motivated by our goal of increasing a classifier’s robustness to “perturbations” that occur in the real world and result from the physical processes underlying real-world image formation. We present preliminary steps towards this objective, resolving the lack of realism of pixel norm-balls and evaluating our augmented classifiers (i.e., those trained using our rendered adversaries) against real photographs.
Training We train the WideResNet (16 layers, 4 wide factor) (Zagoruyko & Komodakis, 2016) on CIFAR-100 (Krizhevsky & Hinton, 2009) augmented with adversarial lighting examples. We apply a common adversarial training method that adds a fixed number of adversarial examples each epoch (Goodfellow et al., 2015; Kurakin et al., 2017). We refer readers to Appendix F for the training details. In our experiments, we compare three training scenarios: (1) CIFAR-100, (2) CIFAR-100 + 100 images under random lighting, and (3) CIFAR-100 + 100 images under adversarial lighting. Compared to the accuracy reported in (Zagoruyko & Komodakis, 2016), WideResNets trained on these three cases all have comparable performance (≈ 77%) on the CIFAR-100 test set.
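A sketch of one such training epoch follows; `render_adversarial_batch` is a placeholder for our attack-plus-renderer pipeline, which produces 100 adversarially lit renderings against the current model.

```python
import torch
import torch.nn.functional as F
from itertools import chain

def train_epoch(model, optimizer, cifar_loader, render_adversarial_batch):
    # Refresh the adversarial pool against current weights, then train on
    # the CIFAR-100 batches followed by the rendered adversarial batch.
    adv_images, adv_labels = render_adversarial_batch(model, n=100)
    for images, labels in chain(cifar_loader, [(adv_images, adv_labels)]):
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
```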
Testing We create a test set of real photos, captured in a laboratory setting with controlled lighting and camera parameters: we photographed oranges using a calibrated Prosilica GT 1920 camera under different lighting conditions, each generated by projecting different lighting patterns using an LG PH550 projector. This hardware lighting setup projects lighting patterns from a fixed solid angle of directions onto the scene objects. Figure 11 illustrates samples from the 500 real photographs of our dataset. We evaluate the robustness of our classifier models according to test accuracy. Of note, average prediction accuracies over five trained WideResNets on our test data under the three training cases are (1) 4.6%, (2) 40.4%, and (3) 65.8%. This result supports the fact that training on rendered images can improve the networks' performance on real photographs. Our preliminary experiments motivate the potential of relying on rendered adversarial training to increase robustness to visual phenomena present in real-world inputs.
6 LIMITATIONS & FUTURE WORK
Using parametric norm-balls to remove the lack of realism of pixel norm-balls is only the first step in bringing adversarial machine learning to the real world. More evaluations beyond laboratory experimental data could uncover the potential of rendered adversarial data augmentation. Coupling the differentiable renderer with methods for reconstructing 3D scenes, such as (Veeravasarapu et al., 2017b; Tremblay et al., 2018), has the potential to develop a complete pipeline for rendered adversarial training. We could take a small set of real images, construct 3D virtual scenes with real image statistics, use our approach to manipulate the predicted parameters and construct parametric adversarial examples, and then perform rendered adversarial training. This direction has the potential to produce limitless simulated adversarial data augmentation for real-world tasks.
Our differentiable renderer models changes in realistic environment lighting and geometry. Incorporating real-time rendering techniques from the graphics community could further improve the quality of rendering. Removing the locally constant texture assumption could improve our results. Extending the derivative computation to materials could enable "adversarial materials". Incorporating derivatives of the visibility change and propagating gradient information to the shape skeleton could also create "adversarial poses". These extensions offer a set of tools for modeling real security scenarios. For instance, we can train a self-driving car classifier that can robustly recognize pedestrians under different poses, lightings, and cloth deformations.
ACKNOWLEDGMENTS
This work is funded in part by NSERC Discovery Grants (RGPIN–2017–05235 & RGPAS–2017–507938), Connaught Funds (NR2016–17), the Canada Research Chairs Program, the Fields Institute, and gifts by Adobe Systems Inc., Autodesk Inc., MESH Inc. We thank members of Dynamic Graphics Project for feedback and draft reviews; Wenzheng Chen for photography equipment; Colin Raffel and David Duvenaud for discussions and feedback.
Supplementary Material
A COMPARISON BETWEEN PERTURBATION SPACES
We extend our comparisons against pixel norm-balls methods (Figure 1) by visualizing the results and the generated perturbations (Figure 12). We hope this figure elucidates that our parametric perturbations are more realistic across several scales of perturbation.
B PHYSICALLY BASED RENDERING
Physically based rendering (PBR) seeks to model the flow of light, typically under the assumption that there exists a collection of light sources that generate light, a camera that receives this light, and a scene that modulates the flow of light between the light sources and the camera (Pharr et al., 2016). What follows is a brief discussion of the general task of rendering an image from a scene description and the approximations we take in order to make our renderer efficient yet differentiable.
Computer graphics has dedicated decades of effort to developing methods and technologies that enable PBR to synthesize photorealistic images under a large gamut of performance requirements. Much of this work is focused on approximations of the cherished rendering equation (Kajiya, 1986), which describes the propagation of light through a point in space. If we let $u_o$ be the output radiance, p be the point in space, $\omega_o$ be the output direction, $u_e$ be the emitted radiance, $u_i$ be the incoming radiance, $\omega_i$ be the incoming angle, and $f_r$ describe how light is reflected off the material at that given point in space, we have:

$$u_o(p, \omega_o) = u_e(p, \omega_o) + \int_{S^2} f_r(p, \omega_i, \omega_o)\, u_i(p, \omega_i)\,(\omega_i \cdot n)\, d\omega_i.$$
From now on we will ignore the emission term $u_e$ as it is not pertinent to our discussion. Furthermore, because the speed of light is substantially faster than the exposure time of our eyes, what we perceive is not the propagation of light at an instant, but the steady state solution to the rendering equation evaluated at every point in space. Explicitly computing this steady state is intractable for our applications; it will mainly serve as a reference on which to place the plethora of assumptions and simplifications we will make for the sake of tractability. Many of these methods focus on ignoring light with nominal effects on the final rendered image vis-à-vis assumptions on the way light travels. For instance, light is usually assumed to have nominal interaction with air; this is described as the assumption that the space between objects is a vacuum, which constrains the interactions of light to the objects in a scene. Another common assumption is that light does not penetrate objects, which makes it difficult to render objects like milk and human skin¹. This constrains the complexity of light propagation to the behavior of light bouncing off of object surfaces.
B.1 LOCAL ILLUMINATION
It is common to see assumptions that limit the number of bounces light is allowed. In our case we choose to assume that the steady state is sufficiently approximated by an extremely low number of iterations: one. This means that we model the lighting of a point in space by only the light sent to it directly from light sources. Working with such a strong simplification does, of course, lead to a few artifacts. For instance, light occluded by other objects is ignored, so shadows disappear, and auxiliary techniques are usually employed to evaluate shadows (Williams, 1978; Miller, 1994).
When this assumption is coupled with a camera we approach what is used in standard rasterization systems such as OPENGL (Shreiner & Group, 2009), which is what we use. These systems compute the illumination of a single pixel by determining the fragment of an object visible through that pixel and only computing the light that traverses directly from the light sources, through that fragment, to that pixel. The lighting of a fragment is therefore determined by a point and the surface normal at that point, so we write the fragment’s radiance as R(p,n, ωo) = uo(p, ωo):
$$R(p, n, \omega_o) = \int_{S^2} f_r(p, \omega_i, \omega_o)\, u_i(p, \omega_i)\,(\omega_i \cdot n)\, d\omega_i. \quad (6)$$
B.2 LAMBERTIAN MATERIAL
Figure 15: We consider the Lambertian material (left), where light is reflected uniformly in every direction, as opposed to non-Lambertian materials (right).
Each point on an object has a model approximating the transfer of incoming light to a given output direction $f_r$, which is usually called the material. On a single object the material parameters may vary quite a bit, and the correspondence between points and material parameters is usually called the texture map, which forms the texture of an object. There exists a wide gamut of material models, from mirror materials that transport light from a single input direction to a single output direction, to materials that reflect light evenly in all directions, to materials like brushed metal that reflect differently along different angles. For the sake of this document we only consider diffuse materials, also called Lambertian materials, where we assume that incoming light is reflected uniformly, i.e., $f_r$ is a constant function with respect to angle, which we denote $f_r(p, \omega_i, \omega_o) = \rho(p)$:

$$R(p, n) = \rho(p) \int_{\Omega(n)} u(p, \omega)\,(\omega \cdot n)\, d\omega. \quad (7)$$

¹This is why simple renderers make these sorts of objects look like plastic.

This function ρ is usually called the albedo, which can be perceived as the color on the surface of a diffuse material, and we reduce our integration domain to the upper hemisphere Ω(n) in order to model light not bouncing through objects. Furthermore, since the only ω and u are the incoming ones, we can now suppress the "incoming" in our notation and just use ω and u respectively.
B.3 ENVIRONMENT MAPPING
The illumination of static, distant objects such as the ground, the sky, or mountains does not change in any noticeable fashion when objects in a scene are moved around, so u can be written entirely in terms of ω, u(p, ω) = u(ω). Since their illumination is constant, it seems prudent to pre-compute or cache their contributions to the illumination of a scene. This is what is usually called environment mapping, and it fits into the rendering equation as a representation of the total lighting of a scene, i.e., the total incoming radiance $u_i$. Because the environment is distant, it is common to also assume that the position of the object receiving light from an environment map does not matter, which simplifies $u_i$ to be independent of position:
$$R(p, n) = \rho(p) \int_{\Omega(n)} u(\omega)\,(\omega \cdot n)\, d\omega. \quad (8)$$
B.4 SPHERICAL HARMONICS
Despite all of our simplifications, the inner integral is still a fairly generic function over S2. Many techniques for numerically integrating the rendering equation have emerged in the graphics community and we choose one which enables us to perform pre-computation and select a desired spectral accuracy: spherical harmonics. Spherical harmonics are a basis on S2 so, given a spherical harmonics expansion of the integrand, the evaluation of the above integral can be reduced to a weighted product of coefficients. This particular basis is chosen because it acts as a sort of Fourier basis for functions on the sphere and so the bases are each associated with a frequency, which leads to a convenient multi-resolution structure. In fact, the rendering of diffuse objects under distant lighting can be 99% approximated by just the first few spherical harmonics bases (Ramamoorthi & Hanrahan, 2001).
We will only need to note that the spherical harmonics bases $Y_l^m$ are denoted with subscript l for the frequency, and that there are 2l + 1 functions per frequency, denoted by superscripts m between −l and l inclusive. For further details please take a glance at Appendix C. If we approximate a function f in terms of spherical harmonics coefficients, $f \approx \sum_{l,m} f_{l,m} Y_l^m$, the integral can be precomputed as

$$\int_{S^2} f \approx \int_{S^2} \sum_{l,m} f_{l,m} Y_l^m = \sum_{l,m} f_{l,m} \int_{S^2} Y_l^m, \quad (9)$$
Thus we have defined a reduced rendering equation that can be efficiently evaluated using OPENGL while maintaining differentiability with respect to lighting and vertices. In the following appendix we will derive the derivatives necessary to implement our system.
C DIFFERENTIABLE RENDERER
Rendering computes an image of a 3D shape given lighting conditions and the prescribed material properties on the surface of the shape. Our differentiable renderer assumes Lambertian reflectance, distant light sources, local illumination, and piece-wise constant textures. We will discuss how to explicitly compute the derivatives used in the main body of this text. Here we give a detailed discussion about spherical harmonics and their advantages.
C.1 SPHERICAL HARMONICS
Spherical harmonics are usually defined in terms of the Legendre polynomials, which are a class of orthogonal polynomials defined by the recurrence relation
$$P_0 = 1 \quad (10)$$
$$P_1 = x \quad (11)$$
$$(l + 1)P_{l+1}(x) = (2l + 1)\,x\,P_l(x) - l\,P_{l-1}(x). \quad (12)$$
The associated Legendre polynomials are a generalization of the Legendre polynomials and can be fully defined by the relations
$$P_l^0 = P_l \quad (13)$$
$$(l - m + 1)P_{l+1}^m(x) = (2l + 1)\,x\,P_l^m(x) - (l + m)P_{l-1}^m(x) \quad (14)$$
$$2mx\,P_l^m(x) = -\sqrt{1 - x^2}\left[P_l^{m+1}(x) + (l + m)(l - m + 1)P_l^{m-1}(x)\right]. \quad (15)$$
Using the associated Legendre polynomials $P_l^m$ we can define the spherical harmonics basis as

$$Y_l^m(\theta, \phi) = K_l^m \begin{cases} (-1)^m \sqrt{2}\, P_l^{-m}(\cos\theta) \sin(-m\phi) & m < 0 \\ (-1)^m \sqrt{2}\, P_l^m(\cos\theta) \cos(m\phi) & m > 0 \\ P_l^0(\cos\theta) & m = 0 \end{cases} \quad (16)$$

$$\text{where } K_l^m = \sqrt{\frac{(2l + 1)(l - |m|)!}{4\pi (l + |m|)!}}. \quad (17)$$
We will use the fact that the associated Legendre polynomials correspond to the spherical harmonics bases that are rotationally symmetric along the z axis (m = 0).
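For reference, a small NumPy/SciPy sketch evaluating this real basis follows. It uses SciPy's `lpmv` for the associated Legendre polynomials; since Condon–Shortley phase conventions differ across references, the signs of odd-m terms may need adjusting against your lighting data.

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.special import lpmv  # associated Legendre P_l^m

def K(l, m):
    # Normalization constant of Equation 17.
    return sqrt((2 * l + 1) * factorial(l - abs(m))
                / (4 * pi * factorial(l + abs(m))))

def sh(l, m, theta, phi):
    # Real spherical harmonic Y_l^m of Equation 16.
    if m < 0:
        return (K(l, m) * (-1) ** m * np.sqrt(2)
                * lpmv(-m, l, np.cos(theta)) * np.sin(-m * phi))
    if m > 0:
        return (K(l, m) * (-1) ** m * np.sqrt(2)
                * lpmv(m, l, np.cos(theta)) * np.cos(m * phi))
    return K(l, 0) * lpmv(0, l, np.cos(theta))
```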
In order to incorporate spherical harmonics into Equation 8, we change the integral domain from the upper hemisphere Ω(n) back to S2 via a max operation
$$R(p, n) = \rho(p) \int_{\Omega(n)} u(\omega)(\omega \cdot n)\, d\omega \quad (18)$$
$$= \rho(p) \int_{S^2} u(\omega) \max(\omega \cdot n, 0)\, d\omega. \quad (19)$$
We see that the integral comprises two components: a lighting component u(ω) and a component that depends on the normal, max(ω · n, 0). The strategy is to pre-compute the two components by projecting them onto spherical harmonics, and to evaluate the integral via a dot product at runtime, as we will now derive.
C.2 LIGHTING IN SPHERICAL HARMONICS
Approximating the lighting component u(ω) in Equation 19 using spherical harmonics $Y_l^m$ up to band n can be written as

$$u(\omega) \approx \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} Y_l^m(\omega),$$

where $U_{l,m} \in \mathbb{R}$ are coefficients. By using the orthogonality of spherical harmonics we can evaluate these coefficients as an integral between u(ω) and $Y_l^m(\omega)$,

$$U_{l,m} = \langle u, Y_l^m \rangle_{S^2} = \int_{S^2} u(\omega) Y_l^m(\omega)\, d\omega,$$
which can be evaluated via quadrature.
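A simple midpoint-rule quadrature for these coefficients might look as follows; `u` is any callable environment light, `sh` is the basis sketch from Appendix C.1, and with n_bands = 7 this yields the 49 coefficients per channel used in Section 4.

```python
import numpy as np

def sh_lighting_coefficients(u, n_bands=7, n_theta=64, n_phi=128):
    # Midpoint-rule quadrature over the sphere: sum u * Y_l^m * dA.
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dA = np.sin(T) * (np.pi / n_theta) * (2 * np.pi / n_phi)  # area element
    return {(l, m): np.sum(u(T, P) * sh(l, m, T, P) * dA)
            for l in range(n_bands) for m in range(-l, l + 1)}
```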
C.3 CLAMPED COSINE IN SPHERICAL HARMONICS
So far, we have projected the lighting term u(ω) onto the spherical harmonics basis. To complete the evaluation of Equation 19 we also need to approximate the second component max(ω · n, 0) in spherical harmonics. This is the so-called clamped cosine function,

$$g(\omega, n) = \max(\omega \cdot n, 0) = \sum_{l=0}^{n} \sum_{m=-l}^{l} G_{l,m}(n) Y_l^m(\omega),$$

where $G_{l,m}(n) \in \mathbb{R}$ can be computed by projecting g(ω, n) onto $Y_l^m(\omega)$:

$$G_{l,m}(n) = \int_{S^2} \max(\omega \cdot n, 0)\, Y_l^m(\omega)\, d\omega.$$
Unfortunately, this formulation turns out to be tricky to compute. Instead, the common practice is to analytically compute the coefficients for the unit z direction, $\tilde{G}_{l,m} = G_{l,m}(n_z) = G_{l,m}([0, 0, 1]^\mathsf{T})$, and evaluate the coefficients for different normals $G_{l,m}(n)$ by rotating $\tilde{G}_{l,m}$. These $\tilde{G}_{l,m}$ can be computed analytically:

$$\tilde{G}_{l,m} = \int_{S^2} \max(\omega \cdot n_z, 0)\, Y_l^m(\omega)\, d\omega$$
$$= \int_0^{2\pi}\!\!\int_0^{\pi} \max([\sin\theta\cos\phi, \sin\theta\sin\phi, \cos\theta][0, 0, 1]^\mathsf{T}, 0)\, Y_l^m(\theta, \phi) \sin\theta\, d\theta\, d\phi$$
$$= \int_0^{2\pi}\!\!\int_0^{\pi} \max(\cos\theta, 0)\, Y_l^m(\theta, \phi) \sin\theta\, d\theta\, d\phi$$
$$= \int_0^{2\pi}\!\!\int_0^{\pi/2} \cos\theta\, Y_l^m(\theta, \phi) \sin\theta\, d\theta\, d\phi. \quad (20)$$
In fact, because max(ω · n_z, 0) is rotationally symmetric around the z-axis, its projection onto $Y_l^m(\omega)$ will have many zeros, except for the rotationally symmetric spherical harmonics $Y_l^0$. In other words, $\tilde{G}_{l,m}$ is non-zero only when m = 0. So we can simplify Equation 20 to

$$\tilde{G}_l = \tilde{G}_{l,0} = 2\pi \int_0^{\pi/2} \cos\theta\, Y_l^0(\theta) \sin\theta\, d\theta.$$
The evaluation of this integral can be found in Appendix A of (Basri & Jacobs, 2003). We provide it here as well:

$$\tilde{G}_l = \begin{cases} \sqrt{\pi}/2 & l = 0 \\ \sqrt{\pi/3} & l = 1 \\ (-1)^{l/2+1}\, \dfrac{(l-2)!\,\sqrt{(2l+1)\pi}}{2^l\,(l/2-1)!\,(l/2+1)!} & l \geq 2 \text{, even} \\ 0 & l \geq 2 \text{, odd} \end{cases}$$
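This closed form is straightforward to implement; a minimal sketch:

```python
from math import factorial, pi, sqrt

def G_tilde(l):
    # Closed-form SH coefficients of the clamped cosine (Basri & Jacobs).
    if l == 0:
        return sqrt(pi) / 2
    if l == 1:
        return sqrt(pi / 3)
    if l % 2 == 1:
        return 0.0  # odd bands >= 2 vanish
    k = l // 2
    return ((-1) ** (k + 1) * factorial(l - 2) * sqrt((2 * l + 1) * pi)
            / (2 ** l * factorial(k - 1) * factorial(k + 1)))
```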
The spherical harmonics coefficients Gl,m(n) of the clamped cosine function g(ω,n) can be computed by rotating G̃l (Sloan et al., 2005) using this formula
$$G_{l,m}(n) = \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(n). \quad (21)$$
So far we have projected the two terms in Equation 19 onto the spherical harmonics basis. Orthogonality of spherical harmonics makes the evaluation of this integral straightforward:

$$\int_{S^2} u(\omega) \max(\omega \cdot n, 0)\, d\omega = \int_{S^2} \Big[\sum_{l,m} U_{l,m} Y_l^m(\omega)\Big]\Big[\sum_{j,k} G_{j,k}(n) Y_j^k(\omega)\Big] d\omega$$
$$= \sum_{j,k,l,m} U_{l,m} G_{j,k}(n)\, \delta_j^l \delta_k^m \quad (22)$$
$$= \sum_{l,m} U_{l,m} G_{l,m}(n). \quad (23)$$
This, in conjunction with Equation 21, allows us to derive the rendering equation using spherical harmonics lighting for Lambertian objects:

$$R(p, n) = \rho(p) \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(n). \quad (24)$$
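A direct transcription of Equation 24, with the normal given by its polar angles and the `sh` and `G_tilde` sketches from above:

```python
import numpy as np

def shade(albedo, theta_n, phi_n, U, n_bands=7):
    # Diffuse radiance at a point with the given albedo whose unit normal
    # has polar angles (theta_n, phi_n); U maps (l, m) to lighting coeffs.
    total = 0.0
    for l in range(n_bands):
        for m in range(-l, l + 1):
            total += (U[(l, m)] * np.sqrt(4 * np.pi / (2 * l + 1))
                      * G_tilde(l) * sh(l, m, theta_n, phi_n))
    return albedo * total
```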
So far we have only considered the shading of a specific point p with surface normal n. If we consider the rendered image I given a shape V, lighting U, and camera parameters η, the image I is the evaluation of the rendering equation R at each point in V visible through each pixel in the image. This pixel-to-point mapping is determined by η. Therefore, we can write I as

$$I(V, U, \eta) = \rho(V, \eta) \underbrace{\sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V))}_{F(V, U)}, \quad (25)$$

where N(V) is the surface normal. We abuse notation and use ρ(V, η) to represent the texture of V mapped to the image space through η.
C.4 LIGHTING AND TEXTURE DERIVATIVES
For our applications we must differentiate Equation 25 with respect to lighting and material parameters. The derivative with respect to the lighting coefficients U can be obtained by
$$\frac{\partial I}{\partial U} = \frac{\partial \rho}{\partial U} F + \rho \frac{\partial F}{\partial U} \quad (26)$$
$$= 0 + \rho \frac{\partial F}{\partial U}. \quad (27)$$

This is the Jacobian matrix that maps from spherical harmonics coefficients to pixels. Each entry $\partial F / \partial U_{l,m}$ can then be computed as

$$\frac{\partial F}{\partial U_{l,m}} = \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V)). \quad (28)$$
The derivative with respect to texture is defined by
$$\frac{\partial I}{\partial \rho} = \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V)). \quad (29)$$
Note that we assume texture variations are piece-wise constant with respect to our triangle mesh discretization.
D DIFFERENTIATING SKYLIGHT PARAMETERS
To model possible outdoor daylight conditions, we use the analytical Preetham skylight model (Preetham et al., 1999). This model is calibrated by atmospheric data and parameterized by two intuitive parameters: turbidity τ, which describes the cloudiness of the atmosphere, and two polar angles θs ∈ [0, π/2], φs ∈ [0, 2π], which encode the direction of the sun. Note that θs, φs are not the polar angles θ, φ used for representing the incoming light direction ω in u(ω). The spherical harmonics representation of the Preetham skylight is presented in (Habel et al., 2008) as

$$u(\omega) = \sum_{l=0}^{6} \sum_{m=-l}^{l} U_{l,m}(\theta_s, \phi_s, \tau)\, Y_l^m(\omega).$$
This is derived by first performing a non-linear least squares fit to write $U_{l,m}$ as a polynomial of θs and τ, which lets them solve for $\tilde{U}_{l,m}(\theta_s, \tau) = U_{l,m}(\theta_s, 0, \tau)$:

$$\tilde{U}_{l,m}(\theta_s, \tau) = \sum_{i=0}^{13} \sum_{j=0}^{7} (p_{l,m})_{i,j}\, \theta_s^i \tau^j,$$

where $(p_{l,m})_{i,j}$ are scalar coefficients; then $U_{l,m}(\theta_s, \phi_s, \tau)$ can be computed by applying a spherical harmonics rotation with φs using

$$U_{l,m}(\theta_s, \phi_s, \tau) = \tilde{U}_{l,m}(\theta_s, \tau) \cos(m\phi_s) + \tilde{U}_{l,-m}(\theta_s, \tau) \sin(m\phi_s).$$
We refer the reader to (Preetham et al., 1999) for more detail. For the purposes of this article we just need the above form to compute the derivatives.
D.1 DERIVATIVES
The derivatives of the lighting with respect to the skylight parameters (θs, φs, τ) are

$$\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \phi_s} = -m\tilde{U}_{l,m}(\theta_s, \tau)\sin(m\phi_s) + m\tilde{U}_{l,-m}(\theta_s, \tau)\cos(m\phi_s) \quad (30)$$

$$\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \theta_s} = \frac{\partial \tilde{U}_{l,m}(\theta_s, \tau)}{\partial \theta_s}\cos(m\phi_s) + \frac{\partial \tilde{U}_{l,-m}(\theta_s, \tau)}{\partial \theta_s}\sin(m\phi_s) \quad (31)$$

$$= \sum_{i,j} i\,\theta_s^{i-1}\tau^j (p_{l,m})_{i,j}\cos(m\phi_s) + \sum_{i,j} i\,\theta_s^{i-1}\tau^j (p_{l,-m})_{i,j}\sin(m\phi_s) \quad (32)$$

$$\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \tau} = \sum_{i,j} j\,\theta_s^i\tau^{j-1} (p_{l,m})_{i,j}\cos(m\phi_s) + \sum_{i,j} j\,\theta_s^i\tau^{j-1} (p_{l,-m})_{i,j}\sin(m\phi_s) \quad (33)$$
E DERIVATIVES OF SURFACE NORMALS
Taking the derivative of the rendered image I with respect to surface normals N is an essential task for computing the derivative of I with respect to the geometry V. Specifically, the derivative of the rendering equation (Equation 25) with respect to V is
$$\frac{\partial I}{\partial V} = \frac{\partial \rho}{\partial V} F + \rho \frac{\partial F}{\partial V} \quad (34)$$
$$= \frac{\partial \rho}{\partial V} F + \rho \frac{\partial F}{\partial N}\frac{\partial N}{\partial V}. \quad (35)$$
We assume the texture variations are piece-wise constant with respect to our triangle mesh discretization and omit the first term ∂ρ/∂V, as its magnitude is zero. The computation of ∂N/∂V is provided in Section 3.2. Computing ∂F/∂N_i on face i gives

$$\frac{\partial F}{\partial N_i} = \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, \frac{\partial Y_l^m}{\partial N_i}, \quad (36)$$

where $\partial Y_l^m / \partial N_i$ is the derivative of the spherical harmonics with respect to the face normal $N_i$.
To begin this derivation, recall the relationship between a unit normal vector n = (n_x, n_y, n_z) and its corresponding polar angles θ, φ:

$$\theta = \cos^{-1}\left(\frac{n_z}{\sqrt{n_x^2 + n_y^2 + n_z^2}}\right), \qquad \phi = \tan^{-1}\left(\frac{n_y}{n_x}\right);$$
we can compute the derivative of spherical harmonics with respect to the normal vector through
$$\frac{\partial Y_l^m(\theta, \phi)}{\partial n} = K_l^m \begin{cases} (-1)^m \sqrt{2}\left[\dfrac{\partial P_l^{-m}(\cos\theta)}{\partial \theta}\dfrac{\partial \theta}{\partial n}\sin(-m\phi) - m P_l^{-m}(\cos\theta)\cos(-m\phi)\dfrac{\partial \phi}{\partial n}\right] & m < 0 \\[2ex] (-1)^m \sqrt{2}\left[\dfrac{\partial P_l^m(\cos\theta)}{\partial \theta}\dfrac{\partial \theta}{\partial n}\cos(m\phi) - m P_l^m(\cos\theta)\sin(m\phi)\dfrac{\partial \phi}{\partial n}\right] & m > 0 \\[2ex] \dfrac{\partial P_l^0(\cos\theta)}{\partial \theta}\dfrac{\partial \theta}{\partial n} & m = 0 \end{cases} \quad (37)$$
Note that the derivative of the associated Legendre polynomials $P_l^m(\cos\theta)$ can be computed by applying the recurrence formula of Dunster (2010):

$$\frac{\partial P_l^m(\cos\theta)}{\partial \theta} = \frac{-\cos\theta\,(l+1)P_l^m(\cos\theta) + (l - m + 1)P_{l+1}^m(\cos\theta)}{\cos^2\theta - 1} \times (-\sin\theta)$$
$$= \frac{-\cos\theta\,(l+1)P_l^m(\cos\theta) + (l - m + 1)P_{l+1}^m(\cos\theta)}{\sin\theta}. \quad (38)$$
Thus the derivatives of the polar angles (θ, φ) with respect to the surface normal n = [n_x, n_y, n_z] are

$$\frac{\partial \theta}{\partial n} = \left[\frac{\partial \theta}{\partial n_x}, \frac{\partial \theta}{\partial n_y}, \frac{\partial \theta}{\partial n_z}\right] = \frac{\left[n_x n_z,\; n_y n_z,\; -(n_x^2 + n_y^2)\right]}{(n_x^2 + n_y^2 + n_z^2)\sqrt{n_x^2 + n_y^2}}, \quad (39)$$

$$\frac{\partial \phi}{\partial n} = \left[\frac{\partial \phi}{\partial n_x}, \frac{\partial \phi}{\partial n_y}, \frac{\partial \phi}{\partial n_z}\right] = \left[\frac{-n_y}{n_x^2 + n_y^2}, \frac{n_x}{n_x^2 + n_y^2}, 0\right]. \quad (40)$$
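These two gradients are cheap to evaluate; a NumPy sketch follows, assuming a normal not parallel to the z-axis (where the factor √(n_x² + n_y²) would vanish).

```python
import numpy as np

def polar_angle_gradients(n):
    # Gradients of Equations 39 and 40 for a unit normal n = [nx, ny, nz].
    nx, ny, nz = n
    r2 = nx * nx + ny * ny + nz * nz
    rxy = np.sqrt(nx * nx + ny * ny)
    d_theta = np.array([nx * nz, ny * nz, -(nx * nx + ny * ny)]) / (r2 * rxy)
    d_phi = np.array([-ny, nx, 0.0]) / (nx * nx + ny * ny)
    return d_theta, d_phi
```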
In summary, the results of Equations 37, 38, 39, and 40 tell us how to compute $\partial Y_l^m / \partial N_i$. The derivative of pixel j with respect to vertex p, which belongs to face i, can then be computed as

$$\frac{\partial I_j}{\partial V_p} \approx \rho_j \frac{\partial F}{\partial N_i}\frac{\partial N_i}{\partial V_p} = \rho_j \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\frac{4\pi}{2l + 1}}\, \tilde{G}_l\, \frac{\partial Y_l^m(\theta, \phi)}{\partial N_i}\frac{\partial N_i}{\partial V_p}. \quad (41)$$
F ADVERSARIAL TRAINING IMPLEMENTATION DETAIL
Our adversarial training is based on the basic idea of injecting adversarial examples into the training set at each step and continuously updating the adversaries according to the current model parameters (Goodfellow et al., 2015; Kurakin et al., 2017). Our experiments inject 100 adversarial lighting examples into the CIFAR-100 data (≈ 0.17% of the training set) and keep updating these adversaries at each epoch.
We compute the adversarial lighting examples using the orange models collected from cgtrader.com and turbosquid.com. We use five gray-scale background colors with intensities 0.0, 0.25, 0.5, 0.75, 1.0 to mimic images in CIFAR-100, which contains many pure-color backgrounds. Our orthographic cameras are placed at polar angle θ = π/3 with 10 uniformly sampled azimuthal angles ranging from φ = 0 to 2π. Our initial spherical harmonics lighting is the same as in our other experiments, using the real-world lighting data provided in (Ramamoorthi & Hanrahan, 2001). Our step size for computing adversaries is 0.05 along the direction of the lighting gradients. We run our adversarial lighting iterations until we fool the network or reach a maximum of 30 iterations, to avoid too-extreme lighting conditions, such as turning the lights off.
Our random lighting examples are constructed at each epoch by randomly perturbing the lighting coefficients in the range -0.5 to 0.5.
When training the 16-layer WideResNet (Zagoruyko & Komodakis, 2016) with widen factor 4, we use batch size 128, learning rate 0.125, dropout rate 0.3, and the standard cross-entropy loss. We implement the training using PYTORCH (Paszke et al., 2017) with the SGD optimizer, Nesterov momentum 0.9, and weight decay 5e-4. We train the model for 150 epochs and use the one with the best accuracy on the validation set. Figure 16 shows examples of our adversarial lights at different training stages. In the early stages, the model is not robust to different lighting conditions, thus small lighting perturbations are sufficient to fool the model. In the late stages, the network becomes more robust to different lightings, so dramatic changes are required to fool the model, and sometimes we even fail to fool it within 30 iterations.
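For reference, the optimizer configuration above corresponds to the following PyTorch call (a sketch; `model` is the WideResNet just described).

```python
import torch

# SGD with Nesterov momentum matching the hyper-parameters in this appendix.
optimizer = torch.optim.SGD(model.parameters(), lr=0.125, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)
```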
G EVALUATE RENDERING QUALITY
We evaluated our rendering quality by whether our rendered images are recognizable by models trained on real photographs. Although large 3D shape datasets, such as ShapeNet (Chang et al., 2015), are available, they do not have geometries or textures at the resolutions necessary to create realistic renderings. We collected 75 high-quality textured 3D shapes from cgtrader.com and turbosquid.com to evaluate our rendering quality. We augmented the shapes by changing the field of view, backgrounds, and viewing directions, then kept the configurations that were correctly classified by a pre-trained ResNet-101 on ImageNet. Specifically, we place the centroid, calculated as the weighted average of the mesh vertices where the weights are the vertex areas, at the origin and normalize shapes to the range -1 to 1; the field of view is chosen to be 2 and 3 in the same units as the normalized shape; background images include plain colors and real photos, which have small influence on model predictions; viewing directions are chosen to be 60 degrees zenith with 16 views uniformly sampled from 0 to 2π in azimuthal angle. In Figure 17, we show the histogram of model confidence on the correct labels over 10,000 correctly classified rendered images from our differentiable renderer. The confidence is computed using the softmax function, and the results show that our rendering quality is faithful enough to be recognized by models trained on natural images. | 1. What is the focus of the paper regarding constructing adversarial examples?
2. What are the strengths of the proposed method, particularly in terms of computational efficiency and practicality?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the significance of the paper's contribution in the context of related research?
5. Are there any concerns or suggestions regarding the paper's discussion and comparisons with previous studies? | Review | Review
The paper demonstrates a method for constructing adversarial examples by modifications or perturbations to physical parameters in the scene itself---specifically scene lighting and object geometry---such that images taken of that scene are able to fool a classifier. It achieves this through a novel differentiable rendering engine, which allows the proposed method to back-propagate gradients to the desired physical parameters. Also interesting in the paper is the use of spherical harmonics, which restrict the algorithm to plausible lighting. The method is computationally efficient and appears to work well, generating plausible scenes that fool a classifier when imaged from different viewpoints.
Overall, I have a positive view of the paper. However, there are certain issues below that the authors should address in the rebuttal for me to remain with my score of accept (especially the first one):
- The paper has no discussion of or comparisons to the work of Athalye and Sutskever, 2017 and Zeng et al., 2017, except for a brief mention in Sec 2 that these methods also use differentiable renderers for adversarial attacks. These works address the same problem as this paper---computing physically plausible adversarial attacks---and by very similar means---back-propagation through a rendering engine. Therefore it is critical that the paper clarifies its novelty over these methods, and if appropriate, include comparisons.
- While the goal of finding physically plausible adversarial examples is indeed important, I disagree with the claim that image-level attacks are "primarily tools of basic research, and not models of real-world security scenarios". In many applications, an attacker may have access to and be able to modify images after they've been captured and prior to sending them through a classifier (e.g., those attempting to detect transmission of spam or sensitive images). I believe the paper can make its case about the importance of physical adversarial perturbations without dismissing image-level perturbations as entirely impractical.
- The Athalye 18 reference noted in Fig 1 is missing (the references section includes the reference to Athalye and Sutskever '17).
===Post-rebuttal
Thanks for addressing my questions. With the new comparisons and discussions wrt the most relevant methods, I believe the contributions of the paper are clearer. I'm revising my score from 6 to 7. |
ICLR | Title
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Abstract
Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility, since perturbations in the pixel space do not correspond to the underlying real-world phenomena of image formation that lead to them, and it has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric norm-balls, by directly perturbing physical parameters that underlie image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
1 INTRODUCTION
Research in adversarial examples continues to contribute to the development of robust (semi-)supervised learning (Miyato et al., 2018), data augmentation (Goodfellow et al., 2015; Sun et al., 2018), and machine learning understanding (Kanbak et al., 2018). One important caveat of the approach pursued by much of the literature in adversarial machine learning, as discussed recently (Goodfellow,
2018; Gilmer et al., 2018), is the reliance on overly simplified attack metrics: namely, the use of pixel value differences between an adversary and an input image, also referred to as the pixel norm-balls.
The pixel norm-balls game considers pixel perturbations of norm-constrained magnitude (Goodfellow et al., 2015), and is used to develop adversarial attackers, defenders and training strategies. The pixel norm-ball game is attractive from a research perspective due to its simplicity and well-posedness: no knowledge of image formation is required and any arbitrary pixel perturbation remains eligible (so long as it is “small”, in the perceptual sense). Although the pixel norm-ball is useful for research purposes, it only captures limited real-world security scenarios.
Despite the ability to devise effective adversarial methods through the direct employment of optimizations using the pixel norm-balls measure, the pixel manipulations they promote are divorced from the types of variations present in the real world, limiting their usefulness “in the wild”. Moreover, this methodology leads to defenders that are only effective when defending against unrealistic images/attacks, not generalizing outside of the space constrained by pixel norm-balls. In order to consider conditions that enable adversarial attacks in the real world, we advocate for a new measurement norm that is rooted in the physical processes that underly realistic image synthesis, moving away from overly simplified metrics, e.g., pixel norm-balls.
Our proposed solution – parametric norm-balls – rely on perturbations of physical parameters of a synthetic image formation model, instead of pixel color perturbations (Figure 2). To achieve this, we use a physically-based differentiable renderer which allows us to perturb the underlying parameters of the image formation process. Since these parameters indirectly control pixel colors, perturbations in this parametric space implicitly span the space of natural images. We will demonstrate two advantages that fall from considering perturbations in this parametric space: (1) they enable adversarial approaches that more readily apply to real-world applications, and (2) they permit the use of much more significant perturbations (compared to pixel norms), without invalidating the realism of the resulting image (Figure 1). We validate that parametric norm-balls game playing is critical for a variety of important adversarial tasks, such as building defenders robust to perturbations that can occur naturally in the real world.
We perform perturbations in the underlying image formation parameter space using a novel physically-based differentiable renderer. Our renderer analytically computes the derivatives of pixel color with respect to these physical parameters, allowing us to extend traditional pixel norm-balls to physically-valid parametric norm-balls. Notably, we demonstrate perturbations on an environment’s lighting and on the shape of the 3D geometry it shades. Our differentiable renderer achieves state-of-the-art performance in speed and scalability (Section 3) and is fast enough for rendered adversarial data augmentation (Section 5): training augmented with adversarial images generated with a renderer.
Existing differentiable renderers are slow and do not scale to the volume of high-quality, high-resolution images needed to make adversarial data augmentation tractable (Section 2). Given our analytically-differentiable renderer (Section 3), we are able to demonstrate the efficacy of parametric space perturbations for generating adversarial examples. These adversaries are based on a substantially different phenomenology than their pixel norm-balls counterparts (Section 4). Ours is among the first steps towards the deployment of rendered adversarial data augmentation in real-world applications: we train a classifier with computer-generated adversarial images, evaluating the performance of the training against real photographs (i.e., captured using cameras; Section 5). We test on real photos to show that parametric adversarial data augmentation increases the classifier’s robustness to “deformations” that happen in the real world. Our evaluation differs from the majority of existing literature, which evaluates against computer-generated adversarial images, since our parametric space perturbation is no longer a wholly idealized representation of the image formation model but, instead, is modeled against a theory of realistic image generation.
2 RELATED WORK
Our work is built upon the fact that simulated or rendered images can participate in computer vision and machine learning on real-world tasks. Many previous works use rendered (simulated) data to train deep networks, and those networks can be deployed to the real world or even outperform state-of-the-art networks trained on real photos (Movshovitz-Attias et al., 2016; Chen et al., 2016; Varol et al., 2017; Su et al., 2015; Johnson-Roberson et al., 2017; Veeravasarapu et al., 2017b; Sadeghi & Levine, 2016; James & Johns, 2016). For instance, Veeravasarapu et al. (2017a) show that training with 10% real-world data and 90% simulation data can reach the level of training with full real data. Tremblay et al. (2018) even demonstrate that a network trained on synthetic data yields better performance than using real data alone. As rendering can cheaply provide a theoretically infinite supply of annotated input data, it can generate data which is orders of magnitude larger than existing datasets. This emerging trend of training on synthetic data provides an exciting direction for future machine learning development. Our work complements these works. We demonstrate that rendering can be used to study the potential danger lurking in misclassifications due to subtle changes to geometry and lighting. This suggests a future direction of combining our method with synthetic data generation pipelines to perform physically based adversarial training on synthetic data.
Adversarial Examples Szegedy et al. (2014) expose the vulnerability of modern deep neural nets using purposefully-manipulated images with human-imperceptible misclassification-inducing noise. Goodfellow et al. (2015) introduce a fast method to harness adversarial examples, leading to the idea of pixel norm-balls for evaluating adversarial attackers/defenders. Since then, many significant developments in adversarial techniques have been proposed (Akhtar & Mian, 2018; Szegedy et al., 2014; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018; Papernot et al., 2017; Moosavi-Dezfooli et al., 2017; Chen et al., 2017; Su et al., 2017). Our work extends this progression in constructing adversarial examples, a problem that lies at the foundation of adversarial machine learning. Kurakin et al. (2016) study the transferability of attacks to the physical world by printing then photographing adversarial images. Athalye et al. (2017) and Eykholt et al. (2018) propose extensions to non-planar (yet, still fixed) geometry and multiple viewing angles. These works still rely fundamentally on direct pixel or texture manipulation on physical objects. Since these methods assume independence between pixels in the image or texture space, they remain variants of pixel norm-balls. This leads to unrealistic attack images that cannot model real-world scenarios (Goodfellow, 2018; Hendrycks & Dietterich, 2018; Gilmer et al., 2018). Zeng et al. (2017) generate adversarial examples by altering physical parameters using a rendering network (Liu et al., 2017) trained to approximate the physics of realistic image formation. This data-driven approach leads to an image formation model biased towards the rendering style present in the training data. This method also relies on differentiation through the rendering network in order to compute adversaries, which requires high-quality training on a large amount of data. Even with perfect training, their reported performance still requires 12 minutes on average to find new adversaries, whereas ours takes only a few seconds (Section 4.1). Our approach is based on a differentiable physically-based renderer that directly (and, so, more convincingly) models the image formation process, allowing us to alter physical parameters – like geometry and lighting – and compute
derivatives (and adversarial examples) much more rapidly compared to Zeng et al. (2017). We summarize the differences between our approach and previous non-image adversarial attacks in Table 1.
Differentiable Renderer Applying parametric norm-balls requires that we differentiate the image formation model with respect to the physical parameters of the image formation model. Modern realistic computer graphics models do not expose facilities to directly accommodate the computation of derivatives or automatic differentiation of pixel colors with respect to geometry and lighting variables. A physically-based
differentiable renderer is fundamental to computing derivatives of pixel colors with respect to scene parameters and can benefit machine learning in several ways, including promoting the development of novel network architectures (Liu et al., 2017), computing adversarial examples (Athalye et al., 2017; Zeng et al., 2017), and generalizing neural style transfer to a 3D context (Kato et al., 2018; Liu et al., 2018). Recently, various techniques have been proposed to obtain these derivatives: Wu et al. (2017); Liu et al. (2017); Eslami et al. (2016) use neural networks to learn the image formation process given a large amount of input/output pairs. This introduces unnecessary bias in favor of the training data distribution, leading to inaccurate derivatives due to imperfect learning. Kato et al. (2018) propose a differentiable renderer based on a simplified image formation model and an underlying linear approximation. Their approach requires no training and is unbiased, but their approximation of the image formation and the derivatives introduces more errors. Loper & Black (2014); Genova et al. (2018) use automatic differentiation to build fully differentiable renderers. These renderers, however, are expensive to evaluate, requiring orders of magnitude more computation and much larger memory footprints compared to our method.
Our novel differentiable renderer overcomes these limitations by efficiently computing analytical derivatives of a physically-based image formation model. The key idea is that the non-differentiable visibility change can be ignored when considering infinitesimal perturbations. We model image variations by changing geometry and realistic lighting conditions in an analytically differentiable manner, relying on an accurate model of diffuse image formation that extends spherical harmonics-based shading methods (Appendix C). Our analytic derivatives are efficient to evaluate, have scalable memory consumption, are unbiased, and are accurate by construction (Table 2). Our renderer explicitly models the physics of the image formation processes, and so the images it generates are realistic enough to elicit correct classifications from networks trained on real-world photographs.
3 ADVERSARIAL ATTACKS IN PARAMETRIC SPACES
Adversarial attacks based on pixel norm-balls typically generate adversarial examples by defining a cost function over the space of images C : I → R that enforces some intuition of what failure should look like, typically using variants of gradient descent where the gradient ∂C/∂I is accessible by differentiating through networks (Szegedy et al., 2014; Goodfellow et al., 2015; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018).
The choices for C include increasing the cross-entropy loss of the correct class (Goodfellow et al., 2015), decreasing the cross-entropy loss of the least-likely class (Kurakin et al., 2017), using a combination of cross-entropies (Moosavi Dezfooli et al., 2016), and more (Szegedy et al., 2014; Rozsa et al., 2016; Dong et al., 2018; Tramèr et al., 2017). We combine cross-entropies to provide flexibility for choosing untargeted and targeted attacks by specifying a different set of labels:
C\big(I(U, V)\big) = -\mathrm{CrossEntropy}\big(f(I(U, V)),\, L_d\big) + \mathrm{CrossEntropy}\big(f(I(U, V)),\, L_i\big),  (1)
where I is the image, f(I) is the output of the classifier, and L_d, L_i are labels whose predicted confidences a user wants to decrease and increase, respectively. In our experiments, L_d is the correct class and L_i is either ignored or chosen according to user preference. Our adversarial attacks in the parametric space consider an image I(U, V) as a function of the physical parameters of the image formation model, including the lighting U and the geometry V. Adversarial examples constructed by perturbing physical parameters can then be computed via the chain rule
\frac{\partial C}{\partial U} = \frac{\partial C}{\partial I}\,\frac{\partial I}{\partial U}, \qquad \frac{\partial C}{\partial V} = \frac{\partial C}{\partial I}\,\frac{\partial I}{\partial V},  (2)
where ∂I/∂U, ∂I/∂V are derivatives with respect to the physical parameters, which we evaluate using our physically based differentiable renderer. In our experiments, we use gradient descent to find parametric adversarial examples, with the gradients ∂C/∂U and ∂C/∂V of Equation (2) giving the descent directions.
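To make the attack loop concrete, the following is a minimal PyTorch sketch of Equations (1) and (2). Here `render` stands in for the differentiable renderer of Section 3.1, and the step size, iteration budget, and function names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def attack_cost(logits, label_decrease, label_increase=None):
    # Eq. (1): raise the loss of L_d and (optionally) lower the loss of L_i.
    cost = -F.cross_entropy(logits, label_decrease)
    if label_increase is not None:
        cost = cost + F.cross_entropy(logits, label_increase)
    return cost

def parametric_attack(classifier, render, lighting, vertices, label,
                      step=0.05, max_iters=30):
    lighting = lighting.clone().requires_grad_(True)
    vertices = vertices.clone().requires_grad_(True)
    for _ in range(max_iters):
        image = render(lighting, vertices)            # I(U, V)
        cost = attack_cost(classifier(image), label)
        cost.backward()                               # chain rule of Eq. (2)
        with torch.no_grad():                         # descend on U and V
            lighting -= step * lighting.grad
            vertices -= step * vertices.grad
        lighting.grad.zero_()
        vertices.grad.zero_()
        pred = classifier(render(lighting, vertices)).argmax(dim=1)
        if (pred != label).all():                     # misclassified: done
            break
    return lighting.detach(), vertices.detach()
```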
3.1 PHYSICALLY BASED DIFFERENTIABLE RENDERER
Rendering is the process of generating a 2D image from a 3D scene by simulating the physics of light. Light sources in the scene emit photons that then interact with objects in the scene. At each interaction, photons are either reflected, transmitted or absorbed, changing trajectory and repeating until arriving at a sensor such as a camera. A physically based renderer models the interactions mathematically (Pharr et al., 2016), and our task is to analytically differentiate the physical process.
We develop our differentiable renderer with common assumptions in real-time rendering (Akenine-Moller et al., 2008) – diffuse material, local illumination, and distant light sources. Our diffuse material assumption considers materials which reflect light uniformly in all directions, equivalent to considering non-specular objects. We assume that variations in the material (texture) are piece-wise constant with respect to our triangle mesh discretization. The local illumination assumption only considers light that bounces directly from the light source to the camera. Lastly, we assume light sources are far away from the scene, allowing us to represent lighting with one spherical function. For a more detailed rationale of our assumptions, we refer readers to Appendix B.
These assumptions simplify the complicated integral required for rendering (Kajiya, 1986) and allow us to represent lighting in terms of spherical harmonics, an orthonormal basis for spherical functions analogous to Fourier transformation. Thus, we can analytically differentiate the rendering equation to acquire derivatives with respect to lighting, geometry, and texture (derivations found in Appendix C).
Using analytical derivatives avoids pitfalls of previous differentiable renderers (see Section 2) and makes our differentiable renderer orders of magnitude faster than the previous fully differentiable renderer OPENDR (Loper & Black, 2014) (see Figure 3). Our approach is scalable to handle problems with more than 100,000 variables, while OPENDR runs out of memory for problems with more than 3,500 variables.
3.2 ADVERSARIAL LIGHTING AND GEOMETRY
Adversarial lighting denotes adversarial examples generated by changing the spherical harmonics lighting coefficients U (Green, 2003). As our differentiable renderer allows us to compute ∂I/∂U analytically (derivation is provided in Appendix C.4), we can simply apply the chain rule:
U \leftarrow U - \gamma\, \frac{\partial C}{\partial I}\,\frac{\partial I}{\partial U},  (3)
where ∂C/∂I is the derivative of the cost function with respect to pixel colors, which can be obtained by differentiating through the network. Spherical harmonics act as an implicit constraint to prevent unrealistic lighting, because natural lighting environments in everyday life are dominated by low-frequency signals. For instance, the rendering of diffuse materials can be approximated with only 1% pixel intensity error by the first 2 orders of spherical harmonics (Ramamoorthi & Hanrahan, 2001). As computers can only represent a finite number of coefficients, using spherical harmonics for lighting implicitly filters out high-frequency, unrealistic lightings. Thus, perturbing the parametric space of spherical harmonics lighting gives us more realistic perturbations compared to image-pixel perturbations (Figure 1).
Adversarial geometry is an adversarial example computed by changing the positions of the shape’s surface. The shape is encoded as a triangle mesh with |V| vertices and |F| faces; surface points are vertex positions V ∈ R^{|V|×3}, which determine per-face normals N ∈ R^{|F|×3}, which in turn determine the shading of the surface. We can compute adversarial shapes by applying the chain rule:
V \leftarrow V - \gamma\, \frac{\partial C}{\partial I}\,\frac{\partial I}{\partial N}\,\frac{\partial N}{\partial V},  (4)
where \partial I/\partial N is computed via a derivation in Appendix E. Each triangle only has one normal on its face, making \partial N/\partial V computable analytically. In particular, the 3 \times 3 Jacobian of the unit face normal vector n_i \in \mathbb{R}^3 of the i-th face of the triangle mesh V with respect to one of its corner vertices v_j \in \mathbb{R}^3 is

\frac{\partial n_i}{\partial v_j} = \frac{h_{ij}\, n_i^{\mathsf{T}}}{\|h_{ij}\|^2},

where h_{ij} \in \mathbb{R}^3 is the height vector: the shortest vector to the corner v_j from the opposite edge.
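As a small illustration of this closed form, the NumPy sketch below builds the height vector and the Jacobian for one triangle; the corner/edge indexing convention (corner v_j opposite the edge v_a-v_b) is our own assumption, since the text only states the formula.

```python
import numpy as np

def face_normal_jacobian(vj, va, vb):
    """Return the unit face normal n and the 3x3 Jacobian dn/dvj (Sec. 3.2)."""
    n = np.cross(va - vj, vb - vj)
    n = n / np.linalg.norm(n)
    e = (vb - va) / np.linalg.norm(vb - va)   # direction of the opposite edge
    d = vj - va
    h = d - np.dot(d, e) * e                  # height vector: edge -> corner vj
    return n, np.outer(h, n) / np.dot(h, h)   # h n^T / ||h||^2
```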
4 RESULTS AND EVALUATION
We have described how to compute adversarial examples by parametric perturbations, including lighting and geometry. In this section, we show that adversarial examples exist in the parametric spaces, then we analyze the characteristics of those adversaries and parametric norm-balls.
We use 49 × 3 spherical harmonics coefficients to represent environment lighting, with an initial real-world lighting condition (Ramamoorthi & Hanrahan, 2001). Camera parameters and the background images are empirically chosen to have correct initial classifications and avoid synonym sets. In Figure 4 we show that a single-view adversarial lighting attack can fool the classifier (a ResNet-101 pre-trained on ImageNet (He et al., 2016)). Figure 5 shows multi-view adversarial lighting, which optimizes the summation of the cost functions for each view; thus the gradient is computed as the summation over all camera views:
U \leftarrow U - \gamma \sum_{i \in \mathrm{cameras}} \frac{\partial C}{\partial I_i}\,\frac{\partial I_i}{\partial U}.  (5)
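A hedged one-step sketch of this multi-view update follows; `render_view` and the camera abstraction are placeholders, not the paper's code.

```python
import torch
import torch.nn.functional as F

def multiview_lighting_step(classifier, render_view, lighting, label,
                            cameras, step=0.05):
    """One descent step of Eq. (5): costs are summed over all camera views."""
    lighting = lighting.clone().requires_grad_(True)
    total_cost = sum(-F.cross_entropy(classifier(render_view(lighting, cam)),
                                      label)
                     for cam in cameras)
    total_cost.backward()
    return (lighting - step * lighting.grad).detach()
```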
If one is interested in a more specific subspace, such as outdoor lighting conditions governed by sunlight and weather, our adversarial lighting can adapt to it. In Figure 7, we compute adversarial lights over the space of skylights by applying one more chain rule to the Preetham skylight parameters (Preetham et al., 1999; Habel et al., 2008). Details about taking these derivatives are provided in Appendix D. Although adversarial skylights exist, their low degrees of freedom (only three parameters) make it more difficult to find adversaries.
In Figure 8 and Figure 9 we show the existence of adversarial geometry in both single-view and multi-view cases. Note that we upsample meshes to have >10K vertices as a preprocessing step to increase the degrees of freedom available for perturbations. Multi-view adversarial geometry enables us to perturb the same 3D shape from different viewing directions, which enables us to construct a deep optical illusion: the same 3D shape is classified differently from different angles. To create the optical illusion in Figure 6, we only need to specify the L_i in Equation (1) to be a dog and a cat for the two different views.
4.1 PROPERTIES OF PARAMETRIC NORM-BALLS AND ADVERSARIES
To further understand parametric adversaries, we analyze how parametric adversarial examples generalize to black-box models. In Table 3, we test 5,000 ResNet parametric adversaries on unseen networks including AlexNet (Krizhevsky et al., 2012), DenseNet (Huang et al., 2017), SqueezeNet (Iandola et al., 2016), and VGG (Simonyan & Zisserman, 2014). Our results show that parametric adversarial examples also transfer across models.
In addition to different models, we evaluate parametric adversaries on black-box viewing directions. This evaluation mimics the real-world scenario in which a self-driving car would “see” a stop sign from different angles while driving. In Table 4, we randomly sample 500 correctly classified views for a given shape and perform the adversarial lighting and geometry algorithms on only a subset of views, then evaluate the resulting adversarial lights/shapes on all the views. The results show that adversarial lights are more generalizable in fooling unseen views; adversarial shapes, however, are less generalizable.
Switching from pixel norm-balls to parametric norm-balls only requires changing the norm constraint from the pixel color space to the parametric space. For instance, we perform a quantitative comparison between parametric adversarial and random perturbations in Figure 10. We use an L∞-norm = 0.1 to constrain the perturbed magnitude of each lighting coefficient, and an L∞-norm = 0.002 to constrain the maximum displacement of surface points along each axis. The results show how many parametric adversaries can fool the classifier out of 10,000 adversarial lights and shapes respectively. Not only do the parametric norm-balls show the effectiveness of adversarial perturbation, evaluating robustness using parametric norm-balls has real-world implications.
Table 3: We evaluate ResNet adversaries on unseen models and show that parametric adversarial examples also transfer across models. The table shows the success rate of attacks (%).

             Alex    VGG     Squeeze   Dense
  Lighting   81.2%   65.0%   78.6%     43.5%
  Geometry   70.3%   58.9%   71.1%     40.1%

Table 4: We compute parametric adversaries using a subset of views (#Views) and evaluate the success rates (%) of attacks on unseen views.

  #Views     0       1       5
  Lighting   0.0%    29.4%   64.2%
  Geometry   0.0%    0.6%    3.6%
[Inset figure: runtime (seconds) per iteration versus the number of pixels (#pixel) for adversarial lighting and adversarial geometry, on meshes with 15K, 25K, and 54K vertices.]

The inset presents our runtime per iteration for computing derivatives. An adversary normally requires less than 10 iterations, and thus takes a few seconds. We evaluate our CPU PYTHON implementation and the OPENGL rendering on an Intel Xeon 3.5GHz CPU with 64GB of RAM and an NVIDIA GeForce GTX 1080. Our runtime depends on the number of pixels requiring derivatives.
5 RENDERED ADVERSARIAL DATA AUGMENTATION AGAINST REAL PHOTOS
We inject adversarial examples, generated using our differentiable renderer, into the training process of modern image classifiers. Our goal is to increase the robustness of these classifiers to real-world perturbations. Traditionally, adversarial training is evaluated against computer-generated adversarial images (Kurakin et al., 2017; Madry et al., 2018; Tramèr et al., 2017). In contrast, our evaluation differs from the majority of the literature, as we evaluate performance against real photos (i.e., images captured using a camera), and not computer-generated images. This evaluation method is motivated by our goal of increasing a classifier’s robustness to “perturbations” that occur in the real world and result from the physical processes underlying real-world image formation. We present preliminary steps towards this objective, resolving the lack of realism of pixel norm-balls and evaluating our augmented classifiers (i.e., those trained using our rendered adversaries) against real photographs.
Training We train the WideResNet (16 layers, wide factor 4) (Zagoruyko & Komodakis, 2016) on CIFAR-100 (Krizhevsky & Hinton, 2009) augmented with adversarial lighting examples. We apply a common adversarial training method that adds a fixed number of adversarial examples each epoch (Goodfellow et al., 2015; Kurakin et al., 2017). We refer readers to Appendix F for the training details. In our experiments, we compare three training scenarios: (1) CIFAR-100, (2) CIFAR-100 + 100 images under random lighting, and (3) CIFAR-100 + 100 images under adversarial lighting. Compared to the accuracy reported in (Zagoruyko & Komodakis, 2016), WideResNets trained in these three cases all have comparable performance (≈ 77%) on the CIFAR-100 test set.
Testing We create a test set of real photos, captured in a laboratory setting with controlled lighting and camera parameters: we photographed oranges using a calibrated Prosilica GT 1920 camera under different lighting conditions, each generated by projecting different lighting patterns using an LG PH550 projector. This hardware lighting setup projects lighting patterns from a fixed solid angle of directions onto the scene objects. Figure 11 illustrates samples from the 500 real photographs of our dataset. We evaluate the robustness of our classifier models according to test accuracy. Of note, average prediction accuracies over five trained WideResNets on our test data under the three training cases are (1) 4.6%, (2) 40.4%, and (3) 65.8%. This result
supports the fact that training on rendered images can improve the networks’ performance on real photographs. Our preliminary experiments motivate the potential of relying on rendered adversarial training to increase robustness to visual phenomena present in real-world inputs.
6 LIMITATIONS & FUTURE WORK
Using parametric norm-balls to remove the lack of realism of pixel norm-balls is only the first step in bringing adversarial machine learning to the real world. More evaluations beyond the lab experimental data could uncover the potential of rendered adversarial data augmentation. Coupling the differentiable renderer with methods for reconstructing 3D scenes, such as (Veeravasarapu et al., 2017b; Tremblay et al., 2018), has the potential to develop a complete pipeline for rendered adversarial training. We can take a small set of real images, construct 3D virtual scenes which have real image statistics, use our approach to manipulate the predicted parameters to construct parametric adversarial examples, and then perform rendered adversarial training. This direction has the potential to produce limitless simulated adversarial data augmentation for real-world tasks.
Our differentiable renderer models the change of realistic environment lighting and geometry. Incorporating real-time rendering techniques from the graphics community could further improve the quality of rendering. Removing the locally constant texture assumption could improve our results. Extending the derivative computation to materials could enable “adversarial materials”. Incorporating derivatives of the visibility change and propagating gradient information to shape skeleton could also create “adversarial poses”. These extensions offer a set of tools for modeling real security scenarios. For instance, we can train a self-driving car classifier that can robustly recognize pedestrians under different poses, lightings, and cloth deformations.
ACKNOWLEDGMENTS
This work is funded in part by NSERC Discovery Grants (RGPIN–2017–05235 & RGPAS–2017–507938), Connaught Funds (NR2016–17), the Canada Research Chairs Program, the Fields Institute, and gifts by Adobe Systems Inc., Autodesk Inc., MESH Inc. We thank members of Dynamic Graphics Project for feedback and draft reviews; Wenzheng Chen for photography equipment; Colin Raffel and David Duvenaud for discussions and feedback.
Supplementary Material
A COMPARISON BETWEEN PERTURBATION SPACES
We extend our comparisons against pixel norm-balls methods (Figure 1) by visualizing the results and the generated perturbations (Figure 12). We hope this figure elucidates that our parametric perturbations are more realistic across several scales of perturbations.
B PHYSICALLY BASED RENDERING
Physically based rendering (PBR) seeks to model the flow of light, typically under the assumption that there exists a collection of light sources that generate light; a camera that receives this light; and a scene that modulates the flow of light between the light sources and camera (Pharr et al., 2016). What follows is a brief discussion of the general task of rendering an image from a scene description and the approximations we take in order to make our renderer efficient yet differentiable.
Computer graphics has dedicated decades of effort into developing methods and technologies to enable PBR to synthesize photorealistic images under a large gamut of performance requirements. Much of this work is focused on taking approximations of the cherished rendering equation (Kajiya, 1986), which describes the propagation of light through a point in space. If we let u_o be the output radiance, p be the point in space, ω_o be the output direction, u_e be the emitted radiance, u_i be the incoming radiance, ω_i be the incoming angle, and f_r be the way light is reflected off the material at that given point in space, we have:
u_o(p, \omega_o) = u_e(p, \omega_o) + \int_{S^2} f_r(p, \omega_i, \omega_o)\, u_i(p, \omega_i)\, (\omega_i \cdot n)\, d\omega_i.
From now on we will ignore the emission term u_e as it is not pertinent to our discussion. Furthermore, because the speed of light is substantially faster than the exposure time of our eyes, what we perceive is not the propagation of light at an instant, but the steady state solution to the rendering equation evaluated at every point in space. Explicitly computing this steady state is intractable for our applications; it will mainly serve as a reference against which to place the plethora of assumptions and simplifications we will make for the sake of tractability. Many of these methods focus on ignoring light with nominal effects on the final rendered image vis a vis assumptions on the way light travels. For instance, light is usually assumed to have nominal interactions with air, described as the assumption that the space between objects is a vacuum, which constrains the interactions of light to the objects in a scene. Another common assumption is that light does not penetrate objects, which makes it difficult to render objects like milk and human skin1. This constrains the complexity of light propagation to the behavior of light bouncing off of object surfaces.
B.1 LOCAL ILLUMINATION
It is common to see assumptions that limit the number of bounces light is allowed. In our case we chose to assume that the steady state is sufficiently approximated by an extremely low number of iterations: one. This means that it seems sufficient to model the lighting of a point in space by the light sent to it directly by light sources. Working with such a strong simplification does, of course, lead to a few artifacts. For instance, light occluded by other objects is ignored, so shadows disappear, and auxiliary techniques are usually employed to evaluate shadows (Williams, 1978; Miller, 1994).
When this assumption is coupled with a camera we approach what is used in standard rasterization systems such as OPENGL (Shreiner & Group, 2009), which is what we use. These systems compute the illumination of a single pixel by determining the fragment of an object visible through that pixel and only computing the light that traverses directly from the light sources, through that fragment, to that pixel. The lighting of a fragment is therefore determined by a point and the surface normal at that point, so we write the fragment’s radiance as R(p,n, ωo) = uo(p, ωo):
R(p, n, \omega_o) = \int_{S^2} f_r(p, \omega_i, \omega_o)\, u_i(p, \omega_i)\, (\omega_i \cdot n)\, d\omega_i.  (6)
B.2 LAMBERTIAN MATERIAL
Figure 15: We consider the Lambertian material (left), where light is reflected uniformly in every direction.
Each point on an object has a model approximating the transfer of incoming light to a given output direction f_r, which is usually called the material. On a single object the material parameters may vary quite a bit, and the correspondence between points and material parameters is usually called the texture map, which forms the texture of an object. There exists a wide gamut of material models, from mirror materials that transport light from a single input direction to a single output direction, to materials that reflect light evenly in all directions, to materials like brushed metal that reflect differently along different angles. For the sake of this document we only consider diffuse materials, also called Lambertian materials, where we assume
1 This is why simple renderers make these sorts of objects look like plastic.
that incoming light is reflected uniformly, i.e., f_r is a constant function with respect to angle, which we denote f_r(p, ω_i, ω_o) = ρ(p):
R(p, n) = \rho(p) \int_{\Omega(n)} u(p, \omega)\, (\omega \cdot n)\, d\omega.  (7)
This function ρ is usually called the albedo, which can be perceived as the color on the surface for diffuse material, and we reduce our integration domain to the upper hemisphere Ω(n) in order to model light not bouncing through objects. Furthermore, since the only ω and u remaining are the incoming ones, we can now suppress the “incoming” in our notation and just use ω and u respectively.
B.3 ENVIRONMENT MAPPING
The illumination of static, distant objects such as the ground, the sky, or mountains does not change in any noticeable fashion when objects in a scene are moved around, so u can be written entirely in terms of ω: u(p, ω) = u(ω). Since their illumination forms a constant, it seems prudent to pre-compute or cache their contributions to the illumination of a scene. This is what is usually called environment mapping, and it fits into the rendering equation as a representation of the total lighting of a scene, i.e., the total incoming radiance u_i. Because the environment is distant, it is common to also assume that the position of the object receiving light from an environment map does not matter, which simplifies u_i to be independent of position:
R(p, n) = \rho(p) \int_{\Omega(n)} u(\omega)\, (\omega \cdot n)\, d\omega.  (8)
B.4 SPHERICAL HARMONICS
Despite all of our simplifications, the inner integral is still a fairly generic function over S2. Many techniques for numerically integrating the rendering equation have emerged in the graphics community and we choose one which enables us to perform pre-computation and select a desired spectral accuracy: spherical harmonics. Spherical harmonics are a basis on S2 so, given a spherical harmonics expansion of the integrand, the evaluation of the above integral can be reduced to a weighted product of coefficients. This particular basis is chosen because it acts as a sort of Fourier basis for functions on the sphere and so the bases are each associated with a frequency, which leads to a convenient multi-resolution structure. In fact, the rendering of diffuse objects under distant lighting can be 99% approximated by just the first few spherical harmonics bases (Ramamoorthi & Hanrahan, 2001).
We will only need to note that the spherical harmonics bases Y_l^m are denoted with the subscript l as the frequency and that there are 2l + 1 functions per frequency, denoted by superscripts m between -l and l inclusively. For further details on them please take a glance at Appendix C. If we approximate a function f in terms of spherical harmonics coefficients, f \approx \sum_{l,m} f_{l,m} Y_l^m, the integral can be precomputed as

\int_{S^2} f \;\approx\; \int_{S^2} \sum_{l,m} f_{l,m} Y_l^m \;=\; \sum_{l,m} f_{l,m} \int_{S^2} Y_l^m,  (9)
Thus we have defined a reduced rendering equation that can be efficiently evaluated using OPENGL while maintaining differentiability with respect to lighting and vertices. In the following appendix we will derive the derivatives necessary to implement our system.
C DIFFERENTIABLE RENDERER
Rendering computes an image of a 3D shape given lighting conditions and the prescribed material properties on the surface of the shape. Our differentiable renderer assumes Lambertian reflectance, distant light sources, local illumination, and piece-wise constant textures. We will discuss how to explicitly compute the derivatives used in the main body of this text. Here we give a detailed discussion about spherical harmonics and their advantages.
C.1 SPHERICAL HARMONICS
Spherical harmonics are usually defined in terms of the Legendre polynomials, which are a class of orthogonal polynomials defined by the recurrence relation
P_0(x) = 1,  (10)
P_1(x) = x,  (11)
(l + 1)\, P_{l+1}(x) = (2l + 1)\, x\, P_l(x) - l\, P_{l-1}(x).  (12)
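As a concrete illustration, the short sketch below evaluates P_l(x) by iterating the recurrence of Equations (10)-(12); it is illustrative only and not part of the paper's implementation.

```python
def legendre(l, x):
    """Evaluate the Legendre polynomial P_l(x) via Eqs. (10)-(12)."""
    if l == 0:
        return 1.0
    p_prev, p = 1.0, x                   # P_0 and P_1
    for k in range(1, l):
        # (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```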
The associated Legendre polynomials are a generalization of the Legendre polynomials and can be fully defined by the relations
P_l^0 = P_l,  (13)
(l - m + 1)\, P_{l+1}^m(x) = (2l + 1)\, x\, P_l^m(x) - (l + m)\, P_{l-1}^m(x),  (14)
2mx\, P_l^m(x) = -\sqrt{1 - x^2}\left[ P_l^{m+1}(x) + (l + m)(l - m + 1)\, P_l^{m-1}(x) \right].  (15)
Using the associated Legendre polynomials Pml we can define the spherical harmonics basis as
Y_l^m(\theta, \phi) = K_l^m \begin{cases} (-1)^m \sqrt{2}\, P_l^{-m}(\cos\theta)\, \sin(-m\phi) & m < 0 \\ (-1)^m \sqrt{2}\, P_l^m(\cos\theta)\, \cos(m\phi) & m > 0 \\ P_l^0(\cos\theta) & m = 0 \end{cases}  (16)

where K_l^m = \sqrt{\dfrac{(2l + 1)\,(l - |m|)!}{4\pi\,(l + |m|)!}}.  (17)
We will use the fact that the associated Legendre polynomials correspond to the spherical harmonics bases that are rotationally symmetric along the z axis (m = 0).
In order to incorporate spherical harmonics into Equation 8, we change the integral domain from the upper hemisphere Ω(n) back to S2 via a max operation
R(p, n) = \rho(p) \int_{\Omega(n)} u(\omega)\, (\omega \cdot n)\, d\omega  (18)
= \rho(p) \int_{S^2} u(\omega)\, \max(\omega \cdot n, 0)\, d\omega.  (19)
We see that the integral is comprised of two components: a lighting component u(ω) and a component that depends on the normal max(ω · n, 0). The strategy is to pre-compute the two components by projecting onto spherical harmonics, and evaluating the integral via a dot product at runtime, as we will now derive.
C.2 LIGHTING IN SPHERICAL HARMONICS
Approximating the lighting component u(ω) in Equation 19 using spherical harmonics Y_l^m up to band n can be written as
u(\omega) \approx \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m}\, Y_l^m(\omega),
where U_{l,m} \in \mathbb{R} are coefficients. By using the orthogonality of spherical harmonics, we can evaluate these coefficients as an integral between u(ω) and Y_l^m(ω):
U_{l,m} = \langle u, Y_l^m \rangle_{S^2} = \int_{S^2} u(\omega)\, Y_l^m(\omega)\, d\omega,
which can be evaluated via quadrature.
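For illustration, a simple midpoint-quadrature sketch of this projection is given below. `real_sph_harm` is a placeholder for a real spherical harmonics routine (SciPy's `sph_harm` is complex-valued, so a real basis would have to be assembled from it in practice), and the grid resolutions are arbitrary assumptions.

```python
import numpy as np

def sh_project(u, real_sph_harm, l, m, n_theta=256, n_phi=512):
    """Approximate U_{l,m} = <u, Y_l^m> on S^2 by midpoint quadrature."""
    theta = (np.arange(n_theta) + 0.5) * np.pi / n_theta
    phi = (np.arange(n_phi) + 0.5) * 2.0 * np.pi / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    # sphere area element sin(theta) dtheta dphi on the midpoint grid
    dA = (np.pi / n_theta) * (2.0 * np.pi / n_phi) * np.sin(T)
    return np.sum(u(T, P) * real_sph_harm(l, m, T, P) * dA)
```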
C.3 CLAMPED COSINE IN SPHERICAL HARMONICS
So far, we have projected the lighting term u(ω) onto the spherical harmonics basis. To complete the evaluation of Equation 19 we also need to approximate the second component max(ω · n, 0) in spherical harmonics. This is the so-called clamped cosine function:
g(\omega, n) = \max(\omega \cdot n, 0) = \sum_{l=0}^{n} \sum_{m=-l}^{l} G_{l,m}(n)\, Y_l^m(\omega),
where G_{l,m}(n) \in \mathbb{R} can be computed by projecting g(ω, n) onto Y_l^m(ω):
G_{l,m}(n) = \int_{S^2} \max(\omega \cdot n, 0)\, Y_l^m(\omega)\, d\omega.
Unfortunately, this formulation turns out to be tricky to compute. Instead, the common practice is to analytically compute the coefficients for the unit z direction, G̃_{l,m} = G_{l,m}(n_z) = G_{l,m}([0, 0, 1]ᵀ), and to evaluate the coefficients for different normals G_{l,m}(n) by rotating G̃_{l,m}. These z-aligned coefficients G̃_{l,m} can be computed analytically:
\tilde{G}_{l,m} = \int_{S^2} \max(\omega \cdot n_z, 0)\, Y_l^m(\omega)\, d\omega
= \int_0^{2\pi} \int_0^{\pi} \max\big([\sin\theta\cos\phi,\, \sin\theta\sin\phi,\, \cos\theta]\, [0, 0, 1]^{\mathsf{T}}, 0\big)\, Y_l^m(\theta, \phi)\, \sin\theta\, d\theta\, d\phi
= \int_0^{2\pi} \int_0^{\pi} \max(\cos\theta, 0)\, Y_l^m(\theta, \phi)\, \sin\theta\, d\theta\, d\phi
= \int_0^{2\pi} \int_0^{\pi/2} \cos\theta\, Y_l^m(\theta, \phi)\, \sin\theta\, d\theta\, d\phi.  (20)
In fact, because max(ω · n_z, 0) is rotationally symmetric around the z-axis, its projection onto Y_l^m(ω) will have many zeros, except for the rotationally symmetric spherical harmonics Y_l^0. In other words, G̃_{l,m} is non-zero only when m = 0. So we can simplify Equation 20 to
\tilde{G}_l = \tilde{G}_{l,0} = 2\pi \int_0^{\pi/2} \cos\theta\, Y_l^0(\theta)\, \sin\theta\, d\theta.
The evaluation of this integral can be found in Appendix A in (Basri & Jacobs, 2003). We provide this here as well:
\tilde{G}_l = \begin{cases} \dfrac{\sqrt{\pi}}{2} & l = 0 \\ \sqrt{\dfrac{\pi}{3}} & l = 1 \\ (-1)^{\frac{l}{2}+1}\, \dfrac{(l-2)!\, \sqrt{(2l+1)\pi}}{2^l\, \big(\frac{l}{2}-1\big)!\, \big(\frac{l}{2}+1\big)!} & l \ge 2, \text{ even} \\ 0 & l \ge 2, \text{ odd} \end{cases}
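These closed-form coefficients translate directly into code; the sketch below is a plain transcription of the piecewise expression above.

```python
import math

def clamped_cosine_coeff(l):
    """G~_l for the clamped cosine (Basri & Jacobs, 2003), as stated above."""
    if l == 0:
        return math.sqrt(math.pi) / 2.0
    if l == 1:
        return math.sqrt(math.pi / 3.0)
    if l % 2 == 1:                        # odd l >= 2 vanish
        return 0.0
    k = l // 2
    return ((-1) ** (k + 1) * math.factorial(l - 2)
            * math.sqrt((2 * l + 1) * math.pi)
            / (2 ** l * math.factorial(k - 1) * math.factorial(k + 1)))
```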
The spherical harmonics coefficients G_{l,m}(n) of the clamped cosine function g(ω, n) can be computed by rotating G̃_l (Sloan et al., 2005) using this formula:
G_{l,m}(n) = \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(n).  (21)
So far we have projected the two terms in Equation 19 into the spherical harmonics basis. Orthogonality of spherical harmonics makes the evaluation of this integral straightforward:

\int_{S^2} u(\omega)\, \max(\omega \cdot n, 0)\, d\omega = \int_{S^2} \Big[\sum_{l,m} U_{l,m} Y_l^m(\omega)\Big] \Big[\sum_{j,k} G_{j,k}(n) Y_j^k(\omega)\Big]\, d\omega
= \sum_{j,k,l,m} U_{l,m}\, G_{j,k}(n)\, \delta_j^l\, \delta_k^m  (22)
= \sum_{l,m} U_{l,m}\, G_{l,m}(n).  (23)
This, in conjunction with Equation 21, allows us to derive the rendering equation using spherical harmonics lighting for Lambertian objects:

R(p, n) = \rho(p) \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(n).  (24)
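Putting the pieces together, the following sketch evaluates Equation (24) for a single point; the lighting container U, the band count, and the helper routines `real_sph_harm` and `clamped_cosine_coeff` (sketched earlier) are assumptions of this illustration, not the paper's implementation.

```python
import numpy as np

def shade(rho, n, U, real_sph_harm, clamped_cosine_coeff, bands=3):
    """Evaluate Eq. (24) for one point with albedo rho and unit normal n.

    U is assumed indexable as U[(l, m)] with negative m allowed."""
    theta = np.arccos(n[2])
    phi = np.arctan2(n[1], n[0])
    radiance = 0.0
    for l in range(bands):
        scale = np.sqrt(4.0 * np.pi / (2 * l + 1)) * clamped_cosine_coeff(l)
        for m in range(-l, l + 1):
            radiance += U[(l, m)] * scale * real_sph_harm(l, m, theta, phi)
    return rho * radiance
```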
So far we have only considered the shading of a specific point p with surface normal n. If we consider the rendered image I given a shape V , lighting U , and camera parameters η, the image I is the evaluation of the rendering equation R of each point in V visible through each pixel in the image. This pixel to point mapping is determined by η. Therefore, we can write I as
I(V, U, \eta) = \rho(V, \eta)\, \underbrace{\sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V))}_{F(V, U)},  (25)
where N(V) is the surface normal. We abuse the notation and use ρ(V, η) to represent the texture of V mapped to the image space through η.
C.4 LIGHTING AND TEXTURE DERIVATIVES
For our applications we must differentiate Equation 25 with respect to lighting and material parameters. The derivative with respect to the lighting coefficients U can be obtained by
\frac{\partial I}{\partial U} = \frac{\partial \rho}{\partial U}\, F + \rho\, \frac{\partial F}{\partial U}  (26)
= 0 + \rho \sum_{l=0}^{n} \sum_{m=-l}^{l} \frac{\partial F}{\partial U_{l,m}}.  (27)
This is the Jacobian matrix that maps from spherical harmonics coefficients to pixels. The term ∂F/∂U_{l,m} can then be computed as
\frac{\partial F}{\partial U_{l,m}} = \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V)).  (28)
The derivative with respect to texture is defined by
\frac{\partial I}{\partial \rho} = \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, Y_l^m(N(V)).  (29)
Note that we assume texture variations are piece-wise constant with respect to our triangle mesh discretization.
D DIFFERENTIATING SKYLIGHT PARAMETERS
To model possible outdoor daylight conditions, we use the analytical Preetham skylight model (Preetham et al., 1999). This model is calibrated by atmospheric data and parameterized by intuitive parameters: turbidity τ, which describes the cloudiness of the atmosphere, and two polar angles θ_s ∈ [0, π/2], φ_s ∈ [0, 2π], which encode the direction of the sun. Note that θ_s, φ_s are not the polar angles θ, φ for representing the incoming light direction ω in u(ω). The spherical harmonics representation of the Preetham skylight is presented in (Habel et al., 2008) as
u(\omega) = \sum_{l=0}^{6} \sum_{m=-l}^{l} U_{l,m}(\theta_s, \phi_s, \tau)\, Y_l^m(\omega).
This is derived by first performing a non-linear least squares fit to write U_{l,m} as a polynomial of θ_s and τ, which lets them solve for Ũ_{l,m}(θ_s, τ) = U_{l,m}(θ_s, 0, τ):
\tilde{U}_{l,m}(\theta_s, \tau) = \sum_{i=0}^{13} \sum_{j=0}^{7} (p_{l,m})_{i,j}\, \theta_s^i\, \tau^j,
where (p_{l,m})_{i,j} are scalar coefficients; then U_{l,m}(θ_s, φ_s, τ) can be computed by applying a spherical harmonics rotation with φ_s using
U_{l,m}(\theta_s, \phi_s, \tau) = \tilde{U}_{l,m}(\theta_s, \tau)\, \cos(m\phi_s) + \tilde{U}_{l,-m}(\theta_s, \tau)\, \sin(m\phi_s).
We refer the reader to (Preetham et al., 1999) for more detail. For the purposes of this article we just need the above form to compute the derivatives.
D.1 DERIVATIVES
The derivatives of the lighting with respect to the skylight parameters (θs, φs, τ) are
\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \phi_s} = -m\, \tilde{U}_{l,m}(\theta_s, \tau)\, \sin(m\phi_s) + m\, \tilde{U}_{l,-m}(\theta_s, \tau)\, \cos(m\phi_s)  (30)

\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \theta_s} = \frac{\partial \big[\tilde{U}_{l,m}(\theta_s, \tau)\, \cos(m\phi_s) + \tilde{U}_{l,-m}(\theta_s, \tau)\, \sin(m\phi_s)\big]}{\partial \theta_s}  (31)
= \sum_{ij} i\, \theta_s^{i-1} \tau^j\, (p_{l,m})_{i,j} \cos(m\phi_s) + \sum_{ij} i\, \theta_s^{i-1} \tau^j\, (p_{l,-m})_{i,j} \sin(m\phi_s)  (32)

\frac{\partial U_{l,m}(\theta_s, \phi_s, \tau)}{\partial \tau} = \sum_{ij} j\, \theta_s^i \tau^{j-1}\, (p_{l,m})_{i,j} \cos(m\phi_s) + \sum_{ij} j\, \theta_s^i \tau^{j-1}\, (p_{l,-m})_{i,j} \sin(m\phi_s)  (33)
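As an illustration, the turbidity derivative of Equation (33) for a single (l, m) pair can be written as below; the array layout of the fitted coefficients is an assumption (see Habel et al., 2008 for the actual tables).

```python
import numpy as np

def dU_dtau(p_lm, p_lnegm, theta_s, phi_s, tau, m):
    """Eq. (33) for one (l, m): p_lm[i, j] holds the fitted (p_{l,m})_{i,j}."""
    i = np.arange(p_lm.shape[0])[:, None]     # theta_s exponents
    j = np.arange(p_lm.shape[1])[None, :]     # turbidity exponents
    # j * theta_s^i * tau^(j-1); the j = 0 column vanishes via the j factor
    mono = j * theta_s ** i * tau ** np.maximum(j - 1, 0)
    return (np.sum(mono * p_lm) * np.cos(m * phi_s)
            + np.sum(mono * p_lnegm) * np.sin(m * phi_s))
```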
E DERIVATIVES OF SURFACE NORMALS
Taking the derivative of the rendered image I with respect to surface normals N is an essential task for computing the derivative of I with respect to the geometry V. Specifically, the derivative of the rendering equation (Equation 25) with respect to V is
\frac{\partial I}{\partial V} = \frac{\partial \rho}{\partial V}\, F + \rho\, \frac{\partial F}{\partial V}  (34)
= \frac{\partial \rho}{\partial V}\, F + \rho\, \frac{\partial F}{\partial N}\, \frac{\partial N}{\partial V}  (35)
We assume the texture variations are piece-wise constant with respect to our triangle mesh discretization and omit the first term ∂ρ/∂V as its magnitude is zero. Computing ∂N/∂V is provided in Section 3.2. Computing ∂F/∂N_i on face i is
\frac{\partial F}{\partial N_i} = \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, \frac{\partial Y_l^m}{\partial N_i},  (36)
where ∂Y_l^m/∂N_i is the derivative of the spherical harmonics with respect to the face normal N_i.
To begin this derivation, recall the relationship between a unit normal vector n = (n_x, n_y, n_z) and its corresponding polar angles θ, φ:
\theta = \cos^{-1}\!\left(\frac{n_z}{\sqrt{n_x^2 + n_y^2 + n_z^2}}\right), \qquad \phi = \tan^{-1}\!\left(\frac{n_y}{n_x}\right),
we can compute the derivative of spherical harmonics with respect to the normal vector through
\frac{\partial Y_l^m(\theta, \phi)}{\partial n} = K_l^m \begin{cases} (-1)^m \sqrt{2} \left[ \frac{\partial P_l^{-m}(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} \sin(-m\phi) + P_l^{-m}(\cos\theta)\, \frac{\partial \sin(-m\phi)}{\partial \phi} \frac{\partial \phi}{\partial n} \right] & m < 0 \\ (-1)^m \sqrt{2} \left[ \frac{\partial P_l^m(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} \cos(m\phi) + P_l^m(\cos\theta)\, \frac{\partial \cos(m\phi)}{\partial \phi} \frac{\partial \phi}{\partial n} \right] & m > 0 \\ \frac{\partial P_l^0(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} & m = 0 \end{cases}

= K_l^m \begin{cases} (-1)^m \sqrt{2} \left[ \frac{\partial P_l^{-m}(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} \sin(-m\phi) - m\, P_l^{-m}(\cos\theta) \cos(-m\phi)\, \frac{\partial \phi}{\partial n} \right] & m < 0 \\ (-1)^m \sqrt{2} \left[ \frac{\partial P_l^m(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} \cos(m\phi) - m\, P_l^m(\cos\theta) \sin(m\phi)\, \frac{\partial \phi}{\partial n} \right] & m > 0 \\ \frac{\partial P_l^0(\cos\theta)}{\partial \theta} \frac{\partial \theta}{\partial n} & m = 0 \end{cases}  (37)
Note that the derivative of the associated Legendre polynomials P_l^m(cos θ) can be computed by applying the recurrence formula of Dunster (2010):

\frac{\partial P_l^m(\cos\theta)}{\partial \theta} = \frac{-\cos\theta\, (l + 1)\, P_l^m(\cos\theta) + (l - m + 1)\, P_{l+1}^m(\cos\theta)}{\cos^2\theta - 1} \times (-\sin\theta)
= \frac{-\cos\theta\, (l + 1)\, P_l^m(\cos\theta) + (l - m + 1)\, P_{l+1}^m(\cos\theta)}{\sin\theta}.  (38)
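A direct transcription of this recurrence using SciPy's associated Legendre routine might look as follows; it is valid away from the poles, where sin θ = 0.

```python
import numpy as np
from scipy.special import lpmv

def dPlm_dtheta(l, m, theta):
    """Eq. (38): derivative of P_l^m(cos(theta)) with respect to theta."""
    x = np.cos(theta)
    num = -(l + 1) * x * lpmv(m, l, x) + (l - m + 1) * lpmv(m, l + 1, x)
    return num / np.sin(theta)
```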
Thus the derivatives of the polar angles (θ, φ) with respect to surface normals n = [n_x, n_y, n_z] are

\frac{\partial \theta}{\partial n} = \left[ \frac{\partial \theta}{\partial n_x}, \frac{\partial \theta}{\partial n_y}, \frac{\partial \theta}{\partial n_z} \right] = \frac{\big[ n_x n_z,\; n_y n_z,\; -(n_x^2 + n_y^2) \big]}{(n_x^2 + n_y^2 + n_z^2)\, \sqrt{n_x^2 + n_y^2}},  (39)

\frac{\partial \phi}{\partial n} = \left[ \frac{\partial \phi}{\partial n_x}, \frac{\partial \phi}{\partial n_y}, \frac{\partial \phi}{\partial n_z} \right] = \left[ \frac{-n_y}{n_x^2 + n_y^2},\; \frac{n_x}{n_x^2 + n_y^2},\; 0 \right].  (40)
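These two gradients translate directly into a few lines of NumPy; the sketch below assumes a normal with a non-degenerate xy-component (n_x^2 + n_y^2 > 0).

```python
import numpy as np

def polar_angle_gradients(n):
    """Eqs. (39)-(40): gradients of theta and phi w.r.t. n = (nx, ny, nz)."""
    nx, ny, nz = n
    r2 = nx * nx + ny * ny + nz * nz
    rho2 = nx * nx + ny * ny                 # squared norm in the xy-plane
    dtheta = np.array([nx * nz, ny * nz, -rho2]) / (r2 * np.sqrt(rho2))
    dphi = np.array([-ny, nx, 0.0]) / rho2
    return dtheta, dphi
```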
In summary, the results of Equations 37, 38, 39, and 40 tell us how to compute ∂Y_l^m/∂N_i. The derivative of pixel j with respect to vertex p, which belongs to face i, can then be computed as

\frac{\partial I_j}{\partial V_p} \approx \rho_j\, \frac{\partial F}{\partial N_i}\, \frac{\partial N_i}{\partial V_p} = \rho_j \sum_{l=0}^{n} \sum_{m=-l}^{l} U_{l,m} \sqrt{\dfrac{4\pi}{2l + 1}}\, \tilde{G}_l\, \frac{\partial Y_l^m(\theta, \phi)}{\partial N_i}\, \frac{\partial N_i}{\partial V_p}.  (41)
F ADVERSARIAL TRAINING IMPLEMENTATION DETAIL
Our adversarial training is based on the basic idea of injecting adversarial examples into the training set at each step and continuously updating the adversaries according to the current model parameters (Goodfellow et al., 2015; Kurakin et al., 2017). Our experiments inject 100 adversarial lighting examples into the CIFAR-100 data (≈ 0.17% of the training set) and keep updating these adversaries at each epoch.
We compute the adversarial lighting examples using the orange models collected from cgtrader.com and turbosquid.com. We use five gray-scale background colors with intensities 0.0, 0.25, 0.5, 0.75, 1.0 to mimic images in CIFAR-100, which contains many pure color backgrounds. Our orthographic cameras are placed at polar angle θ = π/3 with 10 uniformly sampled azimuthal angles ranging from φ = 0 to 2π. Our initial spherical harmonics lighting is the same as in the other experiments, using the real-world lighting data provided in (Ramamoorthi & Hanrahan, 2001). Our step size for computing adversaries is 0.05 along the direction of lighting gradients. We run our adversarial lighting iterations until fooling the network or reaching a maximum of 30 iterations, to avoid too extreme lighting conditions, such as turning the lights off.
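A schematic PyTorch version of this per-epoch injection is sketched below; `make_adversarial_batch` is a placeholder for the renderer-based adversary generation described above, and the loop is illustrative rather than the authors' training script.

```python
import torch
import torch.nn.functional as F

def train_one_epoch(model, optimizer, loader, make_adversarial_batch):
    """One epoch of the Appendix F scheme: refresh 100 rendered adversaries
    against the current weights and append them to the epoch's batches."""
    adv_images, adv_labels = make_adversarial_batch(model, n=100)
    for images, labels in list(loader) + [(adv_images, adv_labels)]:
        optimizer.zero_grad()
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
```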
Our random lighting examples are constructed at each epoch by randomly perturbing the lighting coefficients in the range -0.5 to 0.5.
When training the 16-layer WideResNet (Zagoruyko & Komodakis, 2016) with wide-factor 4, we use batch size 128, learning rate 0.125, dropout rate 0.3, and the standard cross-entropy loss. We implement the training using PYTORCH (Paszke et al., 2017) with the SGD optimizer, Nesterov momentum 0.9, and weight decay 5e-4. We train the model for 150 epochs and use the one with the best accuracy on the validation set. Figure 16 shows examples of our adversarial lights at different training stages. In the early stages, the model is not robust to different lighting conditions, thus small lighting perturbations are sufficient to fool the model. In the late stages, the network becomes more robust to different lightings; dramatic changes are then required to fool the model, and some attempts fail to do so within 30 iterations.
G EVALUATE RENDERING QUALITY
We evaluated our rendering quality by whether our rendered images are recognizable by models trained on real photographs. Although large 3D shape datasets, such as ShapeNet (Chang et al., 2015), are available, they do not have geometries or textures at the resolutions necessary to create realistic renderings. We collected 75 high-quality textured 3D shapes from cgtrader.com and turbosquid.com to evaluate our rendering quality. We augmented the shapes by changing the field of view, backgrounds, and viewing directions, then kept the configurations that were correctly classified by a pre-trained ResNet-101 on ImageNet. Specifically, we place the centroid, calculated as the weighted average of the mesh vertices where the weights are the vertex areas, at the origin and normalize shapes to the range -1 to 1; the field of view is chosen to be 2 and 3 in the same units as the normalized shape; background images include plain colors
and real photos, which have small influence on model predictions; viewing directions are chosen to be 60 degrees zenith with 16 views uniformly sampled from 0 to 2π in azimuthal angle. In Figure 17, we show the histogram of model confidence on the correct labels over 10,000 correctly classified rendered images from our differentiable renderer. The confidence is computed using the softmax function, and the results show that our rendering quality is faithful enough to be recognized by models trained on natural images. | 1. What is the main contribution of the paper regarding adversarial examples?
2. How does the reviewer assess the originality and significance of the proposed approach?
3. What are the strengths of the paper in terms of clarity and background literature?
4. What are the weaknesses of the paper regarding experiments and reproducibility?
5. Are there any suggestions for improving the paper's content or experimental section? | Review | Review
Quality of the paper: The paper is quite clear on the background literature on adversarial examples, physics based rendering, and the core idea of generating adversarial perturbations as a function of illumination and geometric changes.
Originality and Significance: The idea of using differentiable renderers to produce physically consistent adversarial perturbations is novel.
References: The references in the paper given its scope are fine. It is recommended to explore references to other recent papers that use simulation for performance enhancement in the context of transfer learning and performance characterization (e.g. Veeravasarapu et al. in arXiv, WACV, CVPR (2015 - 17)).
Pros: Good paper; illustrates the utility of differentiable rendering and simulations to generate adversarial examples and to use them for improving robustness.
Cons: The experimental section needs to be extended, and the results are limited to simulations on CIFAR-100 and evaluation on lab experimental data. Inclusion of images showing CIFAR-100 images augmented with random lighting and adversarial lighting would have been good. The details of the image generation process for that experiment are vague and not reproducible. |
ICLR | Title
Beyond Pixel Norm-Balls: Parametric Adversaries using an Analytically Differentiable Renderer
Abstract
Many machine learning image classifiers are vulnerable to adversarial attacks, inputs with perturbations designed to intentionally trigger misclassification. Current adversarial methods directly alter pixel colors and evaluate against pixel norm-balls: pixel perturbations smaller than a specified magnitude, according to a measurement norm. This evaluation, however, has limited practical utility since perturbations in the pixel space do not correspond to underlying real-world phenomena of image formation that lead to them and has no security motivation attached. Pixels in natural images are measurements of light that has interacted with the geometry of a physical scene. As such, we propose a novel evaluation measure, parametric normballs, by directly perturbing physical parameters that underly image formation. One enabling contribution we present is a physically-based differentiable renderer that allows us to propagate pixel gradients to the parametric space of lighting and geometry. Our approach enables physically-based adversarial attacks, and our differentiable renderer leverages models from the interactive rendering literature to balance the performance and accuracy trade-offs necessary for a memory-efficient and scalable adversarial data augmentation workflow.
1 INTRODUCTION
Research in adversarial examples continues to contribute to the development of robust (semi-)supervised learning (Miyato et al., 2018), data augmentation (Goodfellow et al., 2015; Sun et al., 2018), and machine learning understanding (Kanbak et al., 2018). One important caveat of the approach pursued by much of the literature in adversarial machine learning, as discussed recently (Goodfellow,
2018; Gilmer et al., 2018), is the reliance on overly simplified attack metrics: namely, the use of pixel value differences between an adversary and an input image, also referred to as the pixel norm-balls.
The pixel norm-balls game considers pixel perturbations of norm-constrained magnitude (Goodfellow et al., 2015), and is used to develop adversarial attackers, defenders and training strategies. The pixel norm-ball game is attractive from a research perspective due to its simplicity and well-posedness: no knowledge of image formation is required and any arbitrary pixel perturbation remains eligible (so long as it is “small”, in the perceptual sense). Although the pixel norm-ball is useful for research purposes, it only captures limited real-world security scenarios.
Despite the ability to devise effective adversarial methods through the direct employment of optimizations using the pixel norm-balls measure, the pixel manipulations they promote are divorced from the types of variations present in the real world, limiting their usefulness “in the wild”. Moreover, this methodology leads to defenders that are only effective when defending against unrealistic images/attacks, not generalizing outside of the space constrained by pixel norm-balls. In order to consider conditions that enable adversarial attacks in the real world, we advocate for a new measurement norm that is rooted in the physical processes that underly realistic image synthesis, moving away from overly simplified metrics, e.g., pixel norm-balls.
Our proposed solution – parametric norm-balls – rely on perturbations of physical parameters of a synthetic image formation model, instead of pixel color perturbations (Figure 2). To achieve this, we use a physically-based differentiable renderer which allows us to perturb the underlying parameters of the image formation process. Since these parameters indirectly control pixel colors, perturbations in this parametric space implicitly span the space of natural images. We will demonstrate two advantages that fall from considering perturbations in this parametric space: (1) they enable adversarial approaches that more readily apply to real-world applications, and (2) they permit the use of much more significant perturbations (compared to pixel norms), without invalidating the realism of the resulting image (Figure 1). We validate that parametric norm-balls game playing is critical for a variety of important adversarial tasks, such as building defenders robust to perturbations that can occur naturally in the real world.
We perform perturbations in the underlying image formation parameter space using a novel physically-based differentiable renderer. Our renderer analytically computes the derivatives of pixel color with respect to these physical parameters, allowing us to extend traditional pixel norm-balls to physically-valid parametric norm-balls. Notably, we demonstrate perturbations on an environment’s lighting and on the shape of the 3D geometry it shades. Our differentiable renderer achieves state-of-the-art performance in speed and scalability (Section 3) and is fast enough for rendered adversarial data augmentation (Section 5): training augmented with adversarial images generated with a renderer.
Existing differentiable renderers are slow and do not scale to the volume of high-quality, high-resolution images needed to make adversarial data augmentation tractable (Section 2). Given our analytically-differentiable renderer (Section 3), we are able to demonstrate the efficacy of parametric space perturbations for generating adversarial examples. These adversaries are based on a substantially different phenomenology than their pixel norm-balls counterparts (Section 4). Ours is among the first steps towards the deployment of rendered adversarial data augmentation in real-world applications: we train a classifier with computer-generated adversarial images, evaluating the performance of the training against real photographs (i.e., captured using cameras; Section 5). We test on real photos to show that parametric adversarial data augmentation increases the classifier’s robustness to “deformations” that happen in the real world. Our evaluation differs from the majority of existing literature, which evaluates against computer-generated adversarial images, since our parametric space perturbation is no longer a wholly idealized representation of the image formation model but, instead, is modeled against a theory of realistic image generation.
2 RELATED WORK
Our work is built upon the fact that simulated or rendered images can participate in computer vision and machine learning on real-world tasks. Many previous works use rendered (simulated) data to train deep networks, and those networks can be deployed in the real world or even outperform state-of-the-art networks trained on real photos (Movshovitz-Attias et al., 2016; Chen et al., 2016; Varol et al., 2017; Su et al., 2015; Johnson-Roberson et al., 2017; Veeravasarapu et al., 2017b; Sadeghi & Levine, 2016; James & Johns, 2016). For instance, Veeravasarapu et al. (2017a) show that training with 10% real-world data and 90% simulation data can reach the level of training with full real data. Tremblay et al. (2018) even demonstrate that a network trained on synthetic data yields better performance than using real data alone. As rendering can cheaply provide a theoretically infinite supply of annotated input data, it can generate data which is orders of magnitude larger than existing datasets. This emerging trend of training on synthetic data provides an exciting direction for future machine learning development. Our work complements these works. We demonstrate that rendering can be used to study the potential danger lurking in misclassifications due to subtle changes to geometry and lighting. This suggests a future direction of combining with synthetic data generation pipelines to perform physically based adversarial training on synthetic data.
Adversarial Examples Szegedy et al. (2014) expose the vulnerability of modern deep neural nets using purposefully-manipulated images with human-imperceptible misclassification-inducing noise. Goodfellow et al. (2015) introduce a fast method to harness adversarial examples, leading to the idea of pixel norm-balls for evaluating adversarial attackers/defenders. Since then, many significant developments in adversarial techniques have been proposed (Akhtar & Mian, 2018; Szegedy et al., 2014; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018; Papernot et al., 2017; Moosavi-Dezfooli et al., 2017; Chen et al., 2017; Su et al., 2017). Our work extends this progression in constructing adversarial examples, a problem that lies at the foundation of adversarial machine learning. Kurakin et al. (2016) study the transferability of attacks to the physical world by printing then photographing adversarial images. Athalye et al. (2017) and Eykholt et al. (2018) propose extensions to non-planar (yet, still fixed) geometry and multiple viewing angles. These works still rely fundamentally on direct pixel or texture manipulation on physical objects. Since these methods assume independence between pixels in the image or texture space, they remain variants of pixel norm-balls. This leads to unrealistic attack images that cannot model real-world scenarios (Goodfellow, 2018; Hendrycks & Dietterich, 2018; Gilmer et al., 2018). Zeng et al. (2017) generate adversarial examples by altering physical parameters using a rendering network (Liu et al., 2017) trained to approximate the physics of realistic image formation. This data-driven approach leads to an image formation model biased towards the rendering style present in the training data. This method also relies on differentiation through the rendering network in order to compute adversaries, which requires high-quality training on a large amount of data. Even with perfect training, their reported performance still requires 12 minutes on average to find new adversaries, while ours takes only a few seconds (Section 4.1). Our approach is based on a differentiable physically-based renderer that directly (and, so, more convincingly) models the image formation process, allowing us to alter physical parameters – like geometry and lighting – and compute
derivatives (and adversarial examples) much more rapidly compared to Zeng et al. (2017). We summarize the differences between our approach and previous non-image adversarial attacks in Table 1.
Differentiable Renderer Applying parametric norm-balls requires that we differentiate the image formation model with respect to the physical parameters of the image formation model. Modern realistic computer graphics models do not expose facilities to directly accommodate the computation of derivatives or automatic differentiation of pixel colors with respect to geometry and lighting variables. A physically-based
differentiable renderer is fundamental to computing derivatives of pixel colors with respect to scene parameters and can benefit machine learning in several ways, including promoting the development of novel network architectures (Liu et al., 2017), computing adversarial examples (Athalye et al., 2017; Zeng et al., 2017), and generalizing neural style transfer to a 3D context (Kato et al., 2018; Liu et al., 2018). Recently, various techniques have been proposed to obtain these derivatives: Wu et al. (2017); Liu et al. (2017); Eslami et al. (2016) use neural networks to learn the image formation process provided a large amount of input/output pairs. This introduces unnecessary bias in favor of the training data distribution, leading to inaccurate derivatives due to imperfect learning. Kato et al. (2018) propose a differentiable renderer based on a simplified image formation model and an underlying linear approximation. Their approach requires no training and is unbiased, but their approximation of the image formation and the derivatives introduces more errors. Loper & Black (2014); Genova et al. (2018) use automatic differentiation to build fully differentiable renderers. These renderers, however, are expensive to evaluate, requiring orders of magnitude more computation and much larger memory footprints compared to our method.
Our novel differentiable renderer overcomes these limitations by efficiently computing analytical derivatives of a physically-based image formation model. The key idea is that the non-differentiable visibility change can be ignored when considering infinitesimal perturbations. We model image variations by changing geometry and realistic lighting conditions in an analytically differentiable manner, relying on an accurate model of diffuse image formation that extends spherical harmonics-based shading methods (Appendix C). Our analytic derivatives are efficient to evaluate, have scalable memory consumption, are unbiased, and are accurate by construction (Table 2). Our renderer explicitly models the physics of the image formation process, and so the images it generates are realistic enough to elicit correct classifications from networks trained on real-world photographs.
3 ADVERSARIAL ATTACKS IN PARAMETRIC SPACES
Adversarial attacks based on pixel norm-balls typically generate adversarial examples by defining a cost function over the space of images C : I → R that enforces some intuition of what failure should look like, and using variants of gradient descent where the gradient ∂C/∂I is accessible by differentiating through networks (Szegedy et al., 2014; Goodfellow et al., 2015; Rozsa et al., 2016; Kurakin et al., 2017; Moosavi Dezfooli et al., 2016; Dong et al., 2018).
The choices for C include increasing the cross-entropy loss of the correct class (Goodfellow et al., 2015), decreasing the cross-entropy loss of the least-likely class (Kurakin et al., 2017), using a combination of cross-entropies (Moosavi Dezfooli et al., 2016), and more (Szegedy et al., 2014; Rozsa et al., 2016; Dong et al., 2018; Tramèr et al., 2017). We use a combination of cross-entropies to provide flexibility in choosing untargeted and targeted attacks by specifying a different set of labels:
C(I(U, V)) = −CrossEntropy(f(I(U, V)), L_d) + CrossEntropy(f(I(U, V)), L_i), (1)
where I is the image, f(I) is the output of the classifier, and L_d, L_i are labels whose predicted confidences a user wants to decrease and increase, respectively. In our experiments, L_d is the correct class and L_i is either ignored or chosen according to user preference. Our adversarial attacks in the parametric space consider an image I(U, V) as a function of the physical parameters of the image formation model, including the lighting U and the geometry V. Adversarial examples constructed by perturbing physical parameters can then be computed via the chain rule
∂C/∂U = (∂C/∂I)(∂I/∂U),   ∂C/∂V = (∂C/∂I)(∂I/∂V), (2)
where ∂I/∂U, ∂I/∂V are derivatives with respect to the physical parameters, which we evaluate using our physically based differentiable renderer. In our experiments, we use gradient descent for finding parametric adversarial examples, where the descent direction is given by ∂C/∂U and ∂C/∂V.
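To make the procedure concrete, the following is a minimal sketch of the parametric attack loop. It assumes a differentiable `render(U, V)` function and a PyTorch classifier; here autograd stands in for the analytic derivatives of Section 3.1, and the step size and iteration budget are illustrative, not the exact implementation.

```python
import torch
import torch.nn.functional as F

def parametric_attack(render, classifier, U, V, label_dec, label_inc=None,
                      step=0.05, max_iters=30):
    """Gradient descent on Equation (1) over lighting U and geometry V.

    `render` is assumed differentiable w.r.t. U and V; autograd is used here
    as a stand-in for the paper's analytic derivatives."""
    U = U.clone().requires_grad_(True)
    V = V.clone().requires_grad_(True)
    for _ in range(max_iters):
        image = render(U, V)                           # I(U, V)
        logits = classifier(image.unsqueeze(0))
        if logits.argmax(dim=1).item() != label_dec:
            break                                      # classifier fooled
        # Equation (1): push confidence away from L_d, optionally toward L_i.
        cost = -F.cross_entropy(logits, torch.tensor([label_dec]))
        if label_inc is not None:
            cost = cost + F.cross_entropy(logits, torch.tensor([label_inc]))
        grad_U, grad_V = torch.autograd.grad(cost, (U, V))
        with torch.no_grad():                          # descend on the cost
            U -= step * grad_U
            V -= step * grad_V
    return U.detach(), V.detach()
```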
3.1 PHYSICALLY BASED DIFFERENTIABLE RENDERER
Rendering is the process of generating a 2D image from a 3D scene by simulating the physics of light. Light sources in the scene emit photons that then interact with objects in the scene. At each interaction, photons are either reflected, transmitted or absorbed, changing trajectory and repeating until arriving at a sensor such as a camera. A physically based renderer models the interactions mathematically (Pharr et al., 2016), and our task is to analytically differentiate the physical process.
We develop our differentiable renderer with common assumptions in real-time rendering (Akenine-Moller et al., 2008) – diffuse material, local illumination, and distant light sources. Our diffuse material assumption considers materials which reflect lights uniformly in all directions, equivalent to considering non-specular objects. We assume that variations in the material (texture) are piece-wise constant with respect to our triangle mesh discretization. The local illumination assumption only considers lights that bounce directly from the light source to the camera. Lastly, we assume light sources are far away from the scene, allowing us to represent lighting with one spherical function. For a more detailed rationale of our assumptions, we refer readers to Appendix B.
These assumptions simplify the complicated integral required for rendering (Kajiya, 1986) and allow us to represent lighting in terms of spherical harmonics, an orthonormal basis for spherical functions analogous to Fourier transformation. Thus, we can analytically differentiate the rendering equation to acquire derivatives with respect to lighting, geometry, and texture (derivations found in Appendix C).
Using analytical derivatives avoids pitfalls of previous differentiable renderers (see Section 2) and makes our differentiable renderer orders of magnitude faster than the previous fully differentiable renderer OPENDR (Loper & Black, 2014) (see Figure 3). Our approach scales to handle problems with more than 100,000 variables, while OPENDR runs out of memory for problems with more than 3,500 variables.
3.2 ADVERSARIAL LIGHTING AND GEOMETRY
Adversarial lighting denotes adversarial examples generated by changing the spherical harmonics lighting coefficients U (Green, 2003). As our differentiable renderer allows us to compute ∂I/∂U analytically (derivation is provided in Appendix C.4), we can simply apply the chain rule:
U ← U − γ (∂C/∂I)(∂I/∂U), (3)
where ∂C/∂I is the derivative of the cost function with respect to pixel colors and can be obtained by differentiating through the network. Spherical harmonics act as an implicit constraint to prevent unrealistic lighting because natural lighting environments in everyday life are dominated by low-frequency signals. For instance, rendering of diffuse materials can be approximated with only 1% pixel intensity error by the first 2 orders of spherical harmonics (Ramamoorthi & Hanrahan, 2001). As computers can only represent a finite number of coefficients, using spherical harmonics for lighting implicitly filters out high-frequency, unrealistic lightings. Thus, perturbing the parametric space of spherical harmonics lighting gives us more realistic perturbations compared to image-pixel perturbations (Figure 1).
Adversarial geometry is an adversarial example computed by changing the position of the shape’s surface. The shape is encoded as a triangle mesh with |V| vertices and |F| faces; surface points are vertex positions V ∈ R^{|V|×3}, which determine per-face normals N ∈ R^{|F|×3}, which in turn determine the shading of the surface. We can compute adversarial shapes by applying the chain rule:
V ← V − γ (∂C/∂I)(∂I/∂N)(∂N/∂V), (4)

where ∂I/∂N is computed via a derivation in Appendix E. Each triangle has only one normal on its face, making ∂N/∂V computable analytically. In particular, the 3×3 Jacobian of a unit face normal vector n_i ∈ R^3 of the i-th face of the triangle mesh V with respect to one of its corner vertices v_j ∈ R^3 is

∂n_i/∂v_j = (h_ij n_i^T) / ‖h_ij‖²,

where h_ij ∈ R^3 is the height vector: the shortest vector to the corner v_j from the opposite edge.
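A small sketch of this Jacobian, assuming a triangle with vertices `v0, v1, v2` given as NumPy arrays; the helper name is ours, and the formula is exactly the height-vector expression above (it is singular for degenerate triangles):

```python
import numpy as np

def face_normal_jacobian(v0, v1, v2):
    """Jacobian d(n)/d(v0): sensitivity of the unit face normal of triangle
    (v0, v1, v2) to its corner v0, via the height-vector formula."""
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)                  # unit face normal
    e = v2 - v1                                # edge opposite to v0
    t = np.dot(v0 - v1, e) / np.dot(e, e)      # project v0 onto that edge
    h = v0 - (v1 + t * e)                      # height vector to the corner
    return np.outer(h, n) / np.dot(h, h)       # (h n^T) / ||h||^2
```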
4 RESULTS AND EVALUATION
We have described how to compute adversarial examples by parametric perturbations, including lighting and geometry. In this section, we show that adversarial examples exist in the parametric spaces, then we analyze the characteristics of those adversaries and parametric norm-balls.
We use 49 × 3 spherical harmonics coefficients to represent environment lighting, with an initial real-world lighting condition (Ramamoorthi & Hanrahan, 2001). Camera parameters and the background images are empirically chosen to have correct initial classifications and avoid synonym sets. In Figure 4 we show that a single-view adversarial lighting attack can fool the classifier (pre-trained ResNet-101 on ImageNet (He et al., 2016)). Figure 5 shows multi-view adversarial lighting, which optimizes the summation of the cost functions for each view, thus the gradient is computed as the summation over all camera views:
U ← U − γ ∑_{i∈cameras} (∂C/∂I_i)(∂I_i/∂U). (5)
If one is interested in a more specific subspace, such as outdoor lighting conditions governed by sunlight and weather, our adversarial lighting can adapt to it. In Figure 7, we compute adversarial lights over the space of skylights by applying one more chain rule to the Preetham skylight parameters (Preetham et al., 1999; Habel et al., 2008). Details about taking these derivatives are provided in Appendix D. Although adversarial skylight exists, its low degrees of freedom (only three parameters) makes it more difficult to find adversaries.
In Figure 8 and Figure 9 we show the existence of adversarial geometry in both single-view and multi-view cases. Note that we upsample meshes to have >10K vertices as a preprocessing step to increase the degrees of freedom available for perturbations. Multi-view adversarial geometry enables us to perturb the same 3D shape from different viewing directions, which enables us to construct a deep optical illusion: the same 3D shape is classified differently from different angles. To create the optical illusion in Figure 6, we only need to specify the L_i in Equation (1) to be a dog and a cat for two different views.
4.1 PROPERTIES OF PARAMETRIC NORM-BALLS AND ADVERSARIES
To further understand parametric adversaries, we analyze how parametric adversarial examples generalize to black-box models. In Table 3, we test 5,000 ResNet parametric adversaries on unseen networks including AlexNet (Krizhevsky et al., 2012), DenseNet (Huang et al., 2017), SqueezeNet (Iandola et al., 2016), and VGG (Simonyan & Zisserman, 2014). Our result shows that parametric adversarial examples also transfer across models.
In addition to different models, we evaluate parametric adversaries on black-box viewing directions. This evaluation mimics the real-world scenario that a self-driving car would “see” a stop sign from different angles while driving. In Table 4, we randomly sample 500 correctly classified views for a given shape and perform adversarial lighting and geometry algorithms only on a subset of views, then evaluate the resulting adversarial lights/shapes on all the views. The results show that adversarial lights are more generalizable to fool unseen views; adversarial shapes, however, are less generalizable.
Switching from pixel norm-balls to parametric norm-balls only requires changing the norm constraint from the pixel color space to the parametric space. For instance, we can perform a quantitative comparison between parametric adversarial and random perturbations in Figure 10. We use L∞-norm = 0.1 to constrain the perturbed magnitude of each lighting coefficient, and L∞-norm = 0.002 to constrain the maximum displacement of surface points along each axis. The results show how many parametric adversaries fool the classifier out of 10,000 adversarial lights and shapes respectively. Not only do the parametric norm-balls show the effectiveness of adversarial perturbation; evaluating robustness using parametric norm-balls also has real-world implications.
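The corresponding constraint is a simple clipping step; a minimal sketch, assuming the perturbed and initial parameters are NumPy arrays:

```python
import numpy as np

def project_linf(x, x0, eps):
    """Project perturbed parameters x back onto the L-infinity parametric
    norm-ball of radius eps around the initial parameters x0."""
    return x0 + np.clip(x - x0, -eps, eps)

# eps = 0.1 per lighting coefficient, eps = 0.002 per vertex coordinate,
# matching the constraints used in Figure 10.
```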
Table 3: We evaluate ResNet adversaries on unseen models and show that parametric adversarial examples also transfer across models. The table shows the success rate of attacks (%).

            Alex    VGG     Squeeze   Dense
  Lighting  81.2%   65.0%   78.6%     43.5%
  Geometry  70.3%   58.9%   71.1%     40.1%

Table 4: We compute parametric adversaries using a subset of views (#Views) and evaluate the success rates (%) of attacks on unseen views.

  #Views    0       1       5
  Lighting  0.0%    29.4%   64.2%
  Geometry  0.0%    0.6%    3.6%
[Inset figure: runtime (seconds) per iteration vs. number of pixels for adversarial lighting and adversarial geometry, on meshes with 15K, 25K, and 54K vertices.]

Runtime. The inset presents our runtime per iteration for computing derivatives. An adversary normally requires fewer than 10 iterations, thus takes a few seconds. We evaluate our CPU PYTHON implementation and the OPENGL rendering on an Intel Xeon 3.5GHz CPU with 64GB of RAM and an NVIDIA GeForce GTX 1080. Our runtime depends on the number of pixels requiring derivatives.
5 RENDERED ADVERSARIAL DATA AUGMENTATION AGAINST REAL PHOTOS
We inject adversarial examples, generated using our differentiable renderer, into the training process of modern image classifiers. Our goal is to increase the robustness of these classifiers to real-world perturbations. Traditionally, adversarial training is evaluated against computer-generated adversarial images (Kurakin et al., 2017; Madry et al., 2018; Tramèr et al., 2017). In contrast, our evaluation differs from the majority of the literature, as we evaluate performance against real photos (i.e., images captured using a camera), and not computer-generated images. This evaluation method is motivated by our goal of increasing a classifier’s robustness to “perturbations” that occur in the real world and result from the physical processes underlying real-world image formation. We present preliminary steps towards this objective, resolving the lack of realism of pixel norm-balls and evaluating our augmented classifiers (i.e., those trained using our rendered adversaries) against real photographs.
Training We train the WideResNet (16 layers, 4 wide factor) (Zagoruyko & Komodakis, 2016) on CIFAR-100 (Krizhevsky & Hinton, 2009) augmented with adversarial lighting examples. We apply a common adversarial training method that adds a fixed number of adversarial examples each epoch (Goodfellow et al., 2015; Kurakin et al., 2017). We refer readers to Appendix F for the training detail. In our experiments, we compare three training scenarios: (1) CIFAR-100, (2) CIFAR-100 + 100 images under random lighting, and (3) CIFAR-100 + 100 images under adversarial lighting. Comparing to the accuracy reported in (Zagoruyko & Komodakis, 2016), WideResNets trained on these three cases all have comparable performance (≈ 77%) on the CIFAR-100 test set.
Testing We create a test set of real photos, captured in a laboratory setting with controlled lighting and camera parameters: we photographed oranges using a calibrated Prosilica GT 1920 camera under different lighting conditions, each generated by projecting different lighting patterns using an LG PH550 projector. This hardware lighting setup projects lighting patterns from a fixed solid angle of directions onto the scene objects. Figure 11 illustrates samples from the 500 real photographs of our dataset. We evaluate the robustness of our classifier models according to test accuracy. Of note, average prediction accuracies over five trained WideResNets on our test data under the three training cases are (1) 4.6%, (2) 40.4%, and (3) 65.8%. This result
supports the fact that training on rendered images can improve the networks’ performance on real photographs. Our preliminary experiments motivate the potential of relying on rendered adversarial training to increase the robustness to visual phenomena present in real-world inputs.
6 LIMITATIONS & FUTURE WORK
Using parametric norm-balls to remove the lack of realism of pixel norm-balls is only the first step towards bringing adversarial machine learning to the real world. More evaluations beyond laboratory experimental data could uncover the potential of rendered adversarial data augmentation. Coupling the differentiable renderer with methods for reconstructing 3D scenes, such as (Veeravasarapu et al., 2017b; Tremblay et al., 2018), has the potential to develop a complete pipeline for rendered adversarial training. We can take a small set of real images, construct 3D virtual scenes which have real image statistics, use our approach to manipulate the predicted parameters to construct the parametric adversarial
examples, then perform rendered adversarial training. This direction has the potential to produce limitless simulated adversarial data augmentation for real-world tasks.
Our differentiable renderer models the change of realistic environment lighting and geometry. Incorporating real-time rendering techniques from the graphics community could further improve the quality of rendering. Removing the locally constant texture assumption could improve our results. Extending the derivative computation to materials could enable “adversarial materials”. Incorporating derivatives of the visibility change and propagating gradient information to the shape skeleton could also create “adversarial poses”. These extensions offer a set of tools for modeling real security scenarios. For instance, we can train a self-driving car classifier that can robustly recognize pedestrians under different poses, lightings, and cloth deformations.
ACKNOWLEDGMENTS
This work is funded in part by NSERC Discovery Grants (RGPIN–2017–05235 & RGPAS–2017–507938), Connaught Funds (NR2016–17), the Canada Research Chairs Program, the Fields Institute, and gifts by Adobe Systems Inc., Autodesk Inc., MESH Inc. We thank members of Dynamic Graphics Project for feedback and draft reviews; Wenzheng Chen for photography equipments; Colin Raffel and David Duvenaud for discussions and feedback.
Supplementary Material
A COMPARISON BETWEEN PERTURBATION SPACES
We extend our comparisons against pixel norm-balls methods (Figure 1) by visualizing the results and the generated perturbations (Figure 12). We hope this figure elucidates that our parametric perturbations are more realistic across several scales of perturbations.
B PHYSICALLY BASED RENDERING
Physically based rendering (PBR) seeks to model the flow of light, typically under the assumption that there exists a collection of light sources that generate light; a camera that receives this light; and a scene that modulates the flow of light between the light sources and camera (Pharr et al., 2016). What follows is a brief discussion of the general task of rendering an image from a scene description and the approximations we take in order to make our renderer efficient yet differentiable.
Computer graphics has dedicated decades of effort into developing methods and technologies to enable PBR to synthesize photorealistic images under a large gamut of performance requirements. Much of this work is focused around taking approximations of the cherished Rendering equation (Kajiya, 1986), which describes the propagation of light through a point in space. If we let u_o be the output radiance, p be the point in space, ω_o be the output direction, u_e be the emitted radiance, u_i be the incoming radiance, ω_i be the incoming angle, and f_r be the way light is reflected off the material at that given point in space, we have:
u_o(p, ω_o) = u_e(p, ω_o) + ∫_{S²} f_r(p, ω_i, ω_o) u_i(p, ω_i)(ω_i · n) dω_i.
From now on we will ignore the emission term u_e as it is not pertinent to our discussion. Furthermore, because the speed of light is substantially faster than the exposure time of our eyes, what we perceive is not the propagation of light at an instant, but the steady-state solution to the rendering equation evaluated at every point in space. Explicitly computing this steady state is intractable for our applications and will mainly serve as a reference against which to place the plethora of assumptions and simplifications we make for the sake of tractability. Many of these methods focus on ignoring light with nominal effects on the final rendered image vis-a-vis assumptions on the way light travels. For instance, light is usually assumed to have nominal interactions with air, which is described as the assumption that the space between objects is a vacuum, which constrains the interactions of light to the objects in a scene. Another common assumption is that light does not penetrate objects, which makes it difficult to render objects like milk and human skin¹. This constrains the complexity of light propagation to the behavior of light bouncing off of object surfaces.

¹ This is why simple renderers make these sorts of objects look like plastic.
B.1 LOCAL ILLUMINATION
It is common to see assumptions that limit the number of bounces light is allowed. In our case we chose to assume that the steady state is sufficiently approximated by an extremely low number of iterations: one. This means that it seems sufficient to model the lighting of a point in space by the light sent to it directly by light sources. Working with such a strong simplification does, of course, lead to a few artifacts. For instance, light occluded by other objects is ignored, so shadows disappear and auxiliary techniques are usually employed to evaluate shadows (Williams, 1978; Miller, 1994).
When this assumption is coupled with a camera we approach what is used in standard rasterization systems such as OPENGL (Shreiner & Group, 2009), which is what we use. These systems compute the illumination of a single pixel by determining the fragment of an object visible through that pixel and only computing the light that traverses directly from the light sources, through that fragment, to that pixel. The lighting of a fragment is therefore determined by a point and the surface normal at that point, so we write the fragment’s radiance as R(p,n, ωo) = uo(p, ωo):
R(p, n, ω_o) = ∫_{S²} f_r(p, ω_i, ω_o) u_i(p, ω_i)(ω_i · n) dω_i. (6)
B.2 LAMBERTIAN MATERIAL
Figure 15: We consider Lambertian materials (left), where light is reflected uniformly in every direction, as opposed to non-Lambertian materials (right).
Each point on an object has a model approximating the transfer of incoming light to a given output direction f_r, which is usually called the material. On a single object the material parameters may vary quite a bit, and the correspondence between points and material parameters is usually called the texture map, which forms the texture of an object. There exists a wide gamut of material models, from mirror materials that transport light from a single input direction to a single output direction, to materials that reflect light evenly in all directions, to materials like brushed metal that reflect differently along different angles. For the sake of this document we only consider diffuse materials, also called Lambertian materials, where we assume that incoming light is reflected uniformly, i.e., f_r is a constant function with respect to angle, which we denote f_r(p, ω_i, ω_o) = ρ(p):
R(p, n) = ρ(p) ∫_{Ω(n)} u(p, ω)(ω · n) dω. (7)
This function ρ is usually called the albedo, which can be perceived as color on the surface for diffuse material, and we reduce our integration domain to the upper hemisphere Ω(n) in order to model light not bouncing through objects. Furthermore, since the only ω and u remaining are the incoming ones, we can now suppress the “incoming” in our notation and just use ω and u respectively.
B.3 ENVIRONMENT MAPPING
The illumination of static, distant objects such as the ground, the sky, or mountains does not change in any noticeable fashion when objects in a scene are moved around, so u can be written entirely in terms of ω, u(p, ω) = u(ω). If their illumination is constant, it seems prudent to pre-compute or cache their contributions to the illumination of a scene. This is what is usually called environment mapping, and it fits into the rendering equation as a representation of the total lighting of a scene, i.e., the total incoming radiance u_i. Because the environment is distant, it is common to also assume that the position of the object receiving light from an environment map does not matter, which simplifies u_i to be independent of position:
R(p, n) = ρ(p) ∫_{Ω(n)} u(ω)(ω · n) dω. (8)
B.4 SPHERICAL HARMONICS
Despite all of our simplifications, the inner integral is still a fairly generic function over S2. Many techniques for numerically integrating the rendering equation have emerged in the graphics community and we choose one which enables us to perform pre-computation and select a desired spectral accuracy: spherical harmonics. Spherical harmonics are a basis on S2 so, given a spherical harmonics expansion of the integrand, the evaluation of the above integral can be reduced to a weighted product of coefficients. This particular basis is chosen because it acts as a sort of Fourier basis for functions on the sphere and so the bases are each associated with a frequency, which leads to a convenient multi-resolution structure. In fact, the rendering of diffuse objects under distant lighting can be 99% approximated by just the first few spherical harmonics bases (Ramamoorthi & Hanrahan, 2001).
We will only need to note that the spherical harmonics bases Y_l^m are denoted with subscript l as the frequency, and that there are 2l + 1 functions per frequency, denoted by superscripts m between −l and l inclusively. For further details on them please take a glance at Appendix C. If we approximate a function f in terms of spherical harmonics coefficients, f ≈ ∑_{l,m} f_{l,m} Y_l^m, the integral can be precomputed as

∫_{S²} f ≈ ∫_{S²} ∑_{l,m} f_{l,m} Y_l^m = ∑_{l,m} f_{l,m} ∫_{S²} Y_l^m, (9)
Thus we have defined a reduced rendering equation that can be efficiently evaluated using OPENGL while maintaining differentiability with respect to lighting and vertices. In the following appendix we will derive the derivatives necessary to implement our system.
C DIFFERENTIABLE RENDERER
Rendering computes an image of a 3D shape given lighting conditions and the prescribed material properties on the surface of the shape. Our differentiable renderer assumes Lambertian reflectance, distant light sources, local illumination, and piece-wise constant textures. We will discuss how to explicitly compute the derivatives used in the main body of this text. Here we give a detailed discussion about spherical harmonics and their advantages.
C.1 SPHERICAL HARMONICS
Spherical harmonics are usually defined in terms of the Legendre polynomials, which are a class of orthogonal polynomials defined by the recurrence relation
P_0 = 1 (10)
P_1 = x (11)
(l + 1) P_{l+1}(x) = (2l + 1) x P_l(x) − l P_{l−1}(x). (12)
The associated Legendre polynomials are a generalization of the Legendre polynomials and can be fully defined by the relations
P_l^0 = P_l (13)
(l − m + 1) P_{l+1}^m(x) = (2l + 1) x P_l^m(x) − (l + m) P_{l−1}^m(x) (14)
2mx P_l^m(x) = −√(1 − x²) [ P_l^{m+1}(x) + (l + m)(l − m + 1) P_l^{m−1}(x) ]. (15)
Using the associated Legendre polynomials P_l^m we can define the spherical harmonics basis as

Y_l^m(θ, φ) = K_l^m ·
  { (−1)^m √2 P_l^{−m}(cos θ) sin(−mφ),   m < 0
    (−1)^m √2 P_l^m(cos θ) cos(mφ),       m > 0
    P_l^0(cos θ),                         m = 0 },   (16)

where K_l^m = √[ (2l + 1)(l − |m|)! / (4π (l + |m|)!) ]. (17)
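For reference, the real-valued basis of Equation (16) can be cross-checked against SciPy's complex spherical harmonics; a sketch, noting that conventions for the Condon–Shortley phase vary between references, so treat the signs as an assumption to verify against Equation (16):

```python
import numpy as np
from scipy.special import sph_harm

def real_sph_harm(m, l, theta, phi):
    """Real spherical harmonics Y_l^m(theta, phi) in the style of Equation (16).
    SciPy's sph_harm(m, l, azimuth, polar) returns the complex basis."""
    if m < 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(-m, l, phi, theta).imag
    if m > 0:
        return np.sqrt(2.0) * (-1) ** m * sph_harm(m, l, phi, theta).real
    return sph_harm(0, l, phi, theta).real
```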
We will use the fact that the associated Legendre polynomials correspond to the spherical harmonics bases that are rotationally symmetric along the z axis (m = 0).
In order to incorporate spherical harmonics into Equation 8, we change the integral domain from the upper hemisphere Ω(n) back to S² via a max operation:

R(p, n) = ρ(p) ∫_{Ω(n)} u(ω)(ω · n) dω (18)
        = ρ(p) ∫_{S²} u(ω) max(ω · n, 0) dω. (19)
We see that the integral is comprised of two components: a lighting component u(ω) and a component that depends on the normal max(ω · n, 0). The strategy is to pre-compute the two components by projecting onto spherical harmonics, and evaluating the integral via a dot product at runtime, as we will now derive.
C.2 LIGHTING IN SPHERICAL HARMONICS
Approximating the lighting component u(ω) in Equation 19 using spherical harmonics Y_l^m up to band n can be written as

u(ω) ≈ ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} Y_l^m(ω),

where U_{l,m} ∈ R are coefficients. Using the orthogonality of spherical harmonics, we can evaluate these coefficients as an integral between u(ω) and Y_l^m(ω):

U_{l,m} = 〈u, Y_l^m〉_{S²} = ∫_{S²} u(ω) Y_l^m(ω) dω,
which can be evaluated via quadrature.
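A minimal quadrature sketch for these coefficients, reusing the `real_sph_harm` helper above; the grid resolution is arbitrary and `u` is any callable lighting function of (theta, phi):

```python
import numpy as np

def project_lighting(u, n_bands=7, res=64):
    """Riemann-sum projection of u(theta, phi) onto SH bands 0..n_bands-1."""
    theta = (np.arange(res) + 0.5) * np.pi / res         # polar samples
    phi = (np.arange(2 * res) + 0.5) * np.pi / res       # azimuth samples
    T, P = np.meshgrid(theta, phi, indexing="ij")
    dA = (np.pi / res) ** 2 * np.sin(T)                  # area element
    uvals = u(T, P)
    return {(l, m): np.sum(uvals * real_sph_harm(m, l, T, P) * dA)
            for l in range(n_bands) for m in range(-l, l + 1)}
```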
C.3 CLAMPED COSINE IN SPHERICAL HARMONICS
So far, we have projected the lighting term u(ω) onto the spherical harmonics basis. To complete evaluating Equation 19 we also need to approximate the second component max(ω · n, 0) in spherical harmonics. This is the so-called clamped cosine function:

g(ω, n) = max(ω · n, 0) = ∑_{l=0}^{n} ∑_{m=−l}^{l} G_{l,m}(n) Y_l^m(ω),

where G_{l,m}(n) ∈ R can be computed by projecting g(ω, n) onto Y_l^m(ω):

G_{l,m}(n) = ∫_{S²} max(ω · n, 0) Y_l^m(ω) dω.
Unfortunately, this formulation turns out to be tricky to compute. Instead, the common practice is to analytically compute the coefficients for the unit z direction, G̃_{l,m} = G_{l,m}(n_z) = G_{l,m}([0, 0, 1]^T), and evaluate the coefficients for different normals G_{l,m}(n) by rotating G̃_{l,m}. The coefficients G̃_{l,m} can be computed analytically:

G̃_{l,m} = ∫_{S²} max(ω · n_z, 0) Y_l^m(ω) dω
        = ∫_0^{2π} ∫_0^{π} max([sin θ cos φ, sin θ sin φ, cos θ] · [0, 0, 1]^T, 0) Y_l^m(θ, φ) sin θ dθ dφ
        = ∫_0^{2π} ∫_0^{π} max(cos θ, 0) Y_l^m(θ, φ) sin θ dθ dφ
        = ∫_0^{2π} ∫_0^{π/2} cos θ Y_l^m(θ, φ) sin θ dθ dφ. (20)
In fact, because max(ω · n_z, 0) is rotationally symmetric around the z-axis, its projection onto Y_l^m(ω) will have many zeros, except for the rotationally symmetric spherical harmonics Y_l^0. In other words, G̃_{l,m} is non-zero only when m = 0. So we can simplify Equation 20 to

G̃_l = G̃_{l,0} = 2π ∫_0^{π/2} cos θ Y_l^0(θ) sin θ dθ.
The evaluation of this integral can be found in Appendix A in (Basri & Jacobs, 2003). We provide this here as well:

G̃_l = { √π / 2,                                                          l = 0
        √(π/3),                                                           l = 1
        (−1)^{l/2+1} (l−2)! √((2l+1)π) / (2^l (l/2 − 1)! (l/2 + 1)!),     l ≥ 2, even
        0,                                                                l ≥ 2, odd }.
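This piecewise formula translates directly into code; a minimal sketch:

```python
from math import factorial, pi, sqrt

def clamped_cosine_coeff(l):
    """Analytic coefficients G~_l of max(cos(theta), 0) (Basri & Jacobs, 2003)."""
    if l == 0:
        return sqrt(pi) / 2.0
    if l == 1:
        return sqrt(pi / 3.0)
    if l % 2 == 1:
        return 0.0                                   # odd bands vanish
    k = l // 2
    return ((-1) ** (k + 1) * factorial(l - 2) * sqrt((2 * l + 1) * pi)
            / (2 ** l * factorial(k - 1) * factorial(k + 1)))
```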
The spherical harmonics coefficients G_{l,m}(n) of the clamped cosine function g(ω, n) can be computed by rotating G̃_l (Sloan et al., 2005) using this formula:

G_{l,m}(n) = √(4π / (2l + 1)) G̃_l Y_l^m(n). (21)
So far we have projected the two terms in Equation 19 into the spherical harmonics basis. Orthogonality of spherical harmonics makes the evaluation of this integral straightforward:

∫_{S²} u(ω) max(ω · n, 0) dω = ∫_{S²} [ ∑_{l,m} U_{l,m} Y_l^m(ω) ][ ∑_{j,k} G_{j,k}(n) Y_j^k(ω) ] dω
                             = ∑_{j,k,l,m} U_{l,m} G_{j,k}(n) δ_j^l δ_k^m (22)
                             = ∑_{l,m} U_{l,m} G_{l,m}(n). (23)
This, in conjunction with Equation 21, allows us to derive the rendering equation using spherical harmonics lighting for Lambertian objects:

R(p, n) = ρ(p) ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} √(4π / (2l + 1)) G̃_l Y_l^m(n). (24)
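Putting the pieces together, Equation (24) reduces shading to a weighted sum of basis evaluations; a sketch reusing the helpers above (`real_sph_harm`, `clamped_cosine_coeff`), with the band count as a free choice:

```python
import numpy as np
from math import pi, sqrt

def shade(albedo, normal, light_coeffs, n_bands=3):
    """Equation (24): Lambertian radiance at a point with unit `normal`
    under SH lighting coefficients light_coeffs[(l, m)]."""
    theta = np.arccos(np.clip(normal[2], -1.0, 1.0))
    phi = np.arctan2(normal[1], normal[0])
    r = 0.0
    for l in range(n_bands):
        scale = sqrt(4.0 * pi / (2 * l + 1)) * clamped_cosine_coeff(l)
        for m in range(-l, l + 1):
            r += light_coeffs[(l, m)] * scale * real_sph_harm(m, l, theta, phi)
    return albedo * r
```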
So far we have only considered the shading of a specific point p with surface normal n. If we consider the rendered image I given a shape V, lighting U, and camera parameters η, the image I is the evaluation of the rendering equation R at each point in V visible through each pixel in the image. This pixel-to-point mapping is determined by η. Therefore, we can write I as

I(V, U, η) = ρ(V, η) · F(V, U),   where   F(V, U) = ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} √(4π / (2l + 1)) G̃_l Y_l^m(N(V)), (25)

and N(V) is the surface normal. We abuse notation and use ρ(V, η) to represent the texture of V mapped to the image space through η.
C.4 LIGHTING AND TEXTURE DERIVATIVES
For our applications we must differentiate Equation 25 with respect to lighting and material parameters. The derivative with respect to the lighting coefficients U can be obtained by
∂I/∂U = (∂ρ/∂U) F + ρ (∂F/∂U) (26)
      = 0 + ρ ∑_{l=0}^{n} ∑_{m=−l}^{l} ∂F/∂U_{l,m}. (27)
This is the Jacobian matrix that maps from spherical harmonics coefficients to pixels. The term ∂F/∂U_{l,m} can then be computed as

∂F/∂U_{l,m} = √(4π / (2l + 1)) G̃_l Y_l^m(N(V)). (28)
The derivative with respect to texture is defined by
∂I/∂ρ = ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} √(4π / (2l + 1)) G̃_l Y_l^m(N(V)). (29)
Note that we assume texture variations are piece-wise constant with respect to our triangle mesh discretization.
D DIFFERENTIATING SKYLIGHT PARAMETERS
To model possible outdoor daylight conditions, we use the analytical Preetham skylight model (Preetham et al., 1999). This model is calibrated by atmospheric data and parameterized by intuitive parameters: turbidity τ, which describes the cloudiness of the atmosphere, and two polar angles θ_s ∈ [0, π/2], φ_s ∈ [0, 2π], which encode the direction of the sun. Note that θ_s, φ_s are not the polar angles θ, φ for representing the incoming light direction ω in u(ω). The spherical harmonics representation of the Preetham skylight is presented in (Habel et al., 2008) as
u(ω) = ∑_{l=0}^{6} ∑_{m=−l}^{l} U_{l,m}(θ_s, φ_s, τ) Y_l^m(ω).
This is derived by first performing a non-linear least squares fit to write U_{l,m} as a polynomial of θ_s and τ, which lets them solve for Ũ_{l,m}(θ_s, τ) = U_{l,m}(θ_s, 0, τ):

Ũ_{l,m}(θ_s, τ) = ∑_{i=0}^{13} ∑_{j=0}^{7} (p_{l,m})_{i,j} θ_s^i τ^j,
where (p_{l,m})_{i,j} are scalar coefficients; then U_{l,m}(θ_s, φ_s, τ) can be computed by applying a spherical harmonics rotation with φ_s using

U_{l,m}(θ_s, φ_s, τ) = Ũ_{l,m}(θ_s, τ) cos(mφ_s) + Ũ_{l,−m}(θ_s, τ) sin(mφ_s).
We refer the reader to (Preetham et al., 1999) for more detail. For the purposes of this article we just need the above form to compute the derivatives.
D.1 DERIVATIVES
The derivatives of the lighting with respect to the skylight parameters (θ_s, φ_s, τ) are

∂U_{l,m}(θ_s, φ_s, τ)/∂φ_s = −m Ũ_{l,m}(θ_s, τ) sin(mφ_s) + m Ũ_{l,−m}(θ_s, τ) cos(mφ_s) (30)

∂U_{l,m}(θ_s, φ_s, τ)/∂θ_s = ∂[Ũ_{l,m}(θ_s, τ) cos(mφ_s) + Ũ_{l,−m}(θ_s, τ) sin(mφ_s)]/∂θ_s (31)
 = ∑_{ij} i θ_s^{i−1} τ^j (p_{l,m})_{i,j} cos(mφ_s) + ∑_{ij} i θ_s^{i−1} τ^j (p_{l,−m})_{i,j} sin(mφ_s) (32)

∂U_{l,m}(θ_s, φ_s, τ)/∂τ = ∑_{ij} j θ_s^i τ^{j−1} (p_{l,m})_{i,j} cos(mφ_s) + ∑_{ij} j θ_s^i τ^{j−1} (p_{l,−m})_{i,j} sin(mφ_s) (33)
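A sketch of these partials, assuming the fitted coefficients are available as 14×8 arrays `p[(l, m)]` (a hypothetical layout of the (p_{l,m})_{i,j} tables from the Preetham fit):

```python
import numpy as np

def skylight_derivatives(p, theta_s, phi_s, tau, l, m):
    """Equations (30)-(33): partials of U_{l,m}(theta_s, phi_s, tau)."""
    i = np.arange(14)[:, None]                 # theta_s exponents 0..13
    j = np.arange(8)[None, :]                  # tau exponents 0..7
    poly = lambda c, ti, tj: np.sum(c * ti * tj)
    th_i, ta_j = theta_s ** i, tau ** j
    U_pos = poly(p[(l, m)], th_i, ta_j)        # U~_{l,m}(theta_s, tau)
    U_neg = poly(p[(l, -m)], th_i, ta_j)       # U~_{l,-m}(theta_s, tau)
    dU_dphi = -m * U_pos * np.sin(m * phi_s) + m * U_neg * np.cos(m * phi_s)
    dth = i * theta_s ** np.maximum(i - 1, 0)  # d(theta_s^i)/d(theta_s)
    dU_dtheta = (poly(p[(l, m)], dth, ta_j) * np.cos(m * phi_s)
                 + poly(p[(l, -m)], dth, ta_j) * np.sin(m * phi_s))
    dta = j * tau ** np.maximum(j - 1, 0)      # d(tau^j)/d(tau)
    dU_dtau = (poly(p[(l, m)], th_i, dta) * np.cos(m * phi_s)
               + poly(p[(l, -m)], th_i, dta) * np.sin(m * phi_s))
    return dU_dtheta, dU_dphi, dU_dtau
```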
E DERIVATIVES OF SURFACE NORMALS
Taking the derivative of the rendered image I with respect to surface normals N is an essential task for computing the derivative of I with respect to the geometry V. Specifically, the derivative of the rendering equation (Equation 25) with respect to V is
∂I/∂V = (∂ρ/∂V) F + ρ (∂F/∂V) (34)
      = (∂ρ/∂V) F + ρ (∂F/∂N)(∂N/∂V). (35)
We assume the texture variations are piece-wise constant with respect to our triangle mesh discretization and omit the first term ∂ρ/∂V as its magnitude is zero. The computation of ∂N/∂V is provided in Section 3.2. Computing ∂F/∂N_i on face i gives
∂F/∂N_i = ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} √(4π / (2l + 1)) G̃_l (∂Y_l^m/∂N_i), (36)

where ∂Y_l^m/∂N_i is the derivative of the spherical harmonics with respect to the face normal N_i.
To begin this derivation, recall the relationship between a unit normal vector n = (n_x, n_y, n_z) and its corresponding polar angles θ, φ:

θ = cos⁻¹( n_z / √(n_x² + n_y² + n_z²) ),   φ = tan⁻¹( n_y / n_x ).
We can compute the derivative of spherical harmonics with respect to the normal vector through

∂Y_l^m(θ, φ)/∂n = K_l^m ·
  { (−1)^m √2 [ (∂P_l^{−m}(cos θ)/∂θ)(∂θ/∂n) sin(−mφ) + P_l^{−m}(cos θ)(∂ sin(−mφ)/∂φ)(∂φ/∂n) ],   m < 0
    (−1)^m √2 [ (∂P_l^m(cos θ)/∂θ)(∂θ/∂n) cos(mφ) + P_l^m(cos θ)(∂ cos(mφ)/∂φ)(∂φ/∂n) ],           m > 0
    (∂P_l^0(cos θ)/∂θ)(∂θ/∂n),                                                                      m = 0 }

= K_l^m ·
  { (−1)^m √2 [ (∂P_l^{−m}(cos θ)/∂θ)(∂θ/∂n) sin(−mφ) − m P_l^{−m}(cos θ) cos(−mφ)(∂φ/∂n) ],   m < 0
    (−1)^m √2 [ (∂P_l^m(cos θ)/∂θ)(∂θ/∂n) cos(mφ) − m P_l^m(cos θ) sin(mφ)(∂φ/∂n) ],           m > 0
    (∂P_l^0(cos θ)/∂θ)(∂θ/∂n),                                                                  m = 0 }
(37)
Note that the derivative of the associated Legendre polynomials P_l^m(cos θ) can be computed by applying the recurrence formula of Dunster (2010):

∂P_l^m(cos θ)/∂θ = [ −cos θ (l + 1) P_l^m(cos θ) + (l − m + 1) P_{l+1}^m(cos θ) ] / (cos² θ − 1) × (−sin θ)
                 = [ −cos θ (l + 1) P_l^m(cos θ) + (l − m + 1) P_{l+1}^m(cos θ) ] / sin θ. (38)
Thus the derivatives of the polar angles (θ, φ) with respect to surface normals n = [n_x, n_y, n_z] are

∂θ/∂n = [∂θ/∂n_x, ∂θ/∂n_y, ∂θ/∂n_z] = [n_x n_z, n_y n_z, −(n_x² + n_y²)] / ( (n_x² + n_y² + n_z²) √(n_x² + n_y²) ), (39)

∂φ/∂n = [∂φ/∂n_x, ∂φ/∂n_y, ∂φ/∂n_z] = [ −n_y / (n_x² + n_y²), n_x / (n_x² + n_y²), 0 ]. (40)
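These two Jacobians are cheap to evaluate; a sketch (note the singularity when n_x = n_y = 0, i.e., a normal pointing straight along z):

```python
import numpy as np

def polar_angle_jacobians(n):
    """Equations (39)-(40): d(theta)/dn and d(phi)/dn for n = (nx, ny, nz)."""
    nx, ny, nz = n
    r2 = nx ** 2 + ny ** 2 + nz ** 2
    rxy2 = nx ** 2 + ny ** 2                   # singular if this is zero
    dtheta_dn = np.array([nx * nz, ny * nz, -rxy2]) / (r2 * np.sqrt(rxy2))
    dphi_dn = np.array([-ny, nx, 0.0]) / rxy2
    return dtheta_dn, dphi_dn
```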
In summary, the results of Equation 37, Equation 38, Equation 39, and Equation 40 tell us how to compute ∂Y_l^m/∂N_i. Then the derivative of pixel j with respect to a vertex p which belongs to face i can be computed as

∂I_j/∂V_p ≈ ρ_j (∂F/∂N_i)(∂N_i/∂V_p)
          = ρ_j ∑_{l=0}^{n} ∑_{m=−l}^{l} U_{l,m} √(4π / (2l + 1)) G̃_l (∂Y_l^m(θ, φ)/∂N_i)(∂N_i/∂V_p). (41)
F ADVERSARIAL TRAINING IMPLEMENTATION DETAIL
Our adversarial training is based on the basic idea of injecting adversarial examples into the training set at each step and continuously updating the adversaries according to the current model parameters (Goodfellow et al., 2015; Kurakin et al., 2017). Our experiments inject 100 adversarial lighting examples into the CIFAR-100 data (≈ 0.17% of the training set) and keep updating these adversaries at each epoch.
We compute the adversarial lighting examples using the orange models collected from cgtrader.com and turbosquid.com. We use five gray-scale background colors with intensities 0.0, 0.25, 0.5, 0.75, 1.0 to mimic images in CIFAR-100, which contains many pure color backgrounds. Our orthographic cameras are placed at polar angle θ = π/3 with 10 uniformly sampled azimuthal angles ranging from φ = 0 to 2π. Our initial spherical harmonics lighting is the same as
other experiments, using the real-world lighting data provided in (Ramamoorthi & Hanrahan, 2001). Our step size for computing adversaries is 0.05 along the direction of lighting gradients. We run our adversarial lighting iterations until the network is fooled or a maximum of 30 iterations is reached, to avoid overly extreme lighting conditions, such as turning the lights off.
Our random lighting examples are constructed at each epoch by randomly perturbing the lighting coefficients in the range -0.5 to 0.5.
When training the 16-layer WideResNet (Zagoruyko & Komodakis, 2016) with wide-factor 4, we use batch size 128, learning rate 0.125, dropout rate 0.3, and the standard cross-entropy loss. We implement the training using PYTORCH (Paszke et al., 2017) with the SGD optimizer, setting the Nesterov momentum to 0.9 and weight decay to 5e-4. We train the model for 150 epochs and use the one with the best accuracy on the validation set. Figure 16 shows examples of our adversarial lights at different training stages. In the early stages, the model is not robust to different lighting conditions, thus small lighting perturbations are sufficient to fool the model. In the late stages, the network becomes more robust to different lightings; it then requires dramatic changes to fool the model, or the attack even fails within the 30 iterations.
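A high-level sketch of this training loop, assuming a `make_adversaries(model)` callable that runs the adversarial-lighting iterations above against the current model and returns the 100 rendered images with labels; the dataset wrappers are our illustrative choice:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

def train_with_rendered_adversaries(model, cifar_train, make_adversaries,
                                    optimizer, epochs=150, batch_size=128):
    """Each epoch: recompute the adversaries against the current model and
    append them (~0.17% of the data) to the CIFAR-100 training set."""
    ce = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        adv_x, adv_y = make_adversaries(model)
        data = ConcatDataset([cifar_train, TensorDataset(adv_x, adv_y)])
        model.train()
        for x, y in DataLoader(data, batch_size=batch_size, shuffle=True):
            optimizer.zero_grad()
            loss = ce(model(x), y)
            loss.backward()
            optimizer.step()
```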
G EVALUATE RENDERING QUALITY
We evaluated our rendering quality by whether our rendered images are recognizable by models trained on real photographs. Although large 3D shape datasets, such as ShapeNet (Chang et al., 2015), are available, they do not have geometries or textures at the resolutions necessary to create realistic renderings. We collected 75 high-quality textured 3D shapes from cgtrader.com and turbosquid.com to evaluate our rendering quality. We augmented the shapes by changing the field of view, backgrounds, and viewing directions, then kept the configurations that were correctly classified by a pre-trained ResNet-101 on ImageNet. Specifically, we place the centroid, calculated as the weighted average of the mesh vertices where the weights are the vertex areas, at the origin and normalize shapes to range -1 to 1; the field of view is chosen to be 2 and 3 in the same units as the normalized shape; background images include plain colors
and real photos, which have small influence on model predictions; viewing directions are chosen to be 60 degree zenith and uniformly sampled 16 views from 0 to 2π azimuthal angle. In Figure 17, we show that the histogram of model confidence on the correct labels over 10,000 correctly classified rendered images from our differentiable renderer. The confidence is computed using softmax function and the results show that our rendering quality is faithful enough to be recognized by models trained on natural images. | 1. What is the focus and contribution of the paper on generating adversary examples?
2. What are the strengths of the proposed approach, particularly in terms of its realism and flexibility?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Summary:
This work presents a method to generate adversary examples capable of fooling a neural network classifier. Szegedy et al. (2013) were the first to expose the weakness of neural networks against adversarial attacks, by adding human-imperceptible noise to images to induce misclassification. Since then, several works tackled this problem by modifying the image directly in the pixel space: the norm-balls convention. The authors argue that this leads to non-realistic attacks and that a network would not benefit from training with these adversarial images when performing in the real world. Their solution and contributions are parametric norm-balls: unlike state-of-the-art methods, they perform perturbations in the image formation space, namely the geometry and the lighting, which are indeed perturbations that could happen in real life. For that, they defined a differentiable renderer by making some assumptions to simplify its expression compared to solving a light transport equation. The main simplifications are the direct illumination to gain computation efficiency and the distant illumination and diffuse material assumptions to represent lighting in terms of spherical harmonics as in Ramamoorthi et al. (2001), which require only 9 parameters to approximate lighting. This allows them to analytically differentiate their loss function with respect to the geometry and lighting and therefore generate their adversary examples via gradient descent. They show that their adversary images generalize to other classifiers than the one used (ResNet). They then show that injecting these images into the training set increases the robustness of WideResNet against real attacks. These real attack images were taken by the authors in a laboratory with varying illumination.
Strength:
- The proposed perturbations in the image formation space simulate the real life scenario attacks.
- The presented results show that the generated adversary images not only fool the classifier used to compute the loss but also new classifiers (different from the one used to compute the loss). As a consequence, the generated adversary images increase the robustness of the considered classifier.
- Flexibility in their cost function allows for diverse types of attacks: the same modified geometry can fool a classifier in several views, either into detecting the same object or detecting different false objects under different views.
Major comments:
- The method can only compute synthetic adversary examples, unlike the state-of-the-art.
- The main contribution claimed by the authors is that their perturbations are realistic and that this would better increase the robustness of classifiers against real attacks. However, they do not give any comparison to state-of-the-art methods, as would be expected.
Minor comments:
- Even if the paper is well written, there are still some typos.
ICLR | Title
Collaborative Three-Stream Transformers for Video Captioning
Abstract
As the most critical components in a sentence, subject, predicate, and object require special attention in the video captioning task. In this paper, we design the collaborative three-stream transformers to model the interactions of objects, and the actions/relations of objects between different modalities. Specifically, it is formed by three branches of transformers used to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain between videos and text, detected objects and text, and actions and text. Meanwhile, we design a cross-modality attention module to align the interactions modeled by the three branches of transformers. That is, an affinity matrix is computed to help align visual modalities by injecting the information from other interactions. In this way, the three branches of transformers can support each other to exploit the most discriminative semantic information in different modalities for accurate predictions of captions, especially for the subject, predicate, and object parts in a sentence. The whole model is trained in an end-to-end fashion. Extensive experiments conducted on two large-scale challenging datasets, i.e., YouCookII and ActivityNet Captions, demonstrate that the proposed method performs favorably against the state-of-the-art methods.
1 INTRODUCTION
Video captioning aims to generate natural language descriptions of video content, and has attracted much attention in recent years along with the rapidly increasing amount of video recorded in daily life. It helps blind or D/deaf people enjoy videos. However, as noted in (Xiong et al., 2018; Park et al., 2019; Lei et al., 2020), it is very challenging to generate natural paragraph descriptions due to the difficulty of producing relevant, less redundant, and semantically coherent sentences.
Recently, researchers attempt to use the transformer model to solve the video captioning task (Vaswani et al., 2017; Dai et al., 2019; Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021), which relies on the self-attention mechanism to describe the interactions between different modalities of the input data, such as video, audio, and text. In practice, the aforementioned methods generally concatenate the features extracted from individual modalities, or use self-attention to model the interactions between extracted features. Although they advance the state-of-the-art of video captioning, it is still far from satisfactory in real applications due to the domain gap between different modalities. Then, a question arises, “How do we fill in the domain gap and capture the interactions among visual and linguistic modalities for video captioning?”
Before answering this question, let us first look at basic grammar rules. Generally, a sentence (Krishna et al., 2017a; Zhou et al., 2018a) takes the following form, i.e.,
Women wear Arabian skirts on a stage.
Notably, Subject, Object, and Predicate are the three most critical elements in a sentence. They indicate the objects, the actions of objects, and the interactions among different objects. To improve the accuracy of the Subject, Object, and Predicate predictions in a sentence, we propose a new COllaborative three-Stream Transformers (COST) to model the visual-linguistic interactions of different granularities in spatio-temporal domain between different modalities. Specifically, the COST model is formed by three branches of transformers, including the Video-Text, Detection-Text, and
Action-Text transformers. The Video-Text transformer is used to model the interactions between the global video appearances and linguistic texts. The Detection-Text transformer is used to model the interactions between the objects in individual video frames, which enforces the model to focus on the objects being aligned in the visual and linguistic modalities, i.e., indicating the Subjects and Objects in caption sentences. The Action-Text transformer is designed to model the actions/relations of objects between the visual and linguistic modalities, i.e., indicating the Predicate in caption sentences. Meanwhile, to align the interactions modeled by the three branches of transformers, we introduce a cross-modality attention model. In particular, an affinity matrix is computed to represent the relevance among visual modalities and help inject the information from other interactions. In this way, different branches of transformers support each other to exploit the most discriminative semantic information in different modalities, and enforce the model to pay more attention to generating accurate Subject, Object and Predicate predictions. The whole model is trained in an end-to-end fashion using the Adam algorithm (Kingma & Ba, 2015).
Several experiments are conducted on two publicly challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a), to demonstrate the superior performance of the proposed method compared to the state-of-the-art methods (Zhou et al., 2018b; Dai et al., 2019; Park et al., 2019; Lei et al., 2020). Specifically, our COST method achieves the top CIDEr scores, i.e., 60.78% and 29.64%, on the YouCookII val set and the ActivityNet ae-test set, improving by 3.54% and 1.45% compared to the state-of-the-art.
The main contributions of this paper are summarized as follows. (1) We develop the new collaborative three-stream transformers to learn the interactions between the visual-linguistic modalities of different granularities in spatio-temporal domain, which enforces the model to generate the accurate Subject, Object, and Predicate predictions. (2) To align the interactions described in the three branches of transformers, we design a cross-modality attention model. (3) Extensive experiments conducted on two challenging datasets show that our method performs favorably against the stateof-the-art methods.
2 RELATED WORKS
Network architecture. Video captioning has received much attention in recent years. Most of previous video captioning methods (Yu et al., 2016; Pan et al., 2017; Zhang et al., 2018) attempt to perform sequence-to-sequence learning using the encoder-decoder paradigm (Chen et al., 2019). The convolutional neural networks (CNNs) (Zheng et al., 2020) and long short term memory networks (LSTM) (Pei et al., 2019; Park et al., 2019) are adopted to learn discriminative feature embeddings for accurate predictions. Recently, the transformer model dominates this field due to its superior performance compared with CNNs and LSTM. Zhou et al. (2018b) propose an end-to-end trained transformer model, where the encoder is designed to encode the video into semantic representations, and the proposal decoder is used to decode from the encoding with different anchors to form video event proposals. Sun et al. (2019) design the VideoBERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens. Similarly, Lu et al. (2019) extend the BERT architecture (Devlin et al., 2019) to a multi-modal two-stream model, which processes both visual and textual inputs separately but interactions are conducted between streams. Lei et al. (2020) develop the memory-augmented recurrent transformer, which uses a highly summarized memory state from the video clips and the sentence history to facilitate better prediction of the next sentence. Different from the aforementioned methods, our method attempts to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain across different modalities, with a focus on the most critical components in the sentence, i.e., subject, predicate and object.
Multi-modal cross-attention mechanism. The interactions between different modalities are critical for the video captioning task. Recent transformer based methods (Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021) use the cross-attention module to learn correlations across different modalities. For example, Iashin & Rahtu (2020) concatenate the learned embeddings from multiple modalities, e.g., video, audio and speech, for event description. Zhu & Yang (2020) propose the tangled transformer block to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered by exploiting the contextual information. Tang et al. (2021) use frame-level dense captions as an auxiliary text input for better video and language associations, where a constrained attention loss is used to force the model to automatically focus on the best matched caption from a pool of misaligned caption candidates. In contrast, we design the cross-modality attention module integrated in the collaborative three-stream transformers to align three types of visual-linguistic interactions, leading to more discriminative semantic cues for better caption generation.
Multi-modal pre-training models. Large-scale pre-training is another effective way to improve the accuracy of captioning models. Specifically, the jointly trained video and language models (Huang et al., 2020; Luo et al., 2020; Ging et al., 2020) on the large-scale datasets, such as YouTube-8M (Abu-El-Haija et al., 2016) and HowTo100M (Miech et al., 2019) with automatic speech recognition¹ transcripts, provide discriminative features for downstream tasks. Huang et al. (2020) construct a dense video captioning dataset, i.e., Video Timeline Tags (ViTT), and explore several multi-modal sequence-to-sequence pre-training strategies using transformers (Vaswani et al., 2017). Luo et al. (2020) also use transformers (Vaswani et al., 2017) with two single-modal encoders, a cross encoder, and a decoder. Recently, Ging et al. (2020) develop the Cooperative hierarchical Transformer (COOT) to model the interactions between different levels of granularity and modalities, which achieves superior results for video captioning.
3 OUR APPROACH
As discussed above, we design the collaborative three-stream transformers to model the interactions of objects, and the actions/relations of objects, between different modalities; the model is formed by three branches of transformers, i.e., the Video-Text, Detection-Text, and Action-Text transformers. Specifically, the video and text inputs are first encoded to extract the multi-modal feature embeddings. After that, the embeddings are fed into the three-stream transformers to exploit the visual-linguistic interactions between videos and text at different granularities in the spatio-temporal domain, i.e., global videos, detections, and actions. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers. The overall architecture of the proposed method is shown in Figure 1.
3.1 MULTI-MODALITY TOKENS
Three kinds of tokens, i.e., visual tokens, linguistic tokens, and special tokens, are used to encode the video and text inputs; they are described as follows. Visual tokens. For the visual tokens, we use three kinds of tokens with different granularities in the spatio-temporal domain, namely the video tokens, the detection tokens, and the action tokens.
• Video tokens provide the global semantic information in the video sequence. In contrast to (Lei et al., 2020), we only use the appearance features extracted by Temporal Segment Networks (TSN) (Wang et al., 2016) (denoted as TSN-APP in Figure 1) as the video tokens, i.e., $\{f^v_1, f^v_2, \cdots, f^v_{N_v}\}$, where $f^v_i$ is the extracted feature of the $i$-th video clip, and $N_v$ is the number of video clips. Notably, we can also leverage the more powerful multi-modal feature extraction method COOT (Ging et al., 2020), pre-trained on the large-scale HowTo100M dataset (Miech et al., 2019), to improve the performance.
• Detection tokens are used to enforce the model to focus on the Subjects or Objects in caption sentences. Similar to (Park et al., 2019; Lu et al., 2019; Zhu & Yang, 2020), we use the Faster R-CNN method, pre-trained on the Visual Genome dataset (Krishna et al., 2017b), to detect the objects in each frame. After that, the detection features in Faster R-CNN corresponding to the objects with the highest confidence scores in K categories² are used to generate the detection tokens for each frame. We use $\{f^d_1, f^d_2, \cdots, f^d_{N_d}\}$ to denote the set of detection tokens, where $f^d_i$ is the $i$-th detection feature, and $N_d$ is the total number of detections in the video sequence.
• Action tokens are designed to enforce the model to concentrate on the Predicates in caption sentences. Following (Lei et al., 2020), the optical flow features of video sequences are extracted by TSN (Wang et al., 2016) (denoted as TSN-MOT in Figure 1) to generate the action tokens, which are used to describe the actions/relations of objects. The action tokens are denoted as $\{f^a_1, f^a_2, \cdots, f^a_{N_a}\}$, where $f^a_i$ is the motion feature of the $i$-th video clip, and $N_a$ is the total number of action tokens.

¹ https://developers.google.com/youtube/v3/docs/captions
² If the category number of the detected objects in a frame is less than K, we select the K detected objects with the highest confidence scores regardless of the object categories to generate the detection tokens.
Linguistic tokens. We break down the captions of video sequences into individual words and compute the corresponding linguistic tokens using the GloVe model (Pennington et al., 2014). The linguistic tokens are denoted as $\{f^t_1, f^t_2, \cdots, f^t_{N_t}\}$, where $f^t_i$ is the extracted feature of the $i$-th word using the GloVe model, and $N_t$ is the total number of words.
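As a minimal illustration of this step, the sketch below maps caption words to their GloVe vectors; the preloaded `glove` lookup table and the zero-vector fallback for out-of-vocabulary words are assumptions for illustration, not details stated above.

```python
import torch

def linguistic_tokens(caption, glove, dim=300):
    """Map caption words to 300-dim GloVe vectors (one linguistic token per word)."""
    words = caption.lower().split()
    # glove: dict mapping word -> 300-dim torch tensor, loaded beforehand (assumed)
    vecs = [glove.get(w, torch.zeros(dim)) for w in words]  # zeros for OOV words
    return torch.stack(vecs)  # shape (Nt, 300)
```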
Special tokens. Besides the aforementioned tokens, we also introduce two kinds of special tokens in the transformer, similar to BERT (Devlin et al., 2019). The first one is the modality type token [CLS], which is added at the beginning of the visual features to denote which modality the following tokens come from. The second kind consists of three separation tokens, i.e., [SEP], [BOS], and [EOS]: [SEP] is used at the end of the visual tokens to separate them from the linguistic tokens, [BOS] denotes the beginning of the linguistic tokens, and [EOS] denotes the end of the linguistic tokens. In addition, we use a fully-connected layer to project the aforementioned tokens to the same dimension. Thus, the inputs for the three-stream transformer are computed as
$$\left\{ [\text{CLS}^{(\cdot)}],\, f^{(\cdot)}_1, f^{(\cdot)}_2, \cdots, f^{(\cdot)}_{N_{(\cdot)}},\, [\text{SEP}],\, [\text{BOS}],\, f^t_1, \cdots, f^t_{N_t},\, [\text{EOS}] \right\}, \quad (1)$$
where $(\cdot) \in \{v, d, a\}$ indicates the video, detection, and action tokens, respectively. We also use the positional encoding strategy (Vaswani et al., 2017) in the Video-Text, Detection-Text, and Action-Text transformers to describe the order information of the caption sentences.
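To make the layout of equation 1 concrete, a minimal sketch of the sequence assembly for one branch is given below; the learnable special-token embeddings and the projection dimensions are assumptions for illustration, since only their roles are specified above.

```python
import torch
import torch.nn as nn

class TokenAssembler(nn.Module):
    """Project one modality's features and the GloVe text features to a shared
    dimension and assemble [CLS] f_1..f_N [SEP] [BOS] t_1..t_Nt [EOS] (Eq. 1)."""
    def __init__(self, feat_dim, text_dim=300, d_model=768):
        super().__init__()
        self.vis_proj = nn.Linear(feat_dim, d_model)  # FC projection for visual tokens
        self.txt_proj = nn.Linear(text_dim, d_model)  # FC projection for GloVe tokens
        # learnable embeddings for the special tokens (assumed implementation choice)
        self.special = nn.ParameterDict({
            name: nn.Parameter(0.02 * torch.randn(1, 1, d_model))
            for name in ("CLS", "SEP", "BOS", "EOS")
        })

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (B, N, feat_dim); txt_feats: (B, Nt, text_dim)
        B = vis_feats.size(0)
        sp = lambda name: self.special[name].expand(B, -1, -1)
        v, t = self.vis_proj(vis_feats), self.txt_proj(txt_feats)
        return torch.cat([sp("CLS"), v, sp("SEP"), sp("BOS"), t, sp("EOS")], dim=1)
```

Positional encodings (omitted here) are then added to this sequence before it enters the transformer blocks.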
3.2 THREE-STREAM TRANSFORMERS
As shown in Figure 1, we feed the aforementioned tokens into the three-stream transformers. The Video-Text, Detection-Text, and Action-Text branches are formed by S basic blocks, and each block consists of a self-attention module and a cross-modality attention module. Both the self-attention and cross-modality modules are followed by a feed forward layer.
Self-attention module. The self-attention module is designed to model the visual-linguistic alignments in different branches of transformers, i.e., Video-Text, Detection-Text, and Action-Text. Following (Vaswani et al., 2017), we compute the attention function between different tokens as follows.
$$\mathcal{A}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V, \quad (2)$$
where $Q \in \mathbb{R}^{N \times d}$ is the query matrix, $K \in \mathbb{R}^{N \times d}$ is the key matrix, $V \in \mathbb{R}^{N \times d}$ is the value matrix, and $N$ and $d$ are the number of tokens and the dimension of the embeddings, respectively. We use $h$ parallel heads of scaled dot-product attention to increase the diversity.
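For reference, equation 2 is the standard scaled dot-product attention and can be sketched in PyTorch as follows (a single head is shown; the multi-head variant simply splits $d$ into $h$ parallel subspaces):

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    # Q, K, V: (B, N, d) query, key, and value matrices (Eq. 2)
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)  # (B, N, N) pairwise scores
    return torch.softmax(scores, dim=-1) @ V         # attention-weighted values
```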
Cross-modality attention module. Besides the self-attention module, we use the cross-modality attention module to align the interactions modeled by the three branches of transformers. Specifically, we compute an affinity matrix to guide the alignments between different interactions by injecting the information from the other branches of transformers. Given the feature embeddings $H$, $X$, and $Y$ from the Video-Text, Detection-Text, and Action-Text transformers, we first calculate the corresponding affinity matrices $\mathcal{M}_H$, $\mathcal{M}_X$, and $\mathcal{M}_Y$ using the dot-product operation followed by one FC layer and a ReLU activation. Here, as shown in Figure 1, we take the affinity matrix $\mathcal{M}_Y \in \mathbb{R}^{N_a \times (N_v + N_d)}$ as an example, which is computed as
$$\mathcal{M}_Y = \text{softmax}\left(\text{ReLU}\left(\text{FC}_Y\left(Y \otimes \left(\oplus(H, X)\right)^T\right)\right)\right), \quad (3)$$
where $\otimes$ indicates the dot product, and $\oplus(\cdot, \cdot)$ denotes the concatenation operation. We estimate the modality-wise normalization scores of the interactions using a softmax layer. $\mathcal{M}_Y(i, j)$ denotes the normalized interaction score between the $i$-th entity in the Action-Text embeddings and the $j$-th entity in the Video-Text and Detection-Text embeddings. Based on this matrix, the feature embeddings of the Action-Text transformer can inject information from the other branches of transformers, i.e.,
$$Y' = \text{FFN}\left(\mathcal{M}_Y \otimes \oplus(H, X)\right), \quad (4)$$
where $\otimes$ indicates the dot product, and FFN denotes the feed-forward layer. Notably, we apply the cross-modality attention in all blocks of each branch of transformers. In this way, we can align the captions with the video, detection, and action entities to enhance the discriminative representation for video captioning. It is noteworthy that we leverage the video-text features in history to obtain the long-term sentence-level recurrence to generate the next sentences (Lei et al., 2020).
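A minimal sketch of the cross-modality attention of equations 3 and 4, shown for the Action-Text branch, could look as follows; the FC layer acting along the entity dimension (possible because the token counts are fixed, cf. Section 4.2) and the two-layer FFN width are assumptions, as only the roles of these components are specified above.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Inject Video-Text (H) and Detection-Text (X) information into the
    Action-Text embeddings Y through an affinity matrix (Eqs. 3-4)."""
    def __init__(self, n_other, d_model=768, d_ff=3072):
        super().__init__()
        # FC_Y maps the raw affinity scores over the Nv + Nd entities (assumed shape)
        self.fc_y = nn.Linear(n_other, n_other)
        self.ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                                 nn.Linear(d_ff, d_model))

    def forward(self, Y, H, X):
        # Y: (B, Na, d); H: (B, Nv, d); X: (B, Nd, d)
        other = torch.cat([H, X], dim=1)                         # concatenation of (H, X)
        raw = Y @ other.transpose(-2, -1)                        # dot product: (B, Na, Nv+Nd)
        M_y = torch.softmax(torch.relu(self.fc_y(raw)), dim=-1)  # affinity matrix, Eq. 3
        return self.ffn(M_y @ other)                             # injected embeddings, Eq. 4
```

Stacking $S$ such blocks, each preceded by the self-attention module above, yields one branch of the three-stream transformer.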
3.3 OPTIMIZATION
We use the multi-task loss to guide the training of our COST method, which is formed by three terms, i.e., $\mathcal{L}_v(\cdot, \cdot)$ for the Video-Text transformer, $\mathcal{L}_d(\cdot, \cdot)$ for the Detection-Text transformer, and $\mathcal{L}_a(\cdot, \cdot)$ for the Action-Text transformer, i.e.,
$$\mathcal{L} = \mathcal{L}_v(\ell_v, [\text{CLS}_v]) + \lambda_d \cdot \mathcal{L}_d(\ell_d, [\text{CLS}_d]) + \lambda_a \cdot \mathcal{L}_a(\ell_a, [\text{CLS}_a]), \quad (5)$$
where $\lambda_d$ and $\lambda_a$ are preset parameters used to balance the three terms. $\mathcal{L}_v(\cdot, \cdot)$ is the cross-entropy loss that penalizes the errors of the predicted captions with respect to the ground-truth descriptions; the ground truth $\ell_v$ of the video tokens consists of the indexes of the words in the caption sentences. $\mathcal{L}_d(\cdot, \cdot)$ is also a cross-entropy loss, which penalizes the errors of the predicted object categories with respect to the pseudo category labels generated by the Faster R-CNN detector³. $\mathcal{L}_a(\cdot, \cdot)$ is a multi-label classification loss used to handle multiple actions appearing in one video sequence. Specifically, we first aggregate all action tokens into the modality type token $[\text{CLS}_a]$ and then compute the confidence scores using a fully-connected layer, i.e., $\text{FC}([\text{CLS}_a])$. Following (Zhang et al., 2021; Sun et al., 2020), the loss function is computed as
$$\mathcal{L}_a(\ell_a, [\text{CLS}_a]) = \log\Big(1 + \sum_{i \in \Omega_{\text{pos}}(\ell_a)} e^{-s_i}\Big) + \log\Big(1 + \sum_{j \in \Omega_{\text{neg}}(\ell_a)} e^{s_j}\Big), \quad (6)$$
which penalizes cases where the confidence score $s_i$ of a detected action in $\Omega_{\text{pos}}(\ell_a)$ falls below the threshold 0, or where the confidence score $s_j$ of an undetected action in $\Omega_{\text{neg}}(\ell_a)$ exceeds 0. The ground-truth action labels $\ell_a$ are the most common verbs in the caption sentences, retrieved with an off-the-shelf part-of-speech tagger (Sun et al., 2019).

³ Notably, we do not use the annotated objects for model training, but use the pseudo category labels $\ell_d$ generated by the Faster R-CNN detector as the ground-truth labels, to enforce the network to maintain the original semantic information encoded by the detector.
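A possible PyTorch implementation of equation 6 is sketched below, using logsumexp for numerical stability (note that $\log(1 + \sum_i e^{-s_i}) = \text{softplus}(\text{logsumexp}_i(-s_i))$); the construction of the positive mask from the tagged verbs is assumed:

```python
import torch
import torch.nn.functional as F

def action_multilabel_loss(scores, pos_mask):
    """Eq. 6. scores: (B, C) confidences from FC([CLS_a]);
    pos_mask: (B, C) binary mask of verbs present in the captions (assumed input)."""
    neg_inf = torch.full_like(scores, float("-inf"))
    # log(1 + sum_{i in pos} exp(-s_i)) via masked logsumexp + softplus
    pos = torch.logsumexp(torch.where(pos_mask.bool(), -scores, neg_inf), dim=-1)
    # log(1 + sum_{j in neg} exp(s_j))
    neg = torch.logsumexp(torch.where(pos_mask.bool(), neg_inf, scores), dim=-1)
    return (F.softplus(pos) + F.softplus(neg)).mean()
```

The total objective of equation 5 is then a weighted sum, e.g. `loss = loss_v + 0.02 * loss_d + 2.0 * loss_a` with the weights reported in Section 4.2.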
4 EXPERIMENTS
4.1 DATASETS AND EVALUATION METRICS
Datasets. We conducted several experiments on two challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a). YouCookII includes 2,000 long untrimmed videos describing 89 cooking recipes, where each video contains one reference paragraph and is further split into several event segments with annotated sentences. 1,333 and 457 video sequences are used for training and validation, respectively. Meanwhile, ActivityNet Captions is a large-scale dataset formed by 10,009 videos for training and 4,917 videos for validation and testing. Notably, following (Zhou et al., 2019), the original validation set is split into the ae-val subset with 2,460 videos for validation and the ae-test subset with 2,457 videos for testing.
Evaluation metrics. Similar to (Park et al., 2019; Lei et al., 2020; Zhu & Yang, 2020), we use several standard metrics to evaluate our method, including BLEU@n (B@n) (Papineni et al., 2002) for n-gram precision, METEOR (M) (Denkowski & Lavie, 2014) for n-gram matching with synonyms, CIDEr (C) (Vedantam et al., 2015) for tf-idf weighted n-gram similarity, and ROUGE (R@4) (Xiong et al., 2018; Park et al., 2019) for n-gram recall. Notably, two evaluation modes are considered, i.e., micro-level and paragraph-level. The micro-level evaluation reports the average score over all video sequences, while the paragraph-level evaluation first concatenates the caption sentences of each video sequence and then computes the scores averaged across all videos against the ground-truth paragraph captions. In both modes, CIDEr is used as the primary metric for ranking.
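To make the paragraph-level mode explicit, it can be summarized by the following sketch (the prediction format is an assumption for illustration):

```python
def to_paragraph_level(predictions):
    """predictions: dict mapping video_id -> list of event-level caption sentences.
    Returns one concatenated paragraph per video, to be scored against the
    ground-truth paragraph captions."""
    return {vid: " ".join(sents) for vid, sents in predictions.items()}
```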
4.2 IMPLEMENTATION DETAILS
Our COST algorithm is implemented using PyTorch. The source code will be released after acceptance. All the experiments are conducted on a machine with 2 NVIDIA RTX-3090 GPUs. We train the model using strategies similar to BERT (Devlin et al., 2019). Specifically, we use Adam (Kingma & Ba, 2015) with an initial learning rate of 1e-4, β1=0.9, β2=0.999, an L2 weight decay of 0.01, and learning rate warmup over the first 2 epochs. We use an early-stop strategy in the model training phase; that is, we train the model for at most 20 epochs with batch size 64. For each branch of transformers, we set the dimension of the feature embeddings d = 768, the number of transformer blocks S = 2, and the number of attention heads h = 12. The loss weights λa and λd in equation 5 are set to 2.0 and 0.02 empirically.
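For reference, this training recipe corresponds roughly to the following setup; the linear warmup schedule shown is one common choice and an assumption, since the schedule after the 2-epoch warmup is not specified above:

```python
import torch

def build_optimizer(model, steps_per_epoch):
    # Adam, lr 1e-4, betas (0.9, 0.999), L2 weight decay 0.01 (Section 4.2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4,
                                 betas=(0.9, 0.999), weight_decay=0.01)
    warmup_steps = 2 * steps_per_epoch  # warmup over the first 2 epochs
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))
    return optimizer, scheduler
```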
Considering the trade-off between accuracy and complexity, we extract 2048-dim video features, the top K = 5 2048-dim detection features per frame, and 1024-dim action features from at most 100 frames uniformly sampled from each video sequence, i.e., Nv = Na = 102 and Nd = 502 with the two special tokens [CLS] and [SEP]. Meanwhile, we use the first 20 words of the caption sentences and compute the 300-dim GloVe features, i.e., Nt = 22 with the two special tokens [BOS] and [EOS]. For the COOT features, we concatenate the local clip-level (384-dim) and the global video-level (768-dim) features to describe the videos. After the fully-connected layer, all the tokens are converted into 768-dim features.
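As an explicit check of these token counts:
$$N_v = N_a = 100 + 2 = 102, \qquad N_d = 100 \times K + 2 = 100 \times 5 + 2 = 502, \qquad N_t = 20 + 2 = 22.$$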
4.3 EVALUATION RESULTS
We compare the proposed COST method with the state-of-the-art methods on the two challenging datasets, i.e., YouCookII and ActivityNet Captions, in Table 1. As shown in Table 1, our method achieves the best results on the YouCookII val subset and the ActivityNet Captions ae-test subset. Notably, without using the COOT features, our method improves the CIDEr score by nearly 10% over the second-best method, i.e., MART (Lei et al., 2020), on the YouCookII val subset. This is attributed to the proposed cross-modality attention module across the collaborative three-stream transformers. Besides, our method exploits the local appearance information from the Detection-Text transformer for better accuracy. Using the COOT features, the overall video captioning results are significantly improved, and our COST method again performs favorably against the other algorithms, improving the CIDEr score by over 3%. A similar trend appears on the ActivityNet Captions ae-test subset. Although the ActivityNet Captions dataset is more challenging than the YouCookII dataset, our method improves the performance considerably compared to the state-of-the-art methods.
As presented in Table 2, we also compare the proposed method to the LSTM based methods with input detections on the ActivityNet Captions ae-val subset. Compared to AdvInf (Park et al., 2019), our method produces higher scores for both B@4 and CIDEr, demonstrating the superiority of the collaborative transformers over LSTM in learning multi-modal representations. Meanwhile, MART (Lei et al., 2020), which does not use input detections, performs worse than our method in terms of the B@4, M, and C metrics. This indicates that the Detection-Text transformer in our network can describe the Subjects and Objects in caption sentences more accurately.
According to Table 3, our method outperforms several BERT based methods in terms of the micro-level evaluation. In particular, the second-best ActBERT (Zhu & Yang, 2020) relies on the same visual modalities as our method. However, it focuses directly on learning the alignment between text and the other visual modalities, which makes it difficult to exploit discriminative semantic information. In contrast, our cross-modality attention module learns from three visual-linguistic interactions using the three-stream transformers, producing a better CIDEr score. It is worth mentioning that ActBERT uses BERT (Devlin et al., 2019) and S3D (Xie et al., 2017) to generate more powerful text and video features than those generated by GloVe (Pennington et al., 2014) and TSN (Wang et al., 2016) in our method.
Furthermore, the qualitative results of our COST method and MART (Lei et al., 2020) are shown in Figure 2. It can be seen that our COST generates more accurate captions than MART (Lei et al., 2020). This is attributed to two reasons. First, using the Detection-Text transformer, the Objects in caption sentences can be learned explicitly (e.g., shrimp, hand, and windows) or implicitly (e.g., from bowl to food processor, and from pan to wok). Second, the Action-Text transformer in our method can extract the key verbs in caption sentences, such as add, blend, put and hold. MART (Lei et al., 2020) fails to recognize these key Subjects, Objects and Verbs. The results indicate that these two branches of transformers can enforce the network to learn the key elements in the caption sentences.
4.4 ABLATION STUDY
To study the influence of different components in the proposed method, we conduct the ablation study on the YouCookII val subset, shown in Table 4. Notably, we use the appearance features extracted by TSN (Wang et al., 2016) in all COST-k variants, where k denotes the number of transformer branches retained in our COST method.
Effectiveness of multi-modal features. To verify the effectiveness of the multi-modal features, we construct three COST-1 variants, i.e., COST-1 (v), COST-1 (v+a) and COST-1 (v+a+d). In particular,
we only use the Video-Text transformer, but change the input features to combinations of the GloVe text features and the concatenated features from video (v), action (a) and detection (d). As shown in Table 4, the scores under all metrics are improved considerably by integrating the action or detection features. Moreover, the CIDEr score is boosted from 29.20% to 38.63% if we include all three modalities of features. This indicates that multi-modal features clearly facilitate generating more accurate video captions.
Effectiveness of three-stream transformers. To demonstrate the effectiveness of the three-stream transformers compared to the simple feature concatenation used in COST-1, we construct three COST-k (k = 2, 3) variants, shown in Table 4. It can be seen that the accuracy is improved by using the three-stream transformers, i.e., the CIDEr score improves from 38.63% to 45.54%. Meanwhile, the accuracy is considerably improved by using more branches of transformers. This is because our cross-modality attention module is able to capture the most relevant semantic information from the different visual-linguistic interactions. Compared to the COST-1 variants, our full model can obtain more discriminative feature representations for video captioning.
Visualization of the affinity matrix. To better understand the cross-modality attention module, we use a heatmap to visualize the affinity matrix $\mathcal{M}_H \in \mathbb{R}^{N_v \times (N_d + N_a)}$ of the Video-Text transformer in Figure 3. At epoch 1, all values in the affinity matrix $\mathcal{M}_H$ are similar and the maximal value in $\mathcal{M}_H$ is only 0.001. The corresponding caption results are noisy, with false predictions of nouns and verbs, e.g., pan instead of butter. This indicates that the affinity matrix is randomly initialized and the interactions between different modalities are not yet aligned. After training for several epochs, a few entities dominate the affinity matrix with a maximal value of 0.3 (see the bright vertical lines in the heatmap). This indicates that our cross-modality attention module can successfully exploit the most relevant entities from the other modalities to inject information from the other branches of transformers. In this way, the verb stir and the noun pan are corrected to mix and butter for more accurate caption results.
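For reference, such a heatmap can be produced from the learned affinity matrix with a few lines of code (the plotting choices below are illustrative, not those used for Figure 3):

```python
import matplotlib.pyplot as plt

def plot_affinity(M_H):
    """M_H: (Nv, Nd + Na) affinity matrix of the Video-Text branch."""
    plt.imshow(M_H.detach().cpu().numpy(), aspect="auto", cmap="viridis")
    plt.xlabel("detection + action entities")
    plt.ylabel("video clip tokens")
    plt.colorbar()
    plt.show()
```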
5 CONCLUSION
In this paper, we propose the collaborative three-stream transformers to exploit the interactions of objects, and the actions/relations of objects, between different modalities at different granularities in the spatio-temporal domain. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers, which focuses on improving the prediction accuracy of the Subject, Object, and Predicate in the caption sentences. Several experiments conducted on the YouCookII and ActivityNet Captions datasets demonstrate the effectiveness of the proposed method.

1. What is the main contribution of the paper on video captioning?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its performance compared to SOTA methods and the novelty of the approach?
3. How does the reviewer assess the relevance and sufficiency of the ablation experiments conducted by the authors?
4. What are some existing works and learning paradigms related to multimodal transformer-based models that the author should discuss?
5. Are there any concerns regarding the problem statement, method descriptions, training, inference, and implementation in the paper?
Summary Of The Paper
Based on the intuition that learning <Subject, Predicate, Object> is important for video captioning, the authors design the COllaborative three-Stream Transformers (COST) with 3 Transformer-based branches/streams: (1) the Video-Text branch that models the interactions among the global clip appearance tokens and the word-level text tokens, in order to predict video captions, (2) the Detection-Text branch that models the interactions among the object tokens and the text tokens, in order to learn Subjects and Objects, and (3) the Action-Text branch that models the interactions among the action tokens (computed from optical flow-based features) and the text tokens, in order to learn Predicate.
Apart from the contribution of designing COST, the key novelty is to enforce the model to generate the accurate Subject, Object, and Predicate predictions while predicting the video captions. A cross-modality attention module is proposed and placed in each Transformer branch to allow for the information flow across the Transformers in the 3 branches and to allow for alignment. The 3 Transformer branches thus collaborate and support each other. Experiments conducted on the YouCook2 dataset and the ActivityNet Captions dataset demonstrate the proposed method performs favorably against SOTA methods.
Review
Strengths:
[+] The motivation to enhance the learning of Subject, Predicate and Object for video captioning while capturing the interactions among tokens in different granularities and modalities (visual and text) is valid.
[+] The ideas are simple and the proposed model is effective to some extent. The qualitative results are interesting.
[+] The authors have promised to release the source code.
Weaknesses:
[-] The proposed method seems to be not better than SOTA methods - the conclusions mentioned in Section 4.3 are not so convincing. Since the proposed method uses additional object features which some of the baselines (e.g. MART) do not use, comparing MART’s result on the YouCook2 val subset in Table 1 and the COST-2(v+a) result in Table 4, the proposed method seems to perform worse than MART when removing the detected object features. This makes me question whether the improvement mostly comes from the extra object features.
[-] The ablation/comparative experiments are lacking to demonstrate the benefits of proposed method, or to distinguish it from existing works. From the current ablation studies, there is no way to tell the benefits of the proposed Cross-Modality Attention module. What would the results be if we simply replace the Cross-Modality Attention in Figure 1 with Self-Attention over M (M can be H, X or Y) without any cross stream interactions (but still use the exactly same three-stream architecture)? What if instead of using the proposed Cross-Modality Attention, we use the vanilla Cross-Attention (i.e., if we take the Action-Text Transformer as an example, we map Y to the query, and map H and X to keys and values to do the vanilla Cross-Attention)?
[-] Overlooking existing works; the novelty seems limited. There are many existing multimodal Transformer based models and learning paradigms; please see Figure 2 in [1] where the “Share Type”, “Cross Type” and “Joint Type” learning paradigms are mentioned. To my understanding, the proposed method in the paper basically belongs to the "Share Type" paradigm in terms of how multi modalities (visual+text) are used, and also belongs to the “Cross Type” paradigm in terms of multi-stream learning. Since the three-stream COST architecture in Figure 1 is another key novelty of the paper, could you provide evidence to fairly show that the proposed architecture is indeed better than the existing popular paradigms and it is indeed novel? It would also be great to briefly discuss these related works and paradigms.
[1] Luo, Huaishao, Lei Ji, Botian Shi, Haoyang Huang, Nan Duan, Tianrui Li, Jason Li, Taroon Bharti, and Ming Zhou. "Univl: A unified video and language pre-training model for multimodal understanding and generation."
[-] The descriptions of the problem statement, method, training, inference and implementation are not clear.
I understand that each video sequence is sampled at 100 frames, but why is $N_v$ 100? In Section 3.1, it is mentioned that $N_v$ is the number of clips, not the number of frames in the video sequence.
Is it correct to say that $N_a$ is the total number of action tokens? It causes confusion because I assumed it is the number of unique predicates (verbs) in the dataset. From the descriptions in Section 3.1, I have no idea what “total number of action tokens” means, and it seems that $N_a$ is actually the number of frames sampled from the video sequence according to Section 4.2. Why should the number of action tokens be equal to the number of frames?
What is $\Omega$ in Eq. (6)? Just to make sure of reproducibility: I assume “the most common verbs” means the top-$k$ frequent verbs in the clip captions of the video sequence; then what is the $k$ used for the experiments?
Could you elaborate on “we leverage the video-text features in history to obtain the long-term sentence-level recurrence to generate the next sentences”?
To compute the cross-entropy loss that penalizes the errors of predicted captions compared to the ground truth, is the $[\text{CLS}_v]$ token used at all? How exactly are captions predicted given the learned representations of tokens in the sequence of the Video-Text branch? The same questions apply to the object category prediction in the Detection-Text branch.
For the multi-label classification for predicting the multiple actions present in the video sequence, learned representations of all action tokens are aggregated as the $[\text{CLS}_a]$ token and then sent to a fully-connected (FC) layer to obtain the confidence score. This aggregation+FC based processing is only true for the action/predicate prediction, is that right? What happens to the learned representation of the input $[\text{CLS}_a]$ then? Why is using aggregation to obtain a new $[\text{CLS}_a]$ token necessary, and why not directly use the learned representation of the original $[\text{CLS}_a]$ token?
Does every token in Equa. (1) have a unique position? The set of objects in the same frame would have different ordered positions, is that so? |
ICLR
Title
Collaborative Three-Stream Transformers for Video Captioning
Abstract
As the most critical components in a sentence, subject, predicate, and object require special attention in the video captioning task. In this paper, we design the collaborative three-stream transformers to model the interactions of objects, and the actions/relations of objects between different modalities. Specifically, it is formed by three branches of transformers used to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain between videos and text, detected objects and text, and actions and text. Meanwhile, we design a cross-modality attention module to align the interactions modeled by the three branches of transformers. That is, an affinity matrix is computed to help align visual modalities by injecting the information from other interactions. In this way, the three branches of transformers can support each other to exploit the most discriminative semantic information in different modalities for accurate predictions of captions, especially for the subject, predicate, and object parts in a sentence. The whole model is trained in an end-to-end fashion. Extensive experiments conducted on two large-scale challenging datasets, i.e., YouCookII and ActivityNet Captions, demonstrate that the proposed method performs favorably against the state-of-the-art methods.
1 INTRODUCTION
Video captioning aims to generate natural language descriptions of video content, and has attracted much attention in recent years along with the rapidly increasing amount of video recorded in daily life. It also helps blind or D/deaf people enjoy videos. However, as noted in (Xiong et al., 2018; Park et al., 2019; Lei et al., 2020), it is very challenging to generate natural paragraph descriptions due to the difficulty of producing relevant, non-redundant, and semantically coherent sentences.
Recently, researchers attempt to use the transformer model to solve the video captioning task (Vaswani et al., 2017; Dai et al., 2019; Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021), which relies on the self-attention mechanism to describe the interactions between different modalities of the input data, such as video, audio, and text. In practice, the aforementioned methods generally concatenate the features extracted from individual modalities, or use self-attention to model the interactions between extracted features. Although they advance the state-of-the-art of video captioning, it is still far from satisfactory in real applications due to the domain gap between different modalities. Then, a question arises, “How do we fill in the domain gap and capture the interactions among visual and linguistic modalities for video captioning?”
Before answering this question, let us first look at basic grammar rules. Generally, a sentence (Krishna et al., 2017a; Zhou et al., 2018a) is presented in the following form, i.e.,
Women wear Arabian skirts on a stage.
Notably, Subject, Object, and Predicate are the three most critical elements in a sentence. They indicate the objects, the actions of objects, and the interactions among different objects. To improve the accuracy of the Subject, Object, and Predicate predictions in a sentence, we propose the new COllaborative three-Stream Transformers (COST) to model the visual-linguistic interactions of different granularities in the spatio-temporal domain between different modalities. Specifically, the COST model is formed by three branches of transformers, including the Video-Text, Detection-Text, and
Action-Text transformers. The Video-Text transformer is used to model the interactions between the global video appearances and linguistic texts. The Detection-Text transformer is used to model the interactions between the objects in individual video frames and the linguistic texts, which enforces the model to focus on the objects being aligned in the visual and linguistic modalities, i.e., indicating the Subjects and Objects in caption sentences. The Action-Text transformer is designed to model the actions/relations of objects between the visual and linguistic modalities, i.e., indicating the Predicate in caption sentences. Meanwhile, to align the interactions modeled by the three branches of transformers, we introduce a cross-modality attention module. In particular, an affinity matrix is computed to represent the relevance among visual modalities and to help inject the information from other interactions. In this way, different branches of transformers support each other to exploit the most discriminative semantic information in different modalities, and enforce the model to pay more attention to generating accurate Subject, Object and Predicate predictions. The whole model is trained in an end-to-end fashion using the Adam algorithm (Kingma & Ba, 2015).
Several experiments are conducted on two public challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a), to demonstrate the superior performance of the proposed method compared to the state-of-the-art methods (Zhou et al., 2018b; Dai et al., 2019; Park et al., 2019; Lei et al., 2020). Specifically, our COST method achieves the top CIDEr scores, i.e., 60.78% and 29.64%, on the YouCookII val set and the ActivityNet ae-test set, improving by 3.54% and 1.45% over the state of the art.
The main contributions of this paper are summarized as follows. (1) We develop the new collaborative three-stream transformers to learn the interactions between the visual-linguistic modalities of different granularities in the spatio-temporal domain, which enforces the model to generate accurate Subject, Object, and Predicate predictions. (2) To align the interactions described in the three branches of transformers, we design a cross-modality attention module. (3) Extensive experiments conducted on two challenging datasets show that our method performs favorably against the state-of-the-art methods.
1. What is the main contribution of the paper in enhanced visual transformer for video captioning?
2. What are the strengths of the proposed approach, particularly in integrating multiple streams and cross-attention modules?
3. What are the weaknesses of the paper regarding its novelty and distinction from previous works?
4. What are some missing implementation details that need to be clarified for better understanding of the method's effectiveness?
5. How does the reviewer assess the visualization part of the paper, specifically regarding the balance between action and detection tokens?
Summary Of The Paper
The paper proposes an enhanced visual transformer for video captioning. To be specific, the video part integrates three streams: an appearance stream using frame-sequence features, an action stream using optical-flow features, and an object stream using object features. Meanwhile, the three streams are further fused by cross-attention modules in order to align local-level features. The experimental results on two datasets are good compared to those of previous SOTA methods.
Review
Strengths:
The paper is easy to follow.
The results are good. The ablation study is solid.
Weaknesses:
The novelty is limited. It’s hard to distinguish the idea of this paper from previous ones. ActBert also leverages multiple visual feature streams, including region objects and actions. Global coherence and local alignments are explored by the Transformer-based ActBert and COOT. The most distinguishing part would be the cross-attention among different visual streams. The previous methods may focus more on the cross-attention between the visual and linguistic parts, while this paper explores the cross-attention among different visual streams. However, it’s minor.
Missing implementation details, so it’s not very clear how much each module contributes to the performance gains. To be specific: (i) The detection model used in this paper. What is the backbone of the Faster R-CNN model? Is the model pre-trained on COCO before training on Visual Genome? ActBert only uses models trained on the COCO dataset (limited categories compared with Visual Genome), so what is the impact of the strength of the detection model on video captioning performance? (ii) How do you collect the supervision for the action stream? One guess is that the supervision comes from verbs detected in the texts. But we know that actions are not equal to verbs, so do you apply any further processing to the action tokens?
Some other comments
The visualization part is not very convincing. In Figure 3, the columns stand for the action and detection tokens. But where is the boundary between action and detection? From the illustration, we can observe a strong imbalance: the left part contains most of the highlighted entries while the right part is barely attended to.
ICLR | Title
Collaborative Three-Stream Transformers for Video Captioning
Abstract
As the most critical components in a sentence, subject, predicate, and object require special attention in the video captioning task. In this paper, we design the collaborative three-stream transformers to model the interactions of objects, and the actions/relations of objects between different modalities. Specifically, it is formed by three branches of transformers used to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain between videos and text, detected objects and text, and actions and text. Meanwhile, we design a cross-modality attention module to align the interactions modeled by the three branches of transformers. That is, an affinity matrix is computed to help align visual modalities by injecting the information from other interactions. In this way, the three branches of transformers can support each other to exploit the most discriminative semantic information in different modalities for accurate predictions of captions, especially for the subject, predicate, and object parts in a sentence. The whole model is trained in an end-to-end fashion. Extensive experiments conducted on two large-scale challenging datasets, i.e., YouCookII and ActivityNet Captions, demonstrate that the proposed method performs favorably against the state-of-the-art methods.
1 INTRODUCTION
Video captioning aims to generate natural language descriptions of video content, which has attracted much attention in recent years along with the rapidly increasing amount of video recorded in daily life. It helps blind or D/deaf people enjoy videos. However, as noted in (Xiong et al., 2018; Park et al., 2019; Lei et al., 2020), it is very challenging to generate natural paragraph descriptions due to the difficulty of producing relevant, less redundant, and semantically coherent sentences.
Recently, researchers attempt to use the transformer model to solve the video captioning task (Vaswani et al., 2017; Dai et al., 2019; Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021), which relies on the self-attention mechanism to describe the interactions between different modalities of the input data, such as video, audio, and text. In practice, the aforementioned methods generally concatenate the features extracted from individual modalities, or use self-attention to model the interactions between extracted features. Although they advance the state-of-the-art of video captioning, it is still far from satisfactory in real applications due to the domain gap between different modalities. Then, a question arises, “How do we fill in the domain gap and capture the interactions among visual and linguistic modalities for video captioning?”
Before answering this question, let us first look at the basic grammar rules. Generally, a sentence (Krishna et al., 2017a; Zhou et al., 2018a) takes the following form, i.e.,
Women wear Arabian skirts on a stage.
Notably, Subject, Object, and Predicate are the three most critical elements in a sentence. They indicate the objects, the actions of objects, and the interactions among different objects. To improve the accuracy of the Subject, Object, and Predicate predictions in a sentence, we propose a new COllaborative three-Stream Transformers (COST) model to exploit the visual-linguistic interactions of different granularities in the spatio-temporal domain between different modalities. Specifically, the COST model is formed by three branches of transformers, including the Video-Text, Detection-Text, and
Action-Text transformers. The Video-Text transformer is used to model the interactions between the global video appearances and the linguistic texts. The Detection-Text transformer is used to model the interactions between the detected objects in individual video frames and the text, which enforces the model to focus on the objects being aligned in the visual and linguistic modalities, i.e., indicating the Subjects and Objects in caption sentences. The Action-Text transformer is designed to model the actions/relations of objects between the visual and linguistic modalities, i.e., indicating the Predicate in caption sentences. Meanwhile, to align the interactions modeled by the three branches of transformers, we introduce a cross-modality attention module. In particular, an affinity matrix is computed to represent the relevance among visual modalities and help inject the information from other interactions. In this way, different branches of transformers support each other to exploit the most discriminative semantic information in different modalities, and enforce the model to pay more attention to generating accurate Subject, Object and Predicate predictions. The whole model is trained in an end-to-end fashion using the Adam algorithm (Kingma & Ba, 2015).
Several experiments are conducted on two publicly available challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a), to demonstrate the superior performance of the proposed method compared to the state-of-the-art methods (Zhou et al., 2018b; Dai et al., 2019; Park et al., 2019; Lei et al., 2020). Specifically, our COST method achieves the top CIDEr scores, i.e., 60.78% and 29.64%, on the YouCookII val set and the ActivityNet ae-test set, improving by 3.54% and 1.45% over the state-of-the-art.
The main contributions of this paper are summarized as follows. (1) We develop the new collaborative three-stream transformers to learn the interactions between the visual-linguistic modalities of different granularities in the spatio-temporal domain, which enforces the model to generate accurate Subject, Object, and Predicate predictions. (2) To align the interactions described in the three branches of transformers, we design a cross-modality attention module. (3) Extensive experiments conducted on two challenging datasets show that our method performs favorably against the state-of-the-art methods.
2 RELATED WORKS
Network architecture. Video captioning has received much attention in recent years. Most of previous video captioning methods (Yu et al., 2016; Pan et al., 2017; Zhang et al., 2018) attempt to perform sequence-to-sequence learning using the encoder-decoder paradigm (Chen et al., 2019). The convolutional neural networks (CNNs) (Zheng et al., 2020) and long short term memory networks (LSTM) (Pei et al., 2019; Park et al., 2019) are adopted to learn discriminative feature embeddings for accurate predictions. Recently, the transformer model dominates this field due to its superior performance compared with CNNs and LSTM. Zhou et al. (2018b) propose an end-to-end trained transformer model, where the encoder is designed to encode the video into semantic representations, and the proposal decoder is used to decode from the encoding with different anchors to form video event proposals. Sun et al. (2019) design the VideoBERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens. Similarly, Lu et al. (2019) extend the BERT architecture (Devlin et al., 2019) to a multi-modal two-stream model, which processes both visual and textual inputs separately but interactions are conducted between streams. Lei et al. (2020) develop the memory-augmented recurrent transformer, which uses a highly summarized memory state from the video clips and the sentence history to facilitate better prediction of the next sentence. Different from the aforementioned methods, our method attempts to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain across different modalities, with a focus on the most critical components in the sentence, i.e., subject, predicate and object.
Multi-modal cross-attention mechanism. The interactions between different modalities are critical for the video captioning task. Recent transformer-based methods (Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021) use the cross-attention module to learn correlations across different modalities. For example, Iashin & Rahtu (2020) concatenate the learned embeddings from multiple modalities, e.g., video, audio and speech, for event description. Zhu & Yang (2020) propose the tangled transformer block to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered by exploiting the contextual information. Tang et al. (2021) use frame-level dense captions as an auxiliary text input for better video and language associations, where the constrained attention loss is used to force the model to automatically focus on the best-matched caption from a pool of misaligned caption candidates. In contrast, we design the cross-modality attention module integrated in the collaborative three-stream transformers to align three types of visual-linguistic interactions, leading to more discriminative semantic cues for better caption generation.
Multi-modal pre-training models. Large-scale pre-training is another effective way to improve the accuracy of captioning models. Specifically, the jointly trained video and language models (Huang et al., 2020; Luo et al., 2020; Ging et al., 2020) on the large-scale datasets, such as YouTube-8M (Abu-El-Haija et al., 2016) and HowTo100M (Miech et al., 2019) with automatic speech recognition1 transcripts, provide discriminative features for downstream tasks. Huang et al. (2020) construct a dense video captioning dataset, i.e., Video Timeline Tags (ViTT), and explore several multi-modal sequence-to-sequence pre-training strategies using transformers (Vaswani et al., 2017). Luo et al. (2020) also use transformers (Vaswani et al., 2017) with two single-modal encoders, a cross encoder, and a decoder. Recently, Ging et al. (2020) develop the Cooperative hierarchical Transformer (COOT) to model the interactions between different levels of granularity and modalities, which achieves superior results for video captioning.
3 OUR APPROACH
As discussed above, we design the collaborative three-stream transformers to model the interactions of objects, and actions/relations of objects between different modalities, which is formed by three branches of transformers, i.e., Video-Text, Detection-Text, and Action-Text transformers. Specifically, the video and text inputs are firstly encoded to extract the multi-modal feature embeddings. After that, the embeddings are fed into the three-stream transformers to exploit the visual-linguistic interactions between videos and text of different granularities in spatio-temporal domain, i.e., global videos, detections, and actions. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers. The overall architecture of the proposed method is shown in Figure 1.
3.1 MULTI-MODALITY TOKENS
Three kinds of tokens, i.e., visual tokens, linguistic tokens, and special tokens, are used to encode the video and text inputs, which are described as follows. Visual tokens. For the visual tokens, we use three kinds of tokens with different granularities in spatio-temporal domain, that is the video tokens, the detection tokens, and the action tokens.
• Video tokens provide the global semantic information in the video sequence. In contrast to (Lei et al., 2020), we only use the appearance features extracted by Temporal Segment Networks (TSN) (Wang et al., 2016) (denoted as TSN-APP in Figure 1) as the video tokens, i.e., $\{f^v_1, f^v_2, \cdots, f^v_{N_v}\}$, where $f^v_i$ is the extracted feature of the $i$-th video clip, and $N_v$ is the number of video clips. Notably, we can also leverage the more powerful multi-modal feature extraction method COOT (Ging et al., 2020), which is pre-trained on the large-scale HowTo100M dataset (Miech et al., 2019), to improve the performance.
• Detection tokens are used to enforce the model to focus on the Subjects or Objects in caption sentences. Similar to (Park et al., 2019; Lu et al., 2019; Zhu & Yang, 2020), we use the Faster R-CNN method to detect the objects in each frame, which is pre-trained on the Visual Genome dataset (Krishna et al., 2017b). After that, the detection features in Faster R-CNN corresponding to the objects with the highest confidence scores in $K$ categories2 are used to generate the detection tokens for each frame. We use $\{f^d_1, f^d_2, \cdots, f^d_{N_d}\}$ to denote the set of detection tokens, where $f^d_i$ is the $i$-th detection feature, and $N_d$ is the total number of detections in the video sequence.
• Action tokens are designed to enforce the model to concentrate on the Predicates in caption sentences. Following (Lei et al., 2020), the optical flow features of video sequences are extracted by TSN (Wang et al., 2016) (denoted as TSN-MOT in Figure 1) to generate the
1 https://developers.google.com/youtube/v3/docs/captions
2 If the category number of the detected objects is less than K in a frame, we select the K detected objects with the highest confidence scores regardless of the object categories to generate the detection tokens.
action tokens, which are used to describe the actions/relations of objects. The action tokens are denoted as $\{f^a_1, f^a_2, \cdots, f^a_{N_a}\}$, where $f^a_i$ is the motion feature of the $i$-th video clip, and $N_a$ is the total number of action tokens.
Linguistic tokens. We break down the captions of video sequences into individual words and compute the corresponding linguistic tokens using the GloVe model (Pennington et al., 2014). The linguistic tokens are denoted as $\{f^t_1, f^t_2, \cdots, f^t_{N_t}\}$, where $f^t_i$ is the extracted feature of the $i$-th word using the GloVe model, and $N_t$ is the total number of words.
Special tokens. Besides the aforementioned tokens, we also introduce two kinds of special tokens in the transformer, similar to BERT (Devlin et al., 2019). The first one is the modality type token [CLS], which is added at the beginning of the visual features to denote which modality the following tokens come from. The second one is the three kinds of separation tokens, i.e., [SEP], [BOS], and [EOS]. [SEP] is used at the end of the visual tokens to separate them from the linguistic tokens, [BOS] is used to denote the beginning of the linguistic tokens, and [EOS] is used to denote the ending of the linguistic tokens. In addition, we use a fully-connected layer to encode the aforementioned tokens in the same dimension. Thus, the inputs for the three-stream transformers are computed as
$$\left\{[\mathrm{CLS}^{(\cdot)}],\ f^{(\cdot)}_1, f^{(\cdot)}_2, \cdots, f^{(\cdot)}_{N_{(\cdot)}},\ [\mathrm{SEP}],\ [\mathrm{BOS}],\ f^t_1, \cdots, f^t_{N_t},\ [\mathrm{EOS}]\right\}, \quad (1)$$
where $(\cdot) \in \{v, d, a\}$ indicates the video, detection, and action tokens, respectively. We also use the positional encoding strategy (Vaswani et al., 2017) in the Video-Text, Detection-Text, and Action-Text transformers to describe the order information of caption sentences.
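As an illustration of how one branch input of Equation (1) can be assembled, here is a minimal PyTorch sketch; the learned special-token parameters and the class and function names are our own assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class BranchInput(nn.Module):
    """Assemble [CLS], visual tokens, [SEP], [BOS], text tokens, [EOS] as in Eq. (1)."""
    def __init__(self, vis_dim, txt_dim=300, d=768):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, d)   # FC layer mapping features to a common dim
        self.txt_proj = nn.Linear(txt_dim, d)
        # one learned embedding per special token ([CLS] is modality-specific per branch)
        self.cls = nn.Parameter(torch.randn(1, 1, d))
        self.sep = nn.Parameter(torch.randn(1, 1, d))
        self.bos = nn.Parameter(torch.randn(1, 1, d))
        self.eos = nn.Parameter(torch.randn(1, 1, d))

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (B, N_vis, vis_dim); txt_feats: (B, N_t, txt_dim)
        B = vis_feats.size(0)
        v = self.vis_proj(vis_feats)
        t = self.txt_proj(txt_feats)
        expand = lambda p: p.expand(B, -1, -1)
        return torch.cat([expand(self.cls), v, expand(self.sep),
                          expand(self.bos), t, expand(self.eos)], dim=1)

# e.g. the Video-Text branch: 100 TSN appearance features + 20 GloVe word features
tokens = BranchInput(vis_dim=2048)(torch.randn(2, 100, 2048), torch.randn(2, 20, 300))
print(tokens.shape)  # (2, 124, 768) = 1 + 100 + 1 + 1 + 20 + 1
```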
3.2 THREE-STREAM TRANSFORMERS
As shown in Figure 1, we feed the aforementioned tokens into the three-stream transformers. The Video-Text, Detection-Text, and Action-Text branches are formed by S basic blocks, and each block consists of a self-attention module and a cross-modality attention module. Both the self-attention and cross-modality modules are followed by a feed forward layer.
Self-attention module. The self-attention module is designed to model the visual-linguistic alignments in different branches of transformers, i.e., Video-Text, Detection-Text, and Action-Text. Following (Vaswani et al., 2017), we compute the attention function between different tokens as follows.
$$A(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V, \quad (2)$$
where $Q \in \mathbb{R}^{N \times d}$ is the query matrix, $K \in \mathbb{R}^{N \times d}$ is the key matrix, $V \in \mathbb{R}^{N \times d}$ is the value matrix, and $N$ and $d$ are the number of tokens and the dimension of the embeddings, respectively. We use $h$ parallel heads of scaled dot-product attention to increase the diversity.
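A minimal sketch of Equation (2) is given below; splitting the 768-dim embeddings into 12 heads of 64 dimensions is a standard assumption consistent with the hyper-parameters in Section 4.2.

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Eq. (2): A(Q, K, V) = softmax(Q K^T / sqrt(d)) V."""
    d = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d)   # (..., N, N)
    return torch.softmax(scores, dim=-1) @ V

# h parallel heads are obtained by splitting the 768-dim embedding into 12 x 64
Q = K = V = torch.randn(2, 12, 124, 64)               # (batch, heads, tokens, d_head)
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)                                      # torch.Size([2, 12, 124, 64])
```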
Cross-modality attention module. Besides the self-attention module, we use the cross-modality attention module to align the interactions modeled by the three branches of transformers. Specifically, we compute the affinity matrix to guide the alignments between different interactions by injecting the information from other branches of transformers. Given the feature embeddings $H$, $X$ and $Y$ from the Video-Text, Detection-Text, and Action-Text transformers, we first calculate the corresponding affinity matrices $M_H$, $M_X$, and $M_Y$ using the dot-product operation followed by one FC layer and a ReLU activation. Here, as shown in Figure 1, we take the affinity matrix $M_Y \in \mathbb{R}^{N_a \times (N_v + N_d)}$ as an example, which is computed as
$$M_Y = \mathrm{softmax}\left(\mathrm{ReLU}\left(\mathrm{FC}_Y\left(Y \cdot (\oplus(H, X))^T\right)\right)\right), \quad (3)$$
where $\cdot$ indicates the dot product, and $\oplus(\cdot, \cdot)$ denotes the concatenation operation. We estimate the modality-wise normalization scores of the interactions using a softmax layer. $M_Y(i, j)$ denotes the normalized interaction score between the $i$-th entity in the Action-Text embeddings and the $j$-th entity in the Video-Text and Detection-Text embeddings. Based on this matrix, the feature embeddings of the Action-Text transformer can inject information from the other branches of transformers, i.e.,
$$Y' = \mathrm{FFN}\left(M_Y \cdot (\oplus(H, X))\right), \quad (4)$$
where $\cdot$ indicates the dot product, and FFN denotes the feed forward layer. Notably, we apply the cross-modality attention in all blocks of each branch of transformers. In this way, we can align the captions with the video, detection and action entities to enhance the discriminative representation for video captioning. It is noteworthy that we leverage the historical video-text features to obtain long-term sentence-level recurrence when generating the next sentences (Lei et al., 2020).
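The following sketch illustrates Equations (3)-(4) for the Action-Text branch; the inner width of the feed forward layer and the exact axis of the softmax normalization are assumptions, since the paper does not specify them.

```python
import torch
import torch.nn as nn

class CrossModalityAttention(nn.Module):
    """Eqs. (3)-(4) for the Action-Text branch: affinity over (H, X), then injection."""
    def __init__(self, n_other, d=768, d_ff=3072):
        super().__init__()
        self.fc = nn.Linear(n_other, n_other)          # FC_Y in Eq. (3)
        self.ffn = nn.Sequential(nn.Linear(d, d_ff), nn.ReLU(), nn.Linear(d_ff, d))

    def forward(self, Y, H, X):
        # Y: (B, Na, d); H: (B, Nv, d); X: (B, Nd, d)
        HX = torch.cat([H, X], dim=1)                  # concatenation ⊕(H, X)
        aff = Y @ HX.transpose(1, 2)                   # dot product: (B, Na, Nv+Nd)
        M_Y = torch.softmax(torch.relu(self.fc(aff)), dim=-1)   # Eq. (3)
        return self.ffn(M_Y @ HX)                      # Eq. (4): inject (H, X) information

B, Nv, Nd, Na, d = 2, 102, 502, 102, 768
cma = CrossModalityAttention(n_other=Nv + Nd, d=d)
Y2 = cma(torch.randn(B, Na, d), torch.randn(B, Nv, d), torch.randn(B, Nd, d))
print(Y2.shape)  # torch.Size([2, 102, 768])
```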
3.3 OPTIMIZATION
We use a multi-task loss to guide the training of our COST method, which is formed by three terms, i.e., $\mathcal{L}_v(\cdot, \cdot)$ for the Video-Text transformer, $\mathcal{L}_d(\cdot, \cdot)$ for the Detection-Text transformer, and $\mathcal{L}_a(\cdot, \cdot)$ for the Action-Text transformer, i.e.,
$$\mathcal{L} = \mathcal{L}_v(\ell_v, [\mathrm{CLS}_v]) + \lambda_d \cdot \mathcal{L}_d(\ell_d, [\mathrm{CLS}_d]) + \lambda_a \cdot \mathcal{L}_a(\ell_a, [\mathrm{CLS}_a]), \quad (5)$$
where $\lambda_d$ and $\lambda_a$ are preset parameters used to balance the three terms. $\mathcal{L}_v(\cdot, \cdot)$ is the cross-entropy loss used to penalize the errors of the predicted captions compared to the ground-truth descriptions. The ground-truth video-token labels $\ell_v$ are the indexes of the words in the caption sentences. $\mathcal{L}_d(\cdot, \cdot)$ is also a cross-entropy loss, used to penalize the errors of the predicted object categories compared to the pseudo category labels generated by the Faster R-CNN detector3. $\mathcal{L}_a(\cdot, \cdot)$ is the multi-label classification loss used to handle multiple actions appearing in one video sequence. Specifically, we first aggregate all action tokens into the modality type token $[\mathrm{CLS}_a]$ and then compute
3 Notably, we do not use the annotated objects for model training, but use the pseudo category labels $\ell_d$ generated by the Faster R-CNN detector as the ground-truth labels to enforce the network to maintain the original encoded semantic information of the detector.
the confidence score using a fully-connected layer, i.e., $\mathrm{FC}([\mathrm{CLS}_a])$. Following (Zhang et al., 2021; Sun et al., 2020), the loss function is computed as
$$\mathcal{L}_a(\ell_a, [\mathrm{CLS}_a]) = \log\Big(1 + \sum_{i \in \Omega_{pos}(\ell_a)} e^{-s_i}\Big) + \log\Big(1 + \sum_{j \in \Omega_{neg}(\ell_a)} e^{s_j}\Big), \quad (6)$$
which penalizes the cases where the confidence scores $s_i$ of the detected actions $\Omega_{pos}(\ell_a)$ in video sequences are less than the threshold 0, while the confidence scores $s_j$ of the undetected actions $\Omega_{neg}(\ell_a)$ are larger than 0. The ground-truth action labels $\ell_a$ are the most common verbs in the caption sentences, which are retrieved by an off-the-shelf part-of-speech tagger (Sun et al., 2019).
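A minimal sketch of this multi-label loss is shown below, using the identity $\log(1 + \sum_i e^{x_i}) = \mathrm{logsumexp}(\{0\} \cup \{x_i\})$; the function name and the binary label encoding are our own conventions.

```python
import torch

def action_loss(scores, labels):
    """Eq. (6): multi-label loss pushing detected-action scores above 0
    and undetected-action scores below 0.
    scores: (B, num_actions) from FC([CLS_a]); labels: (B, num_actions) in {0, 1}.
    """
    neg_inf = torch.full_like(scores, float("-inf"))
    pos = torch.where(labels.bool(), -scores, neg_inf)   # -s_i over Ω_pos, masked elsewhere
    neg = torch.where(labels.bool(), neg_inf, scores)    #  s_j over Ω_neg, masked elsewhere
    zeros = torch.zeros(scores.size(0), 1, device=scores.device)
    loss_pos = torch.logsumexp(torch.cat([zeros, pos], dim=1), dim=1)  # log(1 + Σ e^{-s_i})
    loss_neg = torch.logsumexp(torch.cat([zeros, neg], dim=1), dim=1)  # log(1 + Σ e^{s_j})
    return (loss_pos + loss_neg).mean()

loss = action_loss(torch.randn(4, 30), (torch.rand(4, 30) > 0.8).float())
print(loss.item())
```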
4 EXPERIMENTS
4.1 DATASETS AND EVALUATION METRICS
Datasets. We conducted several experiments on two challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a). YouCookII includes 2,000 long untrimmed videos describing 89 cooking recipes, where each video contains one reference paragraph and is further split into several event segments with annotated sentences. 1,333 and 457 video sequences are used for training and validation, respectively. Meanwhile, ActivityNet Captions is a large-scale dataset formed by 10,009 videos for training and 4,917 videos for validation and testing. Notably, following (Zhou et al., 2019), the original validation set is split into the ae-val subset with 2,460 videos for validation and the ae-test subset with 2,457 videos for testing.
Evaluation metrics. Similar to (Park et al., 2019; Lei et al., 2020; Zhu & Yang, 2020), we use several standard metrics to evaluate our method, including BLEU@n (B@n) (Papineni et al., 2002) for n-gram precision, METEOR (M) (Denkowski & Lavie, 2014) for n-gram matching with synonyms, CIDEr (C) (Vedantam et al., 2015) for tf-idf weighted n-gram similarity, and ROUGE (R@4) (Xiong et al., 2018; Park et al., 2019) for n-gram recall. Notably, two evaluation modes are considered, i.e., micro-level and paragraph-level. The micro-level evaluation reports the average score over all video sequences, while the paragraph-level evaluation first concatenates the caption sentences of each video and then computes the scores averaged across all videos based on the ground-truth paragraph captions. In both modes, CIDEr is used as the primary metric for ranking.
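As an example of how the primary metric can be computed, the snippet below uses the commonly available pycocoevalcap package; this is an illustrative assumption, since the paper does not state which scorer implementation it uses, and inputs would normally be PTB-tokenized first.

```python
# A minimal CIDEr computation, assuming the pycocoevalcap package
# (pip install pycocoevalcap); keys are segment ids, values are caption lists.
from pycocoevalcap.cider.cider import Cider

gts = {"vid1_seg0": ["mix the butter and sugar in a bowl"]}   # references
res = {"vid1_seg0": ["mix butter and sugar in a bowl"]}       # one prediction per id
score, per_caption = Cider().compute_score(gts, res)
print(f"CIDEr: {100 * score:.2f}%")
```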
4.2 IMPLEMENTATION DETAILS
Our COST algorithm is implemented using PyTorch. The source code will be released after acceptance. All the experiments are conducted on a machine with 2 NVIDIA RTX-3090 GPUs. We train the model using the strategies similar to BERT (Devlin et al., 2019). Specifically, we use Adam (Kingma & Ba, 2015) with an initial learning rate of 1e − 4, β1=0.9, β2=0.999, L2 weight decay of 0.01, and the learning rate warmup over the first 2 epochs. We use the early-stop strategy in the model training phase. That is, we train the model at most 20 epochs with the batch size 64. For each branch of transformers, we set the dimension of the feature embeddings d = 768, the number
of transformer blocks S = 2, and the number of attention heads h = 12. The loss weights λa and λd in equation 5 are set to 2.0 and 0.02 empirically.
Considering the trade-off between accuracy and complexity, we extract 2048-dim video features, the top K = 5 detection features (2048-dim) per frame, and 1024-dim action features from at most 100 frames uniformly sampled from the video sequences, i.e., Nv = Na = 102 and Nd = 502 with the two special tokens [CLS] and [SEP]. Meanwhile, we use the first 20 words of the caption sentences and compute the 300-dim GloVe features, i.e., Nt = 22 with the two special tokens [BOS] and [EOS]. For the COOT features, we concatenate the local clip-level (384-dim) and the global video-level (768-dim) features to describe the videos. After the fully-connected layer, all the tokens are converted into 768-dim features.
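The optimizer settings above can be reproduced with the sketch below; AdamW is used as the standard decoupled weight-decay variant of Adam, and the linear shape of the warmup schedule is an assumption.

```python
import torch

# Optimizer settings from Section 4.2; steps_per_epoch is dataset-dependent.
model = torch.nn.Linear(768, 768)            # placeholder for the full COST model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)

steps_per_epoch, warmup_epochs = 100, 2
warmup_steps = warmup_epochs * steps_per_epoch
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: min(1.0, (step + 1) / warmup_steps))
```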
4.3 EVALUATION RESULTS
We compare the proposed COST method with the state-of-the-art methods on the two challenging datasets, i.e., YouCookII and ActivityNet Captions, in Table 1. As shown in Table 1, our method achieves the best results on the YouCookII val subset and the ActivityNet Captions ae-test subset. Notably, without using the COOT features, our method improves the CIDEr score by nearly 10% compared to the second best method, i.e., MART (Lei et al., 2020), on the YouCookII val subset. This is attributed to the proposed cross-modality attention module across the collaborative three-stream transformers. Besides, our method exploits the local appearance information from the Detection-Text transformer for better accuracy. Using the COOT features, the overall video captioning results are significantly improved, and our COST method also performs favorably against other algorithms, improving the CIDEr score by over 3%. We observe that a similar trend appears on the ActivityNet Captions ae-test subset. Although the ActivityNet Captions dataset is more challenging than the YouCookII dataset, our method improves the performance considerably compared to the state-of-the-art methods.
As presented in Table 2, we also compare the proposed method to the LSTM based methods with input detections on the ActivityNet Captions ae-val subset. Compared to AdvInf (Park et al., 2019), our method produces higher scores for both B@4 and CIDEr, demonstrating the superiority of the collaborative transformers over LSTM in learning multi-modal representations. Meanwhile, MART (Lei et al., 2020) without input detections performs worse than our method in terms of the B@4, M and C metrics. It indicates that the Detection-Text transformer in our network can describe the Subjects and Objects in caption sentences more accurately.
According to Table 3, our method outperforms several BERT based methods in terms of the micro-level evaluation. In particular, the second best method, ActBERT (Zhu & Yang, 2020), relies on the same visual modalities as our method. However, it directly focuses on learning the alignment between text and the other visual modalities, which makes it difficult to exploit the discriminative semantic information. In contrast, our cross-modality attention module learns from three visual-linguistic interactions using the three-stream transformers, producing a better CIDEr score. It is worth mentioning that ActBERT uses BERT (Devlin et al., 2019) and S3D (Xie et al., 2017) to generate more powerful text and video features than those generated by GloVe (Pennington et al., 2014) and TSN (Wang et al., 2016) in our method.
Furthermore, the qualitative results of our COST method and MART (Lei et al., 2020) are shown in Figure 2. It can be seen that our COST generates more accurate captions than MART (Lei et al., 2020). This is attributed to two reasons. First, using the Detection-Text transformer, the Objects in caption sentences can be learned explicitly (e.g., shrimp, hand, and windows) or implicitly (e.g., from bowl to food processor, and from pan to wok). Second, the Action-Text transformer in our method can extract the key verbs in caption sentences such as add, blend, put and hold. MART (Lei et al., 2020) fails to recognize these key Subjects, Objects and Verbs. The results indicate that these two branches of transformers enforce the network to learn the key elements in the caption sentences.
4.4 ABLATION STUDY
To study the influence of different components in the proposed method, we conduct the ablation study on the YouCookII val subset, shown in Table 4. Notably, we use the appearance features extracted by TSN (Wang et al., 2016) in all COST-k variants, where k denotes the number of transformer branches retained in our COST method.
Effectiveness of multi-modal features. To verify the effectiveness of the multi-modal features, we construct three COST-1 variants, i.e., COST-1 (v), COST-1 (v+a) and COST-1 (v+a+d). In particular,
we only use the Video-Text transformer, but change the input features to combinations of the GloVe text features and the concatenated features from video (v), action (a) and detection (d). As shown in Table 4, the scores under all metrics are improved considerably by integrating the action or detection features. Moreover, the CIDEr score is boosted from 29.20% to 38.63% if we include all three modalities. It indicates that multi-modal features facilitate the generation of more accurate video captions.
Effectiveness of three-stream transformers. To demonstrate the effectiveness of the three-stream transformers compared to the simple feature concatenation used in COST-1, we construct the COST-k (k = 2, 3) variants, shown in Figure 1. It can be seen that the accuracy is improved by using the three-stream transformers, i.e., the CIDEr score is improved from 38.63% to 45.54%. Meanwhile, the accuracy is considerably improved by using more branches of transformers. This is because our cross-modality attention module is able to capture the most relevant semantic information from different visual-linguistic interactions. Compared to the COST-1 variants, our full model obtains more discriminative feature representations for video captioning.
Visualization of affinity matrix. To better understand the cross-modality attention module, we use a heatmap to visualize the affinity matrix $M_H \in \mathbb{R}^{N_v \times (N_d + N_a)}$ of the Video-Text transformer in Figure 3. At epoch 1, all values in the affinity matrix $M_H$ are similar and the maximal value in $M_H$ is only 0.001. The corresponding caption results are noisy, with false predictions of nouns and verbs, e.g., pan instead of butter. It indicates that the affinity matrix is randomly initialized and the interactions between different modalities are not yet aligned. After training for several epochs, a few entities dominate the affinity matrix with a maximal value of 0.3 (see the bright vertical lines in the heatmap). It indicates that our cross-modality attention module can successfully exploit the most relevant entities from other modalities to inject the information from other branches of transformers. In this way, the verb stir and the noun pan can be corrected to mix and butter for more accurate caption results.
5 CONCLUSION
In this paper, we propose the collaborative three-stream transformers to exploit the interactions of objects, and the actions/relations of objects, between different modalities of different granularities in the spatio-temporal domain. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers, which focuses on improving the prediction accuracy of the Subject, Object, and Predicate in the caption sentences. Several experiments conducted on the YouCookII and ActivityNet Captions datasets demonstrate the effectiveness of the proposed method. | 1. What is the main contribution of the paper in terms of its novel architecture design?
2. What are the strengths of the proposed approach, particularly regarding its separate streams and cross-modality attention module?
3. Do you have any concerns about the choice of using three separate transformer streams and the computational cost associated with it?
4. How does the reviewer assess the effectiveness of the proposed method compared to previous transformer-based approaches?
5. Are there any limitations or potential drawbacks of the proposed method that the author should address?
6. Would including human evaluation or ablation studies enhance the quality of the paper?
7. Should the author consider adding more datasets, such as MSVD or MSR-VTT, to further validate their approach? | Summary Of The Paper
Review | Summary Of The Paper
This work proposes a three-stream transformer-based architecture that models the interactions between subjects and their relationships to perform video captioning. To demonstrate the proposed approach, the authors show and compare experimental results with SOTA methods on the YouCookII and ActivityNet Captions datasets.
Review
This paper describes a reasonably clear story in the introduction section, and the story fully supports the experimental results. The model descriptions are mostly clear enough for reproducibility. The reviewer thinks this paper is of certain value to the field of video captioning.
** The strengths of this paper are summarized as follows:
In NLP, a sentence can be divided into three parts: subject, predicate, and object. According to this observation, this paper proposes a separate stream for each part, which seems interesting to me.
To combine and let features from different transformer streams interact, they propose a cross-modality attention module, which basically computes an affinity matrix to maintain visual alignment.
Most of the model design is reasonable and with care, which is important for application papers.
They conduct reasonable experiments to show the effectiveness of their propose method.
** The weaknesses of the paper are as follows:
In this paper, the authors use three separate transformer streams and, to interact between those streams, they use an affinity matrix. Previous transformer-based approaches (e.g. [1], [2]) use a co-attentional module to guide one modality using another modality. What's the reason for not using the same concept?
I think the video tokens and action tokens capture similar information using TSN. According to the paper, the video tokens extract temporal appearance information from the video clips while the action tokens collect motion information using optical flow. In that sense, the action tokens collect redundant information. What's the benefit of using these two separate streams?
We know that the computational cost of transformer-based models is quite high. Here the authors use three transformer streams. What's the computational cost of the model? Is it much higher than that of CNN-based or single-stream models?
Do you have any failure cases as qualitative results, where SOTA performs better than the proposed method?
It would be great if the authors included a human evaluation comparing the proposed method and the SOTA method.
To align interactions between different modalities, the authors compute an affinity matrix. Is this approach better than other fusion approaches? The authors should include an ablation for this contribution.
Is there any limitation of the proposed method?
The Microsoft Video Description (MSVD) dataset and the Microsoft Research Video-to-Text (MSR-VTT) dataset are the most common datasets for video captioning. The authors should consider at least one of them.
[1] Yu, Zhou, et al. "Deep modular co-attention networks for visual question answering." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. [2] Li, Guang, et al. "Entangled transformer for image captioning." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. |
ICLR | Title
Collaborative Three-Stream Transformers for Video Captioning
Abstract
As the most critical components in a sentence, subject, predicate, and object require special attention in the video captioning task. In this paper, we design the collaborative three-stream transformers to model the interactions of objects, and the actions/relations of objects between different modalities. Specifically, it is formed by three branches of transformers used to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain between videos and text, detected objects and text, and actions and text. Meanwhile, we design a cross-modality attention module to align the interactions modeled by the three branches of transformers. That is, an affinity matrix is computed to help align visual modalities by injecting the information from other interactions. In this way, the three branches of transformers can support each other to exploit the most discriminative semantic information in different modalities for accurate predictions of captions, especially for the subject, predicate, and object parts in a sentence. The whole model is trained in an end-to-end fashion. Extensive experiments conducted on two large-scale challenging datasets, i.e., YouCookII and ActivityNet Captions, demonstrate that the proposed method performs favorably against the state-of-the-art methods.
1 INTRODUCTION
Video captioning aims to generate natural language descriptions of video content, which has attracted much attention in recent years along with the rapidly increasing amount of video recorded in daily life. It helps blind or D/deaf people enjoy videos. However, as noted in (Xiong et al., 2018; Park et al., 2019; Lei et al., 2020), it is very challenging to generate natural paragraph descriptions due to the difficulty of producing relevant, less redundant, and semantically coherent sentences.
Recently, researchers attempt to use the transformer model to solve the video captioning task (Vaswani et al., 2017; Dai et al., 2019; Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021), which relies on the self-attention mechanism to describe the interactions between different modalities of the input data, such as video, audio, and text. In practice, the aforementioned methods generally concatenate the features extracted from individual modalities, or use self-attention to model the interactions between extracted features. Although they advance the state-of-the-art of video captioning, it is still far from satisfactory in real applications due to the domain gap between different modalities. Then, a question arises, “How do we fill in the domain gap and capture the interactions among visual and linguistic modalities for video captioning?”
Before answering this question, let us first look at the basic grammar rules. Generally, a sentence (Krishna et al., 2017a; Zhou et al., 2018a) takes the following form, i.e.,
Women wear Arabian skirts on a stage.
Notably, Subject, Object, and Predicate are the three most critical elements in a sentence. They indicate the objects, the actions of objects, and the interactions among different objects. To improve the accuracy of the Subject, Object, and Predicate predictions in a sentence, we propose a new COllaborative three-Stream Transformers (COST) model to exploit the visual-linguistic interactions of different granularities in the spatio-temporal domain between different modalities. Specifically, the COST model is formed by three branches of transformers, including the Video-Text, Detection-Text, and
Action-Text transformers. The Video-Text transformer is used to model the interactions between the global video appearances and the linguistic texts. The Detection-Text transformer is used to model the interactions between the detected objects in individual video frames and the text, which enforces the model to focus on the objects being aligned in the visual and linguistic modalities, i.e., indicating the Subjects and Objects in caption sentences. The Action-Text transformer is designed to model the actions/relations of objects between the visual and linguistic modalities, i.e., indicating the Predicate in caption sentences. Meanwhile, to align the interactions modeled by the three branches of transformers, we introduce a cross-modality attention module. In particular, an affinity matrix is computed to represent the relevance among visual modalities and help inject the information from other interactions. In this way, different branches of transformers support each other to exploit the most discriminative semantic information in different modalities, and enforce the model to pay more attention to generating accurate Subject, Object and Predicate predictions. The whole model is trained in an end-to-end fashion using the Adam algorithm (Kingma & Ba, 2015).
Several experiments are conducted on two publicly available challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a), to demonstrate the superior performance of the proposed method compared to the state-of-the-art methods (Zhou et al., 2018b; Dai et al., 2019; Park et al., 2019; Lei et al., 2020). Specifically, our COST method achieves the top CIDEr scores, i.e., 60.78% and 29.64%, on the YouCookII val set and the ActivityNet ae-test set, improving by 3.54% and 1.45% over the state-of-the-art.
The main contributions of this paper are summarized as follows. (1) We develop the new collaborative three-stream transformers to learn the interactions between the visual-linguistic modalities of different granularities in the spatio-temporal domain, which enforces the model to generate accurate Subject, Object, and Predicate predictions. (2) To align the interactions described in the three branches of transformers, we design a cross-modality attention module. (3) Extensive experiments conducted on two challenging datasets show that our method performs favorably against the state-of-the-art methods.
2 RELATED WORKS
Network architecture. Video captioning has received much attention in recent years. Most of previous video captioning methods (Yu et al., 2016; Pan et al., 2017; Zhang et al., 2018) attempt to perform sequence-to-sequence learning using the encoder-decoder paradigm (Chen et al., 2019). The convolutional neural networks (CNNs) (Zheng et al., 2020) and long short term memory networks (LSTM) (Pei et al., 2019; Park et al., 2019) are adopted to learn discriminative feature embeddings for accurate predictions. Recently, the transformer model dominates this field due to its superior performance compared with CNNs and LSTM. Zhou et al. (2018b) propose an end-to-end trained transformer model, where the encoder is designed to encode the video into semantic representations, and the proposal decoder is used to decode from the encoding with different anchors to form video event proposals. Sun et al. (2019) design the VideoBERT model to learn bidirectional joint distributions over sequences of visual and linguistic tokens. Similarly, Lu et al. (2019) extend the BERT architecture (Devlin et al., 2019) to a multi-modal two-stream model, which processes both visual and textual inputs separately but interactions are conducted between streams. Lei et al. (2020) develop the memory-augmented recurrent transformer, which uses a highly summarized memory state from the video clips and the sentence history to facilitate better prediction of the next sentence. Different from the aforementioned methods, our method attempts to exploit the visual-linguistic interactions of different granularities in spatio-temporal domain across different modalities, with a focus on the most critical components in the sentence, i.e., subject, predicate and object.
Multi-modal cross-attention mechanism. The interactions between different modalities are critical for the video captioning task. Recent transformer-based methods (Iashin & Rahtu, 2020; Zhu & Yang, 2020; Tang et al., 2021) use the cross-attention module to learn correlations across different modalities. For example, Iashin & Rahtu (2020) concatenate the learned embeddings from multiple modalities, e.g., video, audio and speech, for event description. Zhu & Yang (2020) propose the tangled transformer block to encode three sources of information, i.e., global actions, local regional objects, and linguistic descriptions. Global-local correspondences are discovered by exploiting the contextual information. Tang et al. (2021) use frame-level dense captions as an auxiliary text input for better video and language associations, where the constrained attention loss is used to force the model to automatically focus on the best-matched caption from a pool of misaligned caption candidates. In contrast, we design the cross-modality attention module integrated in the collaborative three-stream transformers to align three types of visual-linguistic interactions, leading to more discriminative semantic cues for better caption generation.
Multi-modal pre-training models. Large-scale pre-training is another effective way to improve the accuracy of captioning models. Specifically, the jointly trained video and language models (Huang et al., 2020; Luo et al., 2020; Ging et al., 2020) on the large-scale datasets, such as YouTube-8M (Abu-El-Haija et al., 2016) and HowTo100M (Miech et al., 2019) with automatic speech recognition1 transcripts, provide discriminative features for downstream tasks. Huang et al. (2020) construct a dense video captioning dataset, i.e., Video Timeline Tags (ViTT), and explore several multi-modal sequence-to-sequence pre-training strategies using transformers (Vaswani et al., 2017). Luo et al. (2020) also use transformers (Vaswani et al., 2017) with two single-modal encoders, a cross encoder, and a decoder. Recently, Ging et al. (2020) develop the Cooperative hierarchical Transformer (COOT) to model the interactions between different levels of granularity and modalities, which achieves superior results for video captioning.
3 OUR APPROACH
As discussed above, we design the collaborative three-stream transformers to model the interactions of objects, and actions/relations of objects between different modalities, which is formed by three branches of transformers, i.e., Video-Text, Detection-Text, and Action-Text transformers. Specifically, the video and text inputs are firstly encoded to extract the multi-modal feature embeddings. After that, the embeddings are fed into the three-stream transformers to exploit the visual-linguistic interactions between videos and text of different granularities in spatio-temporal domain, i.e., global videos, detections, and actions. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers. The overall architecture of the proposed method is shown in Figure 1.
3.1 MULTI-MODALITY TOKENS
Three kinds of tokens, i.e., visual tokens, linguistic tokens, and special tokens, are used to encode the video and text inputs, which are described as follows. Visual tokens. For the visual tokens, we use three kinds of tokens with different granularities in spatio-temporal domain, that is the video tokens, the detection tokens, and the action tokens.
• Video tokens provide the global semantic information in the video sequence. In contrast to (Lei et al., 2020), we only use the appearance features extracted by Temporal Segment Networks (TSN) (Wang et al., 2016) (denoted as TSN-APP in Figure 1) as the video tokens, i.e., $\{f^v_1, f^v_2, \cdots, f^v_{N_v}\}$, where $f^v_i$ is the extracted feature of the $i$-th video clip, and $N_v$ is the number of video clips. Notably, we can also leverage the more powerful multi-modal feature extraction method COOT (Ging et al., 2020), which is pre-trained on the large-scale HowTo100M dataset (Miech et al., 2019), to improve the performance.
• Detection tokens are used to enforce the model to focus on the Subjects or Objects in caption sentences. Similar to (Park et al., 2019; Lu et al., 2019; Zhu & Yang, 2020), we use the Faster R-CNN method to detect the objects in each frame, which is pre-trained on the Visual Genome dataset (Krishna et al., 2017b). After that, the detection features in Faster R-CNN corresponding to the objects with the highest confidence scores in $K$ categories2 are used to generate the detection tokens for each frame. We use $\{f^d_1, f^d_2, \cdots, f^d_{N_d}\}$ to denote the set of detection tokens, where $f^d_i$ is the $i$-th detection feature, and $N_d$ is the total number of detections in the video sequence.
• Action tokens are designed to enforce the model to concentrate on the Predicates in caption sentences. Following (Lei et al., 2020), the optical flow features of video sequences are extracted by TSN (Wang et al., 2016) (denoted as TSN-MOT in Figure 1) to generate the
1 https://developers.google.com/youtube/v3/docs/captions
2 If the category number of the detected objects is less than K in a frame, we select the K detected objects with the highest confidence scores regardless of the object categories to generate the detection tokens.
action tokens, which are used to describe the actions/relations of objects. The action tokens are denoted as $\{f^a_1, f^a_2, \cdots, f^a_{N_a}\}$, where $f^a_i$ is the motion feature of the $i$-th video clip, and $N_a$ is the total number of action tokens.
Linguistic tokens. We break down the captions of video sequences into individual words and compute the corresponding linguistic tokens using the GloVe model (Pennington et al., 2014). The linguistic tokens are denoted as $\{f^t_1, f^t_2, \cdots, f^t_{N_t}\}$, where $f^t_i$ is the extracted feature of the $i$-th word using the GloVe model, and $N_t$ is the total number of words.
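A minimal sketch of this step is given below, assuming a locally downloaded glove.6B.300d.txt file; lowercasing and the zero-vector fallback for out-of-vocabulary words are our own assumptions.

```python
import numpy as np

def load_glove(path="glove.6B.300d.txt"):
    """Load pre-trained GloVe vectors; the file path is an assumed local download."""
    vecs = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vecs[word] = np.asarray(values, dtype=np.float32)
    return vecs

glove = load_glove()
caption = "mix the butter and sugar in a bowl"
unk = np.zeros(300, dtype=np.float32)                      # fallback for OOV words
tokens = [glove.get(w, unk) for w in caption.lower().split()]
print(len(tokens), tokens[0].shape)                        # 8 (300,)
```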
Special tokens. Besides the aforementioned tokens, we also introduce two kinds of special tokens in the transformer, similar to BERT (Devlin et al., 2019). The first one is the modality type token [CLS], which is added at the beginning of the visual features to denote which modality the following tokens come from. The second one is the three kinds of separation tokens, i.e., [SEP], [BOS], and [EOS]. [SEP] is used at the end of the visual tokens to separate them from the linguistic tokens, [BOS] is used to denote the beginning of the linguistic tokens, and [EOS] is used to denote the ending of the linguistic tokens. In addition, we use a fully-connected layer to encode the aforementioned tokens in the same dimension. Thus, the inputs for the three-stream transformers are computed as
$$\left\{[\mathrm{CLS}^{(\cdot)}],\ f^{(\cdot)}_1, f^{(\cdot)}_2, \cdots, f^{(\cdot)}_{N_{(\cdot)}},\ [\mathrm{SEP}],\ [\mathrm{BOS}],\ f^t_1, \cdots, f^t_{N_t},\ [\mathrm{EOS}]\right\}, \quad (1)$$
where $(\cdot) \in \{v, d, a\}$ indicates the video, detection, and action tokens, respectively. We also use the positional encoding strategy (Vaswani et al., 2017) in the Video-Text, Detection-Text, and Action-Text transformers to describe the order information of caption sentences.
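For completeness, a sketch of the sinusoidal positional encoding of Vaswani et al. (2017) is shown below; whether the encoding is added or concatenated is not specified in the paper, so addition is assumed here.

```python
import math
import torch

def positional_encoding(n_positions, d=768):
    """Sinusoidal positional encoding of Vaswani et al. (2017)."""
    pe = torch.zeros(n_positions, d)
    pos = torch.arange(n_positions, dtype=torch.float).unsqueeze(1)
    div = torch.exp(torch.arange(0, d, 2, dtype=torch.float) * (-math.log(10000.0) / d))
    pe[:, 0::2] = torch.sin(pos * div)
    pe[:, 1::2] = torch.cos(pos * div)
    return pe

tokens = torch.randn(2, 124, 768)            # e.g. a Video-Text input sequence
tokens = tokens + positional_encoding(124)   # broadcast over the batch dimension
```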
3.2 THREE-STREAM TRANSFORMERS
As shown in Figure 1, we feed the aforementioned tokens into the three-stream transformers. The Video-Text, Detection-Text, and Action-Text branches are formed by S basic blocks, and each block consists of a self-attention module and a cross-modality attention module. Both the self-attention and cross-modality modules are followed by a feed forward layer.
Self-attention module. The self-attention module is designed to model the visual-linguistic alignments in different branches of transformers, i.e., Video-Text, Detection-Text, and Action-Text. Following (Vaswani et al., 2017), we compute the attention function between different tokens as follows.
$$A(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d}}\right)V, \quad (2)$$
where $Q \in \mathbb{R}^{N \times d}$ is the query matrix, $K \in \mathbb{R}^{N \times d}$ is the key matrix, $V \in \mathbb{R}^{N \times d}$ is the value matrix, and $N$ and $d$ are the number of tokens and the dimension of the embeddings, respectively. We use $h$ parallel heads of scaled dot-product attention to increase the diversity.
Cross-modality attention module. Besides the self-attention module, we use the cross-modality attention module to align the interactions modeled by the three branches of transformers. Specifically, we compute the affinity matrix to guide the alignments between different interactions by injecting the information from other branches of transformers. Given the feature embeddings $H$, $X$ and $Y$ from the Video-Text, Detection-Text, and Action-Text transformers, we first calculate the corresponding affinity matrices $M_H$, $M_X$, and $M_Y$ using the dot-product operation followed by one FC layer and a ReLU activation. Here, as shown in Figure 1, we take the affinity matrix $M_Y \in \mathbb{R}^{N_a \times (N_v + N_d)}$ as an example, which is computed as
$$M_Y = \mathrm{softmax}\left(\mathrm{ReLU}\left(\mathrm{FC}_Y\left(Y \cdot (\oplus(H, X))^T\right)\right)\right), \quad (3)$$
where $\cdot$ indicates the dot product, and $\oplus(\cdot, \cdot)$ denotes the concatenation operation. We estimate the modality-wise normalization scores of the interactions using a softmax layer. $M_Y(i, j)$ denotes the normalized interaction score between the $i$-th entity in the Action-Text embeddings and the $j$-th entity in the Video-Text and Detection-Text embeddings. Based on this matrix, the feature embeddings of the Action-Text transformer can inject information from the other branches of transformers, i.e.,
$$Y' = \mathrm{FFN}\left(M_Y \cdot (\oplus(H, X))\right), \quad (4)$$
where $\cdot$ indicates the dot product, and FFN denotes the feed forward layer. Notably, we apply the cross-modality attention in all blocks of each branch of transformers. In this way, we can align the captions with the video, detection and action entities to enhance the discriminative representation for video captioning. It is noteworthy that we leverage the historical video-text features to obtain long-term sentence-level recurrence when generating the next sentences (Lei et al., 2020).
3.3 OPTIMIZATION
We use a multi-task loss to guide the training of our COST method, which is formed by three terms, i.e., $\mathcal{L}_v(\cdot, \cdot)$ for the Video-Text transformer, $\mathcal{L}_d(\cdot, \cdot)$ for the Detection-Text transformer, and $\mathcal{L}_a(\cdot, \cdot)$ for the Action-Text transformer, i.e.,
$$\mathcal{L} = \mathcal{L}_v(\ell_v, [\mathrm{CLS}_v]) + \lambda_d \cdot \mathcal{L}_d(\ell_d, [\mathrm{CLS}_d]) + \lambda_a \cdot \mathcal{L}_a(\ell_a, [\mathrm{CLS}_a]), \quad (5)$$
where $\lambda_d$ and $\lambda_a$ are preset parameters used to balance the three terms. $\mathcal{L}_v(\cdot, \cdot)$ is the cross-entropy loss used to penalize the errors of the predicted captions compared to the ground-truth descriptions. The ground-truth video-token labels $\ell_v$ are the indexes of the words in the caption sentences. $\mathcal{L}_d(\cdot, \cdot)$ is also a cross-entropy loss, used to penalize the errors of the predicted object categories compared to the pseudo category labels generated by the Faster R-CNN detector3. $\mathcal{L}_a(\cdot, \cdot)$ is the multi-label classification loss used to handle multiple actions appearing in one video sequence. Specifically, we first aggregate all action tokens into the modality type token $[\mathrm{CLS}_a]$ and then compute
3 Notably, we do not use the annotated objects for model training, but use the pseudo category labels $\ell_d$ generated by the Faster R-CNN detector as the ground-truth labels to enforce the network to maintain the original encoded semantic information of the detector.
the confidence score using a fully-connected layer, i.e., $\mathrm{FC}([\mathrm{CLS}_a])$. Following (Zhang et al., 2021; Sun et al., 2020), the loss function is computed as
$$\mathcal{L}_a(\ell_a, [\mathrm{CLS}_a]) = \log\Big(1 + \sum_{i \in \Omega_{pos}(\ell_a)} e^{-s_i}\Big) + \log\Big(1 + \sum_{j \in \Omega_{neg}(\ell_a)} e^{s_j}\Big), \quad (6)$$
which penalizes the cases where the confidence scores $s_i$ of the detected actions $\Omega_{pos}(\ell_a)$ in video sequences are less than the threshold 0, while the confidence scores $s_j$ of the undetected actions $\Omega_{neg}(\ell_a)$ are larger than 0. The ground-truth action labels $\ell_a$ are the most common verbs in the caption sentences, which are retrieved by an off-the-shelf part-of-speech tagger (Sun et al., 2019).
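Putting Equation (5) together, the sketch below combines the three terms with the weights reported in Section 4.2; the vocabulary size, the number of detector classes, and the function signature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

# Combining the three terms of Eq. (5) with the weights from Section 4.2.
lambda_a, lambda_d = 2.0, 0.02

def total_loss(caption_logits, word_ids, det_logits, det_pseudo_labels, act_loss):
    """caption_logits: (B, T, vocab); det_logits: (B, Nd, n_cls); act_loss: scalar Eq. (6)."""
    L_v = F.cross_entropy(caption_logits.flatten(0, 1), word_ids.flatten())
    L_d = F.cross_entropy(det_logits.flatten(0, 1), det_pseudo_labels.flatten())
    return L_v + lambda_d * L_d + lambda_a * act_loss

loss = total_loss(torch.randn(2, 20, 1000), torch.randint(1000, (2, 20)),
                  torch.randn(2, 500, 1601), torch.randint(1601, (2, 500)),
                  torch.tensor(0.7))
print(loss.item())
```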
4 EXPERIMENTS
4.1 DATASETS AND EVALUATION METRICS
Datasets. We conducted several experiments on two challenging datasets, i.e., YouCookII (Zhou et al., 2018a) and ActivityNet Captions (Krishna et al., 2017a). YouCookII includes 2,000 long untrimmed videos describing 89 cooking recipes, where each video contains one reference paragraph and is further split into several event segments with annotated sentences. 1,333 and 457 video sequences are used for training and validation, respectively. Meanwhile, ActivityNet Captions is a large-scale dataset formed by 10,009 videos for training and 4,917 videos for validation and testing. Notably, following (Zhou et al., 2019), the original validation set is split into the ae-val subset with 2,460 videos for validation and the ae-test subset with 2,457 videos for testing.
Evaluation metrics. Similar to (Park et al., 2019; Lei et al., 2020; Zhu & Yang, 2020), we use several standard metrics to evaluate our method, including BLEU@n (B@n) (Papineni et al., 2002) for n-gram precision, METEOR (M) (Denkowski & Lavie, 2014) for n-gram matching with synonyms, CIDEr (C) (Vedantam et al., 2015) for tf-idf weighted n-gram similarity, and ROUGE (R@4) (Xiong et al., 2018; Park et al., 2019) for n-gram recall. Notably, two evaluation modes are considered, i.e., micro-level and paragraph-level. The micro-level evaluation reports the average score over all video sequences, while the paragraph-level evaluation first concatenates the caption sentences of each video and then computes the scores averaged across all videos based on the ground-truth paragraph captions. In both modes, CIDEr is used as the primary metric for ranking.
4.2 IMPLEMENTATION DETAILS
Our COST algorithm is implemented using PyTorch. The source code will be released after acceptance. All the experiments are conducted on a machine with 2 NVIDIA RTX-3090 GPUs. We train the model using the strategies similar to BERT (Devlin et al., 2019). Specifically, we use Adam (Kingma & Ba, 2015) with an initial learning rate of 1e − 4, β1=0.9, β2=0.999, L2 weight decay of 0.01, and the learning rate warmup over the first 2 epochs. We use the early-stop strategy in the model training phase. That is, we train the model at most 20 epochs with the batch size 64. For each branch of transformers, we set the dimension of the feature embeddings d = 768, the number
of transformer blocks S = 2, and the number of attention heads h = 12. The loss weights λa and λd in equation 5 are set to 2.0 and 0.02 empirically.
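A minimal sketch of this optimization setup is given below; the paper specifies only the warmup, so the linear decay after warmup and the steps-per-epoch value are assumptions of ours.

```python
import torch

def build_optimizer(model, total_epochs=20, warmup_epochs=2, steps_per_epoch=100):
    # Adam with the stated hyperparameters; weight decay handled as plain L2.
    optimizer = torch.optim.Adam(
        model.parameters(), lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01)
    warmup_steps = warmup_epochs * steps_per_epoch
    total_steps = total_epochs * steps_per_epoch

    def lr_lambda(step):
        if step < warmup_steps:                       # linear warmup over 2 epochs
            return step / max(1, warmup_steps)
        # assumed linear decay to zero afterwards
        return max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```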
Considering the trade-off between accuracy and complexity, we extract 2048-dim video features, top-K (K = 5) 2048-dim detection features, and 1024-dim action features from at most 100 frames uniformly sampled from each video sequence, i.e., N_v = N_a = 102 and N_d = 502 with the two special tokens [CLS] and [SEP]. Meanwhile, we use the first 20 words of the caption sentences and compute 300-dim GloVe features, i.e., N_t = 22 with the two special tokens [BOS] and [EOS]. For the COOT features, we concatenate the local clip-level (384-dim) and the global video-level (768-dim) features to describe the videos. After a fully-connected layer, all tokens are converted into 768-dim features.
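The token-count bookkeeping and feature projection described above can be sketched as follows; the variable names and the single shared projection per modality are our assumptions.

```python
import torch
import torch.nn as nn

N_frames, K, N_words = 100, 5, 20
N_v = N_a = N_frames + 2      # [CLS] + 100 frames + [SEP] = 102 tokens
N_d = K * N_frames + 2        # [CLS] + 5 detections x 100 frames + [SEP] = 502
N_t = N_words + 2             # [BOS] + 20 words + [EOS] = 22

# Modality-specific fully-connected layers into the shared 768-dim token space.
proj_video  = nn.Linear(2048, 768)
proj_det    = nn.Linear(2048, 768)
proj_action = nn.Linear(1024, 768)
proj_text   = nn.Linear(300, 768)   # GloVe word features

video_tokens = proj_video(torch.randn(1, N_v, 2048))   # -> (1, 102, 768)
```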
4.3 EVALUATION RESULTS
We compare the proposed COST method with state-of-the-art methods on the two challenging datasets, i.e., YouCookII and ActivityNet Captions, in Table 1. As shown in Table 1, our method achieves the best results on the YouCookII val subset and the ActivityNet Captions ae-test subset. Notably, without using the COOT features, our method improves the CIDEr score by nearly 10% over the second-best method, i.e., MART (Lei et al., 2020), on the YouCookII val subset. This is attributed to the proposed cross-modality attention module across the collaborative three-stream transformers. Besides, our method exploits the local appearance information from the Detection-Text transformer for better accuracy. Using the COOT features, the overall video captioning results are significantly improved, and our COST method also performs favorably against other algorithms, improving the CIDEr score by over 3%. A similar trend appears on the ActivityNet Captions ae-test subset. Although the ActivityNet Captions dataset is more challenging than YouCookII, our method improves the performance considerably compared to the state-of-the-art methods.
As presented in Table 2, we also compare the proposed method to LSTM-based methods with input detections on the ActivityNet Captions ae-val subset. Compared to AdvInf (Park et al., 2019), our method produces higher scores for both B@4 and CIDEr, demonstrating the superiority of the collaborative transformers over LSTMs for learning multi-modal representations. Meanwhile, MART (Lei et al., 2020) without input detections performs worse than our method in terms of the B@4, M and C metrics. This indicates that the Detection-Text transformer in our network can describe the Subjects and Objects in caption sentences more accurately.
According to Table 3, our method outperforms several BERT-based methods in terms of the micro-level evaluation. In particular, the second-best ActBERT (Zhu & Yang, 2020) relies on the same visual modalities as our method. However, it focuses directly on learning the alignment between text and the other visual modalities, which makes it difficult to exploit discriminative semantic information. In contrast, our cross-modality attention module learns from three visual-linguistic interactions using the three-stream transformers, producing a better CIDEr score. It is worth mentioning that ActBERT uses BERT (Devlin et al., 2019) and S3D (Xie et al., 2017) to generate more powerful text and video features than those generated by GloVe (Pennington et al., 2014) and TSN (Wang et al., 2016) in our method.
Furthermore, the qualitative results of our COST method and MART (Lei et al., 2020) are shown in Figure 2. It can be seen that COST generates more accurate captions than MART. This is attributed to two reasons. First, using the Detection-Text transformer, the Objects in caption sentences can be learned explicitly (e.g., shrimp, hand, and windows) or implicitly (e.g., from bowl to food processor, and from pan to wok). Second, the Action-Text transformer in our method can extract the key verbs in caption sentences, such as add, blend, put and hold. MART fails to recognize these key Subjects, Objects and Verbs. The results indicate that these two branches of transformers encourage the network to learn the key elements of the caption sentences.
4.4 ABLATION STUDY
To study the influence of the different components of the proposed method, we conduct an ablation study on the YouCookII val subset, as shown in Table 4. Notably, we use the appearance features extracted by TSN (Wang et al., 2016) in all COST-k variants, where k denotes the number of transformer branches retained in our COST method.
Effectiveness of multi-modal features. To verify the effectiveness of the multi-modal features, we construct three COST-1 variants, i.e., COST-1 (v), COST-1 (v+a) and COST-1 (v+a+d). In particular,
we only use the Video-Text transformer, but change the input features to combinations of the GloVe text features and the concatenated features from video (v), action (a) and detection (d). As shown in Table 4, the scores under all metrics are improved considerably by integrating the action or detection features. Moreover, the CIDEr score is boosted from 29.20% to 38.63% when we include all three modalities. This indicates that multi-modal features facilitate generating more accurate video captions.
Effectiveness of three-stream transformers. To demonstrate the effectiveness of the three-stream transformers compared to the simple feature concatenation used in COST-1, we construct three COST-k (k = 2, 3) variants, shown in Figure 1. The accuracy is improved by using the three-stream transformers, i.e., the CIDEr score rises from 38.63% to 45.54%, and improves considerably further as more branches of transformers are used. This is because our cross-modality attention module is able to capture the most relevant semantic information from the different visual-linguistic interactions. Compared to the COST-1 variants, our full model obtains more discriminative feature representations for video captioning.
Visualization of affinity matrix. To better understand the cross-modality attention module, we use a heatmap to visualize the affinity matrix M_H ∈ R^{N_v×(N_d+N_a)} of the Video-Text transformer in Figure 3. At epoch 1, all values in the affinity matrix M_H are similar and the maximal value in M_H is only 0.001. The corresponding caption results are noisy, with false predictions of nouns and verbs, e.g., pan instead of butter. This indicates that the affinity matrix is randomly initialized and the interactions between different modalities are not yet aligned. After training for several epochs, a few entities dominate the affinity matrix, with a maximal value of 0.3 (see the bright vertical lines in the heatmap). This indicates that our cross-modality attention module can successfully exploit the most relevant entities from the other modalities to inject information from the other branches of transformers. In this way, the verb stir and the noun pan are corrected to mix and butter, yielding more accurate captions.
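For reproducing this kind of inspection, a plotting sketch is given below; the styling and the detach/CPU handling are ours, and M_H is assumed to be the (N_v × (N_d + N_a)) attention affinity tensor.

```python
import matplotlib.pyplot as plt
import torch

def plot_affinity(M_H: torch.Tensor, epoch: int) -> None:
    # Heatmap of the Video-Text affinity matrix, as in Figure 3.
    plt.figure(figsize=(8, 3))
    plt.imshow(M_H.detach().cpu().numpy(), aspect="auto", cmap="viridis")
    plt.colorbar(label="attention weight")
    plt.xlabel("detection / action tokens")
    plt.ylabel("video tokens")
    plt.title(f"affinity matrix at epoch {epoch} (max = {M_H.max().item():.3f})")
    plt.show()
```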
5 CONCLUSION
In this paper, we propose collaborative three-stream transformers to exploit the interactions of objects, and the actions/relations of objects, between different modalities at different granularities in the spatio-temporal domain. Meanwhile, the cross-modality attention module is designed to align the interactions modeled by the three branches of transformers, which improves the prediction accuracy of the Subject, Object, and Predicate in the caption sentences. Experiments conducted on the YouCookII and ActivityNet Captions datasets demonstrate the effectiveness of the proposed method.

1. What is the focus and contribution of the paper on video captioning?
2. What are the strengths of the proposed approach, particularly in terms of capturing Video-Text, Detection-Text, and Action-Text information?
3. What are the weaknesses of the paper, especially regarding technical details and experimental comparisons?
4. Do you have any concerns about the effectiveness of the affinity matrix or the difference between video tokens and action tokens?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
The paper proposes an algorithm for video captioning, which consists of three stream transformers. The three streams are used to capture Video-Text, Detection-Text and Action-Text information. Cross-modality attention is designed to enforce the information exchange across different modalities. The paper presented experimental results on YouCookII and ActivityNet. Compared with the selected methods, the proposed method shows better performance.
Review
While the overall idea of the paper seems to be straightforward, many technical details need to be further clarified. The following are my detailed comments about the paper:
1, The paper claims that the affinity matrix is a major contribution. It is unclear to me how effective the affinity matrix is for improving the final results of video captioning. Figure 3 provides some visualizations, but it will be more helpful if the paper can provide an apple-to-apple comparison of with vs. without the affinity matrix on the final performance.
2, What is the difference between video tokens and action tokens? One is using RGB, and the other is using optical flow?
3, From the results in Table 1, with or without COOT seems to make a big difference in the final results. This indicates the importance of using large-scale datasets for pretraining, as COOT is pre-trained with HowTo100M. The paper should also compare results with or without pre-training on the Visual Genome dataset for the detection tokens. It is unclear to me whether the proposed method itself is working, or whether the gains simply come from using more large-scale datasets for pre-training. The paper should list what datasets are used for pre-training when comparing different approaches.
4, The cross-modality attention in Figure 1 is a little confusing to me. It doesn’t directly correspond to Eq.(3) and (4). The paper should further improve Figure 1 to avoid any misunderstanding.
5, From Figure 1, word embedding N_{t} is concatenated with N_{a}, N_{v} and N_{t}. But from Eq.(3) and Eq.(4), N_{t} is gone. Where is N_{t} in Eq.(3) and (4)? Is it used for computing the affinity matrix?
6, The loss function in Eq.(5) requires the ground truth for both action and object detection besides video captioning. Do the other papers also use the supervision information from action and object detection to train their video captioning algorithms? Please clarify so that we understand what annotation information is used for training. |
ICLR | Title
Manifold Alignment via Feature Correspondence
Abstract
We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features have correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data.
1 INTRODUCTION
In biology and other natural science settings we often have the problem that data are measured from the same system but with different sensors or in different days where sensors are calibrated differently. This is often termed batch effect in biology and can include, for example, drastic variations between subjects, experimental settings, or even times of day when an experiment is conducted. In such settings it is important to globally and locally align the datasets such that they can be combined for effective further analysis. Otherwise, measurement artifacts may dominate downstream analysis. For instance, clustering the data will group samples by measurement time or sensor used rather than by biological or meaningful differences between datapoints.
Recent works regard the two datasets as views of the same system and construct a multiview diffusion geometry, but all of them require at least a partial bijection, if not a full one, between views (Lafon et al., 2006; Ham et al., 2003; 2005; Wang & Mahadevan, 2008; Tuia & Camps-Valls, 2016). Other works attempt to match data points directly in ambient space, or via local data geometry, and these can be very sensitive to differences in sampling density rather than data geometry (Haghverdi et al., 2018). Here, we present a principled approach, called harmonic alignment, to correct this type of effect based on the manifold assumption.
The manifold assumption holds that high dimensional data originates from an intrinsically low dimensional, smoothly varying space that is mapped via nonlinear functions to observable high dimensional measurements. Thus, we assume that the datasets are transformed versions of the same low dimensional manifold. We learn the manifolds separately from the two datasets using diffusion geometric approaches and then find an isometric transformation that maps one manifold to the other. Note that we are not aligning points to points; indeed, there may be sampling and density differences in the data. Instead, our manifold learning approach uses an anisotropic kernel that captures the geometry of the data to align, rather than the point-by-point matching done in other methods.
Our method involves first embedding each dataset separately into diffusion components, and then finding an isometric transformation that aligns these diffusion representations. To find such a transformation, we utilize the duality between diffusion coordinates and geometric harmonics, which act as generalized Fourier harmonics on the graph. The diffusion components are eigenvectors of a Markov-normalized data diffusion operator, whose eigenvalues indicate the frequency of each eigenvector. We seek a transformation from one set of eigenvectors to the other via feature correspondences in the data.
While datapoint correspondences may be difficult or impossible to obtain, since many biological measurements are destructive, feature correspondences are often available: for instance, single-cell measurements collected on the same device contain counts for the same genes, albeit affected by batch differences. Thus, when corresponding features are transformed via the graph Fourier transform (GFT) into diffusion coordinates, their representations should be similar, up to small frequency-proximal perturbations. For instance, features that vary slowly across the manifold should load onto low-frequency eigenvectors of the Markov matrix. This insight allows us to create a correlation matrix between the eigenvectors of one dataset and those of the other, based on the correlation of feature loadings onto the eigenvectors. Moreover, since the eigenvectors represent frequency harmonics, we need not compute the entire correlation matrix but only the near-diagonal values. This assumes the two manifolds are perturbed such that their low- and high-frequency eigenvector spaces remain similar. We then find a linear transformation that maps the eigenvectors of one space into those of the other by orthogonalizing this correlation matrix, which allows us to align the two datasets. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant to batch effects and sample-specific artifacts, and low-pass filter this geometry to denoise the unified manifold. Thus, in addition to aligning the manifolds, our method denoises them as well.
We demonstrate the results of our method on artificial manifolds created from rotated MNIST digits, corrupted MNIST digits, as well as on single-cell biological data measuring peripheral blood cells. In each case our method successfully aligns the manifolds such that they have appropriate neighbors within and across the two datasets. We show an application to transfer learning where a lazy classifier trained on one dataset is applied to the other dataset after alignment. Further, comparisons with recently developed methods such as the MNN-based method of Haghverdi et al. (2018) show significant improvements in performance and denoising ability.
2 HARMONIC ALIGNMENT
A typical and effective assumption in machine learning is that high dimensional data originates from an intrinsic low dimensional manifold that is mapped via nonlinear functions to observable high dimensional measurements; this is commonly referred to as the manifold assumption. Formally, let M^d be a hidden d dimensional manifold that is only observable via a collection of n ≫ d nonlinear functions f_1, . . . , f_n : M^d → R that enable its immersion in a high dimensional ambient space as F(M) = {f(z) = (f_1(z), . . . , f_n(z))^T : z ∈ M^d} ⊆ R^n, from which data is collected. Conversely, given a dataset X = {x_1, . . . , x_N} ⊂ R^n of high dimensional observations, manifold learning methods assume its data points originate from a sampling Z = {z_i}_{i=1}^N ⊂ M^d of the underlying manifold via x_i = f(z_i), i = 1, . . . , N, and aim to learn a low dimensional intrinsic representation that approximates the manifold geometry of M^d. A popular approach to this manifold learning task is to construct a diffusion geometry from the data (Coifman & Lafon, 2006) and embed the data into diffusion coordinates, which provide a natural global coordinate system derived from the Laplace operator over manifold geometries, as explained in Section 2.1. However, this approach, as well as other manifold learning methods, implicitly assumes that the feature functions {f_j}_{j=1}^n represent data collection technologies (e.g., sensors or markers) that operate in a consistent manner on all samples in Z. While this assumption may be valid in some fields, biological data collection (and more generally, data collection in the empirical sciences) is often highly affected by a family of phenomena known as batch effects, which introduce nonnegligible variance between different data batches due to various uncontrollable factors. These include, for example, drastic variations between subjects, experimental settings, or even the times of day at which an experiment is conducted.
Therefore, in such settings, one should consider a collection of S samples {X^{(s)}}_{s=1}^S, each originating from feature functions {f_j^{(s)}}_{j=1}^n that aim to measure the same quantities in the data, but are also affected by sample-dependent artifacts. While each sample can be analyzed to find its intrinsic structure, their union into a single dataset X = ⋃_{s=1}^S X^{(s)} often yields an incoherent geometry biased by batch effects, where neither the relations between samples nor those within each sample can be clearly seen. To address such artifacts, and to construct a unified geometry of multiple batches (i.e., samples or datasets) together, we propose to first embed each batch separately in diffusion coordinates, and then find an isometric transformation that aligns these diffusion representations.
In order to find such a transformation, we utilize the duality between diffusion coordinates and geometric harmonics that act as generalized Fourier harmonics, as shown in graph signal processing (Shuman et al., 2013). As explained in Section 2.2, this duality allows us to capture cross-batch relations between diffusion coordinates, and to orthogonalize the resulting matrix to provide a map between batch-specific diffusion representations. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant to both batch effects and batch-specific artifacts. While our approach generalizes naturally to any number of batches, for simplicity we focus our formulation on the case of two batches.
2.1 DIFFUSION GEOMETRY & GRAPH HARMONICS
The first step in our approach is to capture the intrinsic geometry of each batch X^{(s)} using the diffusion maps method of Coifman & Lafon (2006), which non-linearly embeds the data in a new coordinate system (i.e., diffusion coordinates) that is often regarded as representing a data manifold or, more generally, a diffusion geometry over the data. The diffusion maps construction starts by considering local similarities defined via a kernel K(x, y), x, y ∈ X^{(s)}, that captures local neighborhoods in the data. A popular choice for K is the Gaussian kernel e^{−‖x−y‖²/σ}, where σ > 0 is interpreted as a user-configurable neighborhood size. Next, these similarities are normalized to define transition probabilities p(x, y) = K(x, y)/‖K(x, ·)‖₁, which are organized in an N × N row-stochastic matrix P that describes a Markovian diffusion process over the intrinsic geometry of the data. Finally, a diffusion map is defined by taking the eigenvalues 1 = µ₁ ≥ µ₂ ≥ · · · ≥ µ_N and corresponding eigenvectors {φ_j}_{j=1}^N of P, and mapping each data point x ∈ X^{(s)} to an N dimensional vector Φ_t(x) = [µ₁^t φ₁(x), . . . , µ_N^t φ_N(x)]^T, where t represents a diffusion time (i.e., the number of transitions considered in the diffusion process). In this work, we denote the diffusion map of the entire dataset X^{(s)} as Φ_t^{(s)}. We refer the reader to Coifman & Lafon (2006) for further details and mathematical derivation, but note that in general, as t increases, most of the eigenvalue weights µ_j^t, j = 1, . . . , N, become numerically negligible, and thus truncated diffusion map coordinates (i.e., only the nonnegligible ones) can be used for dimensionality reduction purposes.
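A minimal NumPy sketch of this construction is given below, assuming a plain Gaussian kernel and a dense eigendecomposition; the choices of σ, t, and the number of retained coordinates are user parameters, not values prescribed by the text.

```python
import numpy as np

def diffusion_map(X, sigma=1.0, t=1, n_coords=10):
    """Diffusion coordinates of Coifman & Lafon (2006); a sketch."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = np.exp(-d2 / sigma)                               # Gaussian kernel
    d = K.sum(axis=1)                                     # degrees ||K(x_i, .)||_1
    # P = D^{-1} K is conjugate to the symmetric S = D^{-1/2} K D^{-1/2},
    # so we can use the stable symmetric eigensolver.
    S = K / np.sqrt(np.outer(d, d))
    mu, psi = np.linalg.eigh(S)
    order = np.argsort(-mu)
    mu, psi = mu[order], psi[:, order]
    phi = psi / np.sqrt(d)[:, None]                       # right eigenvectors of P
    return (mu[:n_coords] ** t) * phi[:, :n_coords]       # Phi_t(x), truncated
```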
Much work has been done in various fields on applications of diffusion maps as a whole, as well as of individual diffusion coordinates (i.e., eigenvectors of P), in data analysis (Farbman et al., 2010; Barkan et al., 2013; Mahmoudi & Sapiro, 2009; Angerer et al., 2015; Haghverdi et al., 2016). In particular, as discussed in Coifman & Lafon (2006) and Nadler et al. (2006), the diffusion coordinates are closely related to eigenvectors of Laplace operators on manifolds, as well as to their discretizations as eigenvectors of graph Laplacians, studied previously, for example, in Belkin & Niyogi (2002). Indeed, the similarities measured by K can be considered as determining edge weights of a graph structure defined over the data. Formally, we define this graph by taking every data point in X as a vertex, and defining weighted edges between them via an N × N adjacency matrix W with W_{i,j} = K(x_i, x_j), i, j = 1, . . . , N. Then, the (normalized) graph Laplacian is defined as L = I − D^{−1/2} W D^{−1/2}, with D a diagonal degree matrix (i.e., D_{i,i} = Σ_{j=1}^N W_{i,j}). Finally, it is easy to see that L = I − D^{1/2} P D^{−1/2}, and thus it can be verified that the eigenvectors of L can be written as ψ_j = D^{1/2} φ_j, with corresponding eigenvalues λ_j = 1 − µ_j. It should be noted that if data is uniformly sampled from a manifold (as considered in Belkin & Niyogi, 2002), these two sets of eigenvectors coincide and the diffusion coordinates can be considered Laplacian eigenvectors (or eigenfunctions, in continuous settings).
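The two Laplacian identities above are easy to check numerically; the sketch below verifies L = I − D^{1/2} P D^{−1/2} on a small synthetic graph of our own making.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.random((50, 50))
W = (W + W.T) / 2                                   # symmetric adjacency matrix
d = W.sum(axis=1)
P = W / d[:, None]                                  # Markov matrix D^{-1} W
L = np.eye(50) - W / np.sqrt(np.outer(d, d))        # I - D^{-1/2} W D^{-1/2}
L_alt = np.eye(50) - np.sqrt(d)[:, None] * P / np.sqrt(d)[None, :]
assert np.allclose(L, L_alt)                        # the two Laplacian forms agree
lam, psi = np.linalg.eigh(L)
phi = psi / np.sqrt(d)[:, None]                     # psi_j = D^{1/2} phi_j, mu_j = 1 - lambda_j
```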
A central tenet of graph signal processing is that the Laplacian eigenfunctions {ψ_j}_{j=1}^N can be regarded as generalized Fourier harmonics (Shuman et al., 2013; 2016), i.e., graph harmonics.
Algorithm 1 Manifold Alignment via Feature Correspondence

Input: Datasets X = {X^{(1)}, X^{(2)}}, where X^{(s)} has N^{(s)} observations by d^{(s)} features
Output: Aligned graph Laplacian L^{(Y)}

1: for X^{(s)} ∈ X do
2:   Compute the anisotropic weight matrix W^{(s)} (Section 2.2) and degree matrix D^{(s)}
3:   Construct the normalized graph Laplacian L^{(s)} and its truncated eigensystem Λ̄^{(s)} = diag[λ_i^{(s)}]_{i=2}^{N^{(s)}}, Ψ̄^{(s)} = [ψ_i^{(s)}]_{i=2}^{N^{(s)}}, with L^{(s)} ψ_i^{(s)} = λ_i^{(s)} ψ_i^{(s)} and Λ̄^{(s)} ≻ 0
4:   Compute the diffusion map Φ^{(s)} = e^{−t Λ̄^{(s)}} Ψ̄^{(s)}
5:   Compute the spectral-domain wavelet transform tensor Ĥ^{(s)}_{i,j,k} (Equation 1)
6: end for
7: Compute intra-band harmonic correlations between the datasets, M′_{:,:,k} (Section 2.2)
8: Compute the total inter-band correlation M = Σ_{k=1}^τ M′_{:,:,k}
9: Orthogonalize M via SVD: T = U V^T
10: Construct the transformed matrix E = [ Φ^{(1)},  e^{−t Λ̄^{(1)}} Ψ̄^{(1)} T ;  e^{−t Λ̄^{(2)}} Ψ̄^{(2)} T^T,  Φ^{(2)} ]
11: Embed E using a Gaussian kernel to obtain L^{(Y)}
Indeed, a classic result in spectral graph theory shows that the discrete Fourier basis can be derived as the Laplacian eigenvectors of ring graphs (see, e.g., Olfati-Saber, 2007, Proposition 10). Based on this interpretation, a graph Fourier transform (GFT) is defined on graph signals (i.e., functions f : X^{(s)} → R over the vertices of the graph) as f̂(λ_j) = ⟨f, ψ_j⟩, j = 1, . . . , N, in analogy to the classic discrete Fourier transform (DFT). Further, we can also write the GFT in terms of the diffusion coordinates as f̂(λ_j) = ⟨f, D^{1/2} φ_j⟩, given their relation to graph harmonics. Therefore, up to appropriate weighting, the diffusion coordinates can conceptually be interpreted as intrinsic harmonics of the data, and conversely, the graph harmonics can be considered (conceptually) as intrinsic coordinates of data manifolds. In Section 2.2, we leverage this duality between coordinates and harmonics in order to capture relations between the data manifolds of individual batches, and in Section 2.3 we use them to align their intrinsic coordinates and construct a unified data manifold over them.
2.2 CROSS-GRAPH HARMONIC CORRELATION
We now turn our attention to the relation between two batches X^{(s₁)}, X^{(s₂)} via their intrinsic data manifold structure, as captured by diffusion coordinates or, equivalently, graph harmonics. We note that, as discussed extensively in Coifman & Lafon (2006), a naïve construction of an intrinsic data graph with a Gaussian kernel (as described, for simplicity, in Section 2.1) may be severely distorted by density variations in the data. Such distortion would be detrimental in our case, as the resulting diffusion geometry and its harmonic structure would no longer reflect a stable (i.e., batch-invariant) intrinsic "shape" of the data. Therefore, we follow the normalization suggested there to separate data geometry from density, and define a graph structure (i.e., adjacency matrix) over each batch via an anisotropic kernel given by
W^{(s)}_{i,j} = \frac{K(x^{(s)}_i, x^{(s)}_j)}{\|K(x^{(s)}_i, \cdot)\|_1^{1/2}\, \|K(x^{(s)}_j, \cdot)\|_1^{1/2}}, \qquad i, j = 1, \dots, N^{(s)}, \; s \in \{s_1, s_2\},
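In code, this normalization amounts to dividing the kernel by the square roots of its row sums; a sketch follows, with the Gaussian bandwidth σ left as a user choice.

```python
import numpy as np

def anisotropic_kernel(X, sigma=1.0):
    """Density-normalized kernel of Section 2.2 (exponent 1/2 on the row sums)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / sigma)
    q = K.sum(axis=1)                           # ||K(x_i, .)||_1 for each point
    return K / np.sqrt(np.outer(q, q))          # W_ij = K_ij / (q_i q_j)^{1/2}
```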
where K is the previously defined Gaussian kernel. This graph structure is then used, as previously described, to construct the intrinsic harmonic structure {ψ_j^{(s)}}_{j=1}^{N^{(s)}} of each batch. While the intrinsic geometries constructed by our graph structures should describe similar "shapes" for the two datasets, there is no guarantee that their computed intrinsic coordinates will match. Indeed, it is easy to see how various permutations of these coordinates can arise if some eigenvalues have multiplicity greater than one (i.e., their monotonic ordering is no longer deterministic); even beyond that, in practical settings batch effects often result in various misalignments
(e.g., rotations or other affine transformations) between derived intrinsic coordinates. Therefore, to properly recover relations between multiple batches, we aim to quantify relations between their coordinates, or more accurately, between their graph harmonics.
We note that if we had even a partial overlap between the data points of the two batches, this task would be trivially enabled by taking correlations between these harmonics. However, since we assume a setting without such predetermined overlap, we must rely on other properties that are independent of individual data points. To this end, we now consider the feature functions {f_j^{(s)}}_{j=1}^n and our initial assumption that corresponding functions aim to measure equivalent quantities in the batches (or datasets). Therefore, while they may differ in their original raw form, we expect their expression over the intrinsic structure of the data (e.g., as captured by GFT coefficients) to correlate, at least partially, between batches. We use this property to compute cross-batch correlations between graph harmonics based on the GFT of corresponding data features. To formulate this, it is convenient to extend the definition of the GFT from functions (or vectors) to matrices, by slight abuse of the inner product notation, as X̂^{(s)} = ⟨X, Ψ^{(s)}⟩ = [Ψ^{(s)}]^T X^{(s)}, where X consists of data features as columns and Ψ^{(s)} has the graph harmonics as columns (both with rows indexing data points).
Notice that the resulting Fourier matrix X̂^{(s)}, for each batch, no longer depends on individual data points; instead, it expresses the graph harmonics in terms of data features. Therefore, we can now use this matrix to formulate cross-batch harmonic correlations by considering inner products between rows of these matrices. Further, we need not consider all correlations between graph harmonics, since we also have access to their frequency information, expressed via the associated Laplacian eigenvalues {λ_j^{(s)}}_{j=1}^{N^{(s)}}. Therefore, instead of computing correlations between every pair of harmonics across batches, we only consider them within local frequency bands, defined via appropriate graph filters, as explained in the following.
Let g(t) be a smooth window defined on the interval [−0.5, 0.5] as g(t) = sin(0.5π cos²(πt)). Then, by translating this window along the real line, we obtain τ equally spaced wavelet windows that can be applied to the eigenvalues λ_j^{(s)} in order to smoothly partition the spectrum of each data graph. This construction is known as the itersine filter bank, which can be shown to be a tight frame (Perraudin et al., 2014). The resulting windows g_{ξ_i}(λ) are centered at frequencies Ξ = {ξ₁, . . . , ξ_τ}. The generating function for these wavelets ensures that each g_{ξ_i} halfway overlaps with g_{ξ_{i+1}}. This property implies smooth transitions between the weights of consecutive frequency bands. Furthermore, as a tight frame, this filter bank has the property that Σ_{i=1}^τ g_{ξ_i}(λ) = 1 for any eigenvalue, ensuring that any filtering done with the filter bank G = {g_{ξ_i}}_{i=1}^τ behaves uniformly across the spectrum. Together, these two properties imply that cross-batch correlations between harmonics within and between bands across the respective batch spectra are robust. To obtain such bandlimited correlations, we construct the following filter bank tensor
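The window construction can be sketched as below; the exact placement of the window centers and the edge handling are our assumptions, and τ ≥ 2 is assumed.

```python
import numpy as np

def itersine_windows(lmax, tau):
    """tau half-overlapping itersine windows tiling the spectrum [0, lmax]; a sketch."""
    def g(t):  # generating window, supported on [-0.5, 0.5]
        return np.where(np.abs(t) <= 0.5,
                        np.sin(0.5 * np.pi * np.cos(np.pi * t) ** 2), 0.0)
    centers = np.linspace(0.0, lmax, tau)
    width = 2.0 * lmax / (tau - 1)   # each window spans two center spacings
    # capture each center via a default argument so the lambdas do not alias
    return [lambda lam, c=c: g((lam - c) / width) for c in centers]
```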
\hat{H}^{(s)}_{i,j,k} = g_{\xi_k}\big(\lambda^{(s)}_i\big)\, \big[\psi^{(s)}_i\big]^T X^{(s)}_{:,j} \quad \text{for } 2 \le i \le N^{(s)}. \qquad (1)

Each slice Ĥ^{(s)}_{:,:,k} of this tensor corresponds to the Fourier matrix X̂^{(s)} with rows scaled by g_{ξ_k}. Then, we use these filter bank tensors to compute bandlimited correlations via M′_{:,:,k} = Ĥ^{(s₁)}_{:,:,k} [Ĥ^{(s₂)}_{:,:,k}]^T, and finally merge these to generate a combined matrix M^{(s₁,s₂)} = Σ_{k=1}^τ M′_{:,:,k}, which we refer to as the harmonic (cross-batch) correlation matrix. This step, combined with the half-overlapping windows discussed above, allows flexibility in aligning harmonics across bands, which is demonstrated in practice in Section 2.3.
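Putting equation 1 and the band-wise products together, the correlation matrix can be sketched as follows; the shapes and the truncation of the zero-frequency harmonic are assumed to have been handled by the caller.

```python
import numpy as np

def harmonic_correlation(psi1, lam1, X1, psi2, lam2, X2, windows):
    """Cross-batch harmonic correlation M of Section 2.2 (a sketch).

    psi*: (N, m) truncated graph harmonics; lam*: (m,) eigenvalues;
    X*: (N, n) feature matrices with corresponding columns across batches.
    """
    F1 = psi1.T @ X1                      # GFT of the features, batch 1: (m1, n)
    F2 = psi2.T @ X2                      # GFT of the features, batch 2: (m2, n)
    M = np.zeros((F1.shape[0], F2.shape[0]))
    for g in windows:                     # accumulate bandlimited correlations
        H1 = g(lam1)[:, None] * F1        # rows scaled by the window weights
        H2 = g(lam2)[:, None] * F2
        M += H1 @ H2.T                    # M'_{:,:,k}, summed over bands k
    return M
```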
2.3 ISOMETRIC ALIGNMENT
Given the harmonic correlation matrix M^{(s₁,s₂)}, we now define an isometric transformation between the intrinsic coordinate systems of the two data manifolds. Such a transformation ensures that our alignment fits the two coordinate systems together without breaking the rigid structure of either batch, thus preserving their intrinsic structure. To formulate it, we recall that isometric transformations are given by orthogonal matrices, and thus we can rephrase our task as finding the best approximation of M^{(s₁,s₂)} by an orthogonal matrix. This approximation is a well-studied problem, dating back to Schönemann (1966), who showed that it can be obtained directly from the singular value decomposition M = USV^T by taking T^{(s₁,s₂)} = UV^T.
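The orthogonalization step is a one-liner in practice:

```python
import numpy as np

def orthogonalize(M):
    """Closest orthogonal matrix to M (Schonemann, 1966): T = U V^T."""
    U, _, Vt = np.linalg.svd(M)
    return U @ Vt
```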
Finally, given the isometric transformation defined by T^{(s₁,s₂)}, we can align the data manifolds of the two batches and define a unified intrinsic coordinate system for the entire data. While such alignment could equivalently be phrased in terms of the diffusion coordinates {φ_j^{(s)}}_{j=1}^{N^{(s)}} or the harmonic coordinates {ψ_j^{(s)}}_{j=1}^{N^{(s)}}, we opt here for the latter, as it relates more directly to the computed harmonic correlations. Therefore, we construct the transformed embedding matrix E as
E = \begin{bmatrix} \bar{\Psi}^{(s_1)} & \bar{\Psi}^{(s_1)}\mathbf{T} \\ \bar{\Psi}^{(s_2)}\mathbf{T}^T & \bar{\Psi}^{(s_2)} \end{bmatrix} \exp\left(-t \begin{bmatrix} \bar{\Lambda}^{(s_1)} & 0 \\ 0 & \bar{\Lambda}^{(s_2)} \end{bmatrix}\right). \qquad (2)
where we drop the superscripts of T, as they are clear from context; Λ̄^{(s)} are diagonal matrices consisting of the nonzero Laplacian eigenvalues of each view, and Ψ̄^{(s)} consist of the corresponding eigenvectors (i.e., harmonics) as columns. We note that the truncated zero eigenvalues correspond to zero frequencies (i.e., flat constant harmonics), and therefore they only encode global shifts that we aim to remove in the alignment process anyway. Accordingly, this truncation is also applied to the harmonic correlation matrix M^{(s₁,s₂)} prior to its orthogonalization. Finally, we note that this construction is equivalent to the diffusion map, albeit using a slightly different derivation of a discretized heat kernel (popular, for example, in graph signal processing works such as Shuman et al. (2016)), with the parameter t again serving a purpose analogous to diffusion time.
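Assembling equation 2 from the pieces above can be sketched as follows; psi1/psi2 are assumed to be the truncated harmonics with the zero-frequency column already removed, and T the orthogonal map from batch-1 to batch-2 harmonics.

```python
import numpy as np

def aligned_embedding(psi1, lam1, psi2, lam2, T, t=1):
    """Unified embedding E of equation 2 (a sketch)."""
    top = np.hstack([psi1, psi1 @ T])        # batch 1 in both coordinate systems
    bot = np.hstack([psi2 @ T.T, psi2])      # batch 2 in both coordinate systems
    E = np.vstack([top, bot])
    weights = np.exp(-t * np.concatenate([lam1, lam2]))
    return E * weights[None, :]              # right-multiply by exp(-t diag(Lambda))
```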
3 EMPIRICAL RESULTS
3.1 ROTATIONAL ALIGNMENT
As a proof of principle we first demonstrate harmonic alignment of two circular manifolds. To generate these manifolds, we rotated two different MNIST examples of the digit ’3’ 360 degrees and sampled a point for each degree (See figure 1a). As we noted in section 2.2, the manifold coordinates obtained by diffusion maps are invariant to the phase of the data. In this example it is clear that each ’3’ manifold is out of phase with the other.
Figure 1b demonstrates the simple rotation that is learned by harmonic alignment between the two embeddings. On the left side, we see the out-of-phase embeddings. Taking nearest neighbors in this space illustrates the disconnection between the embeddings: nearest neighbors are only formed for within-sample points. After alignment, however, we see that the embeddings are in phase with each other because nearest neighbors in the aligned space span both samples and are in the same orientation with each other.
3.2 FEATURE CORRUPTION
Next, we assessed the ability of harmonic alignment to recover k-neighborhoods after random feature corruption (figure 2).
To do this, we drew random samples from MNIST, X^{(1)} and X^{(2)}, of size N^{(1)} = N^{(2)} = 1000. Then, for each trial in this experiment, we drew 784² samples from a unit normal distribution to create a 784 × 784 random matrix, which we orthogonalized to yield the corruption matrix O₀. To vary the amount of feature corruption, we then randomly substituted ⌊0.01 · p · 784⌋ columns from I to make O_p. Right multiplication of X^{(2)} by this matrix yielded corrupted images with only p% of pixels preserved (figure 2b, 'corrupted'). To assess the recovery of k-neighborhoods, we then performed lazy classification on X^{(2)}O_p using only the labels of its neighbors in X^{(1)}. The results of this experiment, performed for p = {0, 5, 10, . . . , 95, 100}, are reported in figure 2a. For robustness, at each p we sampled three different non-overlapping pairs X^{(1)}, X^{(2)}, and for each pair we sampled three O_p matrices, each with random identity columns, for a total of nine trials per p.
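The corruption protocol can be sketched as follows; the random-seed handling is ours, and note that substituting identity columns means O_p is only approximately orthogonal.

```python
import numpy as np

def corruption_matrix(p, dim=784, rng=None):
    """O_p: random orthogonal matrix with floor(0.01*p*dim) identity columns (a sketch)."""
    rng = rng or np.random.default_rng()
    Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))   # orthogonalized O_0
    keep = rng.choice(dim, size=int(0.01 * p * dim), replace=False)
    O = Q.copy()
    O[:, keep] = np.eye(dim)[:, keep]     # preserve p% of the pixels
    return O
```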
In general, unaligned, mutual nearest neighbors (MNN), and harmonic alignment with any filter set cannot recover k-neighborhoods under total corruption; 10% serves as a baseline accuracy that results from the rotation having 10% overlap in the manifold space. On the other hand, for small filter
choices we observe that harmonic alignment quickly recovers ≥ 80% accuracy and outperforms MNN and unaligned classifications consistently except under high correspondence.
Next we examined the ability of harmonic alignment to reconstruct corrupted data (figure 2b). We performed the same corruption procedure as before with p = 25 and selected 10 examples of each digit in MNIST. We show the ground truth fromX(2) and the corrupted resultX(2)O25 in figure 2b. Then, a reconstruction was performed by setting each pixel in a new image to the dominant class average of the ten nearest X(1) neighbors. In the unaligned case we see that most examples turn into smeared fives or ones; this is likely the intersection formed by X(1) and X(2)O25 that accounts for the 10% baseline accuracy in figure 2a. On the other hand, the reconstructions produced by harmonic alignment resemble their original input examples.
3.3 COMPARISONS
In figure 3 we compare the runtime, k-NN accuracy, and transfer learning capabilities of our method with two other contemporary alignment methods. First, we examine the unsupervised algorithm proposed by Wang & Mahadevan (2009) for generating weight matrices between two different samples. The algorithm first creates a local distance matrix of size k around each point and its four nearest neighbors. Then it computes an optimal match between the k-NN distance matrices of each pair of points in X^{(1)} and X^{(2)} by comparing all k! permutations of the k-NN matrices and taking the minimal Frobenius norm between such permuted matrices. We report runtime results for k = 5, as k = 10 failed to complete after running for 8 hours. Because the Wang & Mahadevan (2009) method merely gives a weight matrix that can be used with separate algorithms for computing the final features, we report accuracy results using their implementation. Regardless of input size, we were unable to recover k-neighborhoods for datasets with 35% uncorrupted columns (figure 3b), despite the method's computational cost (figure 3a).
A more scalable approach to manifold alignment has emerged recently in the computational biology literature (Haghverdi et al., 2018). This approach uses mutual nearest neighbor (MNN) matching to compute a mapping between datasets, based on the assumption that if two points are truly neighbors, they will resemble each other in both datasets. Because this approach amounts to building a k-nearest neighbors graph for each dataset and then choosing a set of neighbors between the datasets, MNN scales comparably to our method (figure 3a). Additionally, MNN is able to recover 20-30% of k-neighborhoods when only 35% of features match (figure 3b); this is an improvement over Wang & Mahadevan (2009), but substantially lower than what harmonic alignment achieves. We note that the performance of harmonic alignment correlated with input size, whereas MNN did not improve with more points.
3.4 TRANSFER LEARNING
An interesting use of manifold alignment algorithms is transfer learning. In this setting, an algorithm is trained to perform well on one dataset, and the goal is to extend the algorithm to the other dataset after alignment. In figure 3c we explore the utility of harmonic alignment in transfer learning and compare it to MNN and the method proposed by Wang & Mahadevan (2009).
In this experiment, we first draw 1000 uncorrupted examples of MNIST digits and construct a diffusion map to use as our training set. Next, we take 65% corrupted, unlabeled points (see section 3.2) in batches of 1,000, 2,000, 4,000 and 8,000 as test sets, on which we perform lazy classification using the labels of the uncorrupted examples. At 1:8 test:training sample sizes, harmonic alignment outperformed Wang & Mahadevan (2009) and MNN, aligning up to 60% correct k-neighborhoods. In addition to showing the use of manifold alignment in transfer learning, this demonstrates the robustness of our algorithm to dataset imbalances.
3.5 BIOLOGICAL DATA
To illustrate the need for robust manifold alignment in computational biology, we turn to a simple real-world example obtained from Amodio et al. (2018) (figure 4). This dataset was collected by mass cytometry (CyTOF) of peripheral blood mononuclear cells (PBMC) from patients who contracted dengue fever. Subsequently, the Montgomery lab at Yale University experimentally introduced these PBMCs to Zika virus strains.
The canonical response to dengue infection is upregulation of interferon gamma (IFNγ) (Chesler & Reiss, 2002; Chakravarti & Kumaria, 2006; Braga et al., 2001). During the early immune response, IFNγ works in tandem with acute-phase cytokines such as tumor necrosis factor α (TNFα) to induce the febrile response and inhibit viral replication (Ohmori et al., 1997). In the PBMC dataset, we thus expect to see upregulation of these two cytokines together, which we explore in figure 4.
In figure 4a, we show the relationship between IFNγ and TNFα without denoising. Note that there is a substantial difference between the IFNγ distributions of sample 1 and sample 2 (Earth Mover’s Distance (EMD) = 2.699). In order to identify meaningful relationships in CyTOF data, it is common to denoise it first. We used a graph filter to denoise the cytokine data (Van Dijk et al., 2018). The results of this denoising are shown in figure 4b. This procedure introduced more technical artifacts by enhancing the difference between batches, as seen by the increased EMD (3.127) between the IFNγ distributions of both patients. This is likely due to a substantial connectivity difference between the two batch subgraphs in the total graph.
Next, we performed harmonic alignment of the two patient profiles. We show the results in figure 4c. Harmonic alignment corrected the difference between the IFNγ distributions and restored the canonical correlation of IFNγ and TNFα. This example illustrates the utility of harmonic alignment for biological data, where it can be used for integrated analysis of data collected across different experiments, patients, and time points.
4 CONCLUSION
We presented a novel method for aligning, or batch-normalizing, two datasets by learning and aligning their intrinsic manifold dimensions. Our method leverages the fact that common or corresponding features in the two datasets should have similar harmonics on the data graph. Our harmonic alignment method finds an isometric transformation that maximizes the similarity of the frequency harmonics of common features. Results show that our method successfully aligns artificially misaligned data as well as biological data containing batch effects. Our method has the advantages that it aligns manifold geometry rather than density (and is thus insensitive to sampling differences in the data) and, further, that it denoises the datasets to obtain alignments of significant manifold dimensions rather than noise. Future applications of harmonic alignment can include the integration of data from different measurement types performed on the same system, where features have known correlations.

1. How does the proposed method address the issue of batch effects in biology and natural science measurements?
2. What is the novelty of the proposed approach compared to existing methods for aligning data batches?
3. How does the choice of diffusion kernel method and the assumption of isometric rotation help in aligning the data batches?
4. What are the potential limitations or weaknesses of the proposed method, particularly regarding spectral methods and their sensitivity to perturbations?
5. Can the authors provide more explanation or examples to clarify the construction of tensors in Equation (1) and its purpose in achieving invariance?
6. How does the tradeoff of choosing a proper window function impact the stability of correlation computation across different batches?
7. Can the authors provide more context or references to support their claim that the underlying manifold structure can be used to align data batches?
8. Are there any specific applications or case studies where the proposed method has been successfully applied to real-world data sets?

Review
The authors pointed out that measurements in biology and natural science suffer from batch effects, such as variations between batches of data measured at different times or by different sensors. In order to analyze different batches of data, an alignment or a calibration is frequently needed. The authors propose to exploit the fact that, though there is variation among different batches, these batches all share an underlying intrinsic manifold structure, which may admit a set of alignable coordinates.
Technically, the authors propose to use the diffusion kernel method, which is one of the spectral methods, to extract the harmonic-like eigenfunctions defined on the manifold for each batch of data. Using these harmonic-like coordinates, the authors assume there exists an isometric rotation between each pair of batches such that their coordinates can be aligned under this orthogonal rotation.
Comments:
Overall I think the problems pointed out do exist and this is an interesting proposal to use the manifold structure to align the data. But there are some weak points in this proposal:
1. It's well known that spectral methods are frequently sensitive to perturbations of the datasets. At the beginning of section 2.2 the authors propose to use a normalization to construct the kernel; however, I don't quite understand how this would solve the instability to perturbations.
2. In my opinion, equation (1) is the most interesting construction in this paper. The motivation for this tensor construction is not strong enough, and I would suggest putting more detail into it. My understanding is that the window functions g introduced here serve an invariance purpose, such that when the frequency slightly shifts (or rotates), the computed correlation should be stable. But the tradeoff of choosing a proper window should be discussed carefully, potentially with different datasets, since different datasets may have different sensitivity to perturbations across batches.
3. In section 3.1, the first motivating example is quite confusing. The authors demonstrated the alignment of two rotated MNIST digits, 3. For each digit, the underlying manifold is S1, and S1 is diffeomorphic to its rotation. So I'm not so sure what the underlying manifold geometry used to align them is. My understanding is that this alignment doesn't come from the S1 manifold but from some additional structure in the image signal. Fig 1(b) is also a little confusing, in that I couldn't figure out what's drawn there.
Overall, to use the underlying manifold structure to align data batches is an interesting and straightforward proposal, but I hope the authors can address these question carefully and make the argument stronger. |
ICLR | Title
Manifold Alignment via Feature Correspondence
Abstract
We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features have correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data.
1 INTRODUCTION
In biology and other natural science settings we often have the problem that data are measured from the same system but with different sensors or in different days where sensors are calibrated differently. This is often termed batch effect in biology and can include, for example, drastic variations between subjects, experimental settings, or even times of day when an experiment is conducted. In such settings it is important to globally and locally align the datasets such that they can be combined for effective further analysis. Otherwise, measurement artifacts may dominate downstream analysis. For instance, clustering the data will group samples by measurement time or sensor used rather than by biological or meaningful differences between datapoints.
Recent works regard the two datasets as views of the same system and construct a multiview diffusion geometry, but all of them require at least a partial bijection, if not a full one, between views (Lafon et al., 2006; Ham et al., 2003; 2005; Wang & Mahadevan, 2008; Tuia & Camps-Valls, 2016). Other works attempt to match data points directly in ambient space, or via local data geometry, and these can be very sensitive to differences in sampling density rather than data geometry (Haghverdi et al., 2018). Here, we present a principled approach, called harmonic alignment, to correct this type of effect based on the manifold assumption.
The manifold assumption holds that high dimensional data originates from an intrinsically low dimensional, smoothly varying space that is mapped via nonlinear functions to observable high dimensional measurements. Thus, we assume that the datasets are transformed versions of the same low dimensional manifold. We learn the manifolds separately from the two datasets using diffusion geometric approaches and then find an isometric transformation that maps one manifold to the other. Note that we are not aligning points to points; indeed, there may be sampling and density differences in the data. Instead, our manifold learning approach uses an anisotropic kernel that captures the geometry of the data to align, rather than the point-by-point matching done in other methods.
Our method involves first embedding each dataset separately into diffusion components, and then finding an isometric transformation that aligns these diffusion representations. To find such a transformation, we utilize the duality between diffusion coordinates and geometric harmonics, which act as generalized Fourier harmonics on the graph. The diffusion components are eigenvectors of a Markov-normalized data diffusion operator, whose eigenvalues indicate the frequency of each eigenvector. We seek a transformation from one set of eigenvectors to the other via feature correspondences in the data.
While datapoint correspondences may be difficult or impossible to obtain, since many biological measurements are destructive, feature correspondences are often available: for instance, single-cell measurements collected on the same device contain counts for the same genes, albeit affected by batch differences. Thus, when corresponding features are transformed via the graph Fourier transform (GFT) into diffusion coordinates, their representations should be similar, up to small frequency-proximal perturbations. For instance, features that vary slowly across the manifold should load onto low-frequency eigenvectors of the Markov matrix. This insight allows us to create a correlation matrix between the eigenvectors of one dataset and those of the other, based on the correlation of feature loadings onto the eigenvectors. Moreover, since the eigenvectors represent frequency harmonics, we need not compute the entire correlation matrix but only the near-diagonal values. This assumes the two manifolds are perturbed such that their low- and high-frequency eigenvector spaces remain similar. We then find a linear transformation that maps the eigenvectors of one space into those of the other by orthogonalizing this correlation matrix, which allows us to align the two datasets. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant to batch effects and sample-specific artifacts, and low-pass filter this geometry to denoise the unified manifold. Thus, in addition to aligning the manifolds, our method denoises them as well.
We demonstrate the results of our method on artificial manifolds created from rotated MNIST digits, corrupted MNIST digits, as well as on single-cell biological data measuring peripheral blood cells. In each case our method successfully aligns the manifolds such that they have appropriate neighbors within and across the two datasets. We show an application to transfer learning where a lazy classifier trained on one dataset is applied to the other dataset after alignment. Further, comparisons with recently developed methods such as the MNN-based method of Haghverdi et al. (2018) show significant improvements in performance and denoising ability.
2 HARMONIC ALIGNMENT
A typical and effective assumption in machine learning is that high dimensional data originates from an intrinsic low dimensional manifold that is mapped via nonlinear functions to observable high dimensional measurements; this is commonly referred to as the manifold assumption. Formally, let M^d be a hidden d dimensional manifold that is only observable via a collection of n ≫ d nonlinear functions f_1, . . . , f_n : M^d → R that enable its immersion in a high dimensional ambient space as F(M) = {f(z) = (f_1(z), . . . , f_n(z))^T : z ∈ M^d} ⊆ R^n, from which data is collected. Conversely, given a dataset X = {x_1, . . . , x_N} ⊂ R^n of high dimensional observations, manifold learning methods assume its data points originate from a sampling Z = {z_i}_{i=1}^N ⊂ M^d of the underlying manifold via x_i = f(z_i), i = 1, . . . , N, and aim to learn a low dimensional intrinsic representation that approximates the manifold geometry of M^d. A popular approach to this manifold learning task is to construct a diffusion geometry from the data (Coifman & Lafon, 2006) and embed the data into diffusion coordinates, which provide a natural global coordinate system derived from the Laplace operator over manifold geometries, as explained in Section 2.1. However, this approach, as well as other manifold learning methods, implicitly assumes that the feature functions {f_j}_{j=1}^n represent data collection technologies (e.g., sensors or markers) that operate in a consistent manner on all samples in Z. While this assumption may be valid in some fields, biological data collection (and more generally, data collection in the empirical sciences) is often highly affected by a family of phenomena known as batch effects, which introduce nonnegligible variance between different data batches due to various uncontrollable factors. These include, for example, drastic variations between subjects, experimental settings, or even the times of day at which an experiment is conducted.
Therefore, in such settings, one should consider a collection of $S$ samples $\{X^{(s)}\}_{s=1}^{S}$, each originating from feature functions $\{f_j^{(s)}\}_{j=1}^{n}$ that aim to measure the same quantities in the data, but are also affected by sample-dependent artifacts. While each sample can be analyzed to find its intrinsic structure, their union into a single dataset $X = \bigcup_{s=1}^{S} X^{(s)}$ often yields an incoherent geometry biased by batch effects, in which neither the relations between samples nor those within each sample can be clearly seen. To address such artifacts, and to construct a unified geometry of multiple batches (i.e., samples or datasets) together, we propose to first embed each batch separately in diffusion coordinates, and then find an isometric transformation that aligns these diffusion representations.
In order to find such a transformation, we utilize the duality between diffusion coordinates and geometric harmonics that act as generalized Fourier harmonics, as shown in graph signal processing (Shuman et al., 2013). As explained in Section 2.2, this duality allows us to capture cross-batch relations between diffusion coordinates, and to orthogonalize the resulting matrix to provide a map between batch-specific diffusion representations. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant to both batch effects and batch-specific artifacts. While our approach generalizes naturally to any number of batches, for simplicity, we focus our formulation here on the case of two batches.
2.1 DIFFUSION GEOMETRY & GRAPH HARMONICS
The first step in our approach is to capture the intrinsic geometry of each batch $X^{(s)}$ using the diffusion maps method from Coifman & Lafon (2006), which non-linearly embeds the data in a new coordinate system (i.e., diffusion coordinates) that is often considered as representing a data manifold or, more generally, a diffusion geometry over the data. The diffusion maps construction starts by considering local similarities defined via a kernel $K(x, y)$, $x, y \in X^{(s)}$, that captures local neighborhoods in the data. We note that a popular choice for $K$ is the Gaussian kernel $e^{-\|x-y\|^2/\sigma}$, where $\sigma > 0$ is interpreted as a user-configurable neighborhood size. Next, these similarities are normalized to define transition probabilities $p(x, y) = K(x,y)/\|K(x,\cdot)\|_1$ that are organized in an $N \times N$ row-stochastic matrix $P$ that describes a Markovian diffusion process over the intrinsic geometry of the data. Finally, a diffusion map is defined by taking the eigenvalues $1 = \mu_1 \ge \mu_2 \ge \cdots \ge \mu_N$ and corresponding eigenvectors $\{\varphi_j\}_{j=1}^{N}$ of $P$, and mapping each data point $x \in X^{(s)}$ to an $N$-dimensional vector $\Phi_t(x) = [\mu_1^t \varphi_1(x), \dots, \mu_N^t \varphi_N(x)]^T$, where $t$ represents a diffusion time (i.e., the number of transitions considered in the diffusion process). In this work, we denote the diffusion map for the entire dataset $X^{(s)}$ as $\Phi_t^{(s)}$. We refer the reader to Coifman & Lafon (2006) for further details and mathematical derivation, but note that in general, as $t$ increases, most of the eigenvalue weights $\mu_j^t$, $j = 1, \dots, N$, become numerically negligible, and thus truncated diffusion map coordinates (i.e., using only the nonnegligible ones) can be used for dimensionality reduction purposes.
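To make this construction concrete, the following is a minimal numpy sketch of the diffusion map described above; the bandwidth sigma, diffusion time t, and number of retained components are illustrative choices rather than values prescribed here.

import numpy as np

def diffusion_map(X, sigma=1.0, t=1, n_components=10):
    # Gaussian kernel K(x, y) = exp(-||x - y||^2 / sigma) on all pairs.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / sigma)
    d = K.sum(axis=1)                      # degrees, ||K(x, .)||_1
    # Eigendecompose the symmetric conjugate D^{-1/2} K D^{-1/2} of the
    # Markov matrix P = D^{-1} K; its eigenvectors psi give phi = D^{-1/2} psi.
    A = K / np.sqrt(np.outer(d, d))
    mu, psi = np.linalg.eigh(A)
    order = np.argsort(-mu)[:n_components]
    mu, psi = mu[order], psi[:, order]
    phi = psi / np.sqrt(d)[:, None]
    # Diffusion coordinates Phi_t(x) = [mu_1^t phi_1(x), ..., mu_k^t phi_k(x)].
    return (mu ** t) * phi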
Much work has been done in various fields on applications of diffusion maps as a whole, as well as individual diffusion coordinates (i.e., eigenvectors of $P$), in data analysis (Farbman et al., 2010; Barkan et al., 2013; Mahmoudi & Sapiro, 2009; Angerer et al., 2015; Haghverdi et al., 2016). In particular, as discussed in Coifman & Lafon (2006) and Nadler et al. (2006), the diffusion coordinates are closely related to eigenvectors of Laplace operators on manifolds, as well as their discretizations as eigenvectors of graph Laplacians, which were studied previously, for example, in Belkin & Niyogi (2002). Indeed, the similarities measured in $K$ can be considered as determining edge weights of a graph structure defined over the data. Formally, we define this graph by considering every data point in $X$ as a vertex on the graph, and then defining weighted edges between them via an $N \times N$ adjacency matrix $W$ with $W_{i,j} = K(x_i, x_j)$, $i, j = 1, \dots, N$. Then, the (normalized) graph Laplacian is defined as $L = I - D^{-1/2} W D^{-1/2}$, with $D$ being a diagonal degree matrix (i.e., with $D_{i,i} = \sum_{j=1}^{N} W_{i,j}$). Finally, it is easy to see that $L = I - D^{1/2} P D^{-1/2}$, and thus it can be verified that the eigenvectors of $L$ can be written as $\psi_j = D^{1/2} \varphi_j$, with corresponding eigenvalues $\lambda_j = 1 - \mu_j$. It should be noted that if data is uniformly sampled from a manifold (as considered in Belkin & Niyogi, 2002), these two sets of eigenvectors coincide and the diffusion coordinates can be considered as Laplacian eigenvectors (or eigenfunctions, in continuous settings).
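As a sanity check of this duality, the following small numpy snippet (a sketch with arbitrary synthetic data) verifies numerically that $P\varphi_j = (1-\lambda_j)\varphi_j$ with $\varphi_j = D^{-1/2}\psi_j$:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
W = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1))
D = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(D, D))   # L = I - D^{-1/2} W D^{-1/2}
lam, psi = np.linalg.eigh(L)
phi = psi / np.sqrt(D)[:, None]                    # phi_j = D^{-1/2} psi_j
P = W / D[:, None]                                 # Markov matrix P = D^{-1} W
assert np.allclose(P @ phi, phi * (1 - lam))       # eigenvalue relation lambda = 1 - mu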
A central tenet of graph signal processing is that the Laplacian eigenfunctions $\{\psi_j\}_{j=1}^{N}$ can be regarded as generalized Fourier harmonics (Shuman et al., 2013; 2016), i.e., graph harmonics.
Algorithm 1 Manifold Alignment via Feature Correspondence
Input: Datasets $\mathcal{X} = \{X^{(1)}, X^{(2)}\}$, where $X^{(s)}$ has $N^{(s)}$ observations by $d^{(s)}$ features
Output: Aligned graph Laplacian $L^{(Y)}$
1: for $X^{(s)} \in \mathcal{X}$ do
2:   Compute the anisotropic weight matrix $W^{(s)}$ (Section 2.2) and the degree matrix $D^{(s)}$
3:   Construct the normalized graph Laplacian $L^{(s)}$ and its truncated eigensystem $\bar{\Lambda}^{(s)} = \mathrm{diag}\big[\lambda_i^{(s)}\big]_{i=2}^{N^{(s)}}$, $\bar{\Psi}^{(s)} = \big[\psi_i^{(s)}\big]_{i=2}^{N^{(s)}}$, with $L^{(s)}\psi_i^{(s)} = \lambda_i^{(s)}\psi_i^{(s)}$ and $\bar{\Lambda}^{(s)} \succ 0$
4:   Compute the diffusion map $\Phi^{(s)} = \bar{\Psi}^{(s)} e^{-t\bar{\Lambda}^{(s)}}$
5:   Compute the spectral-domain wavelet transform tensor $\hat{H}^{(s)}_{i,j,k}$ (Equation 1)
6: end for
7: Compute the intraband harmonic correlations $M'_{:,:,k}$ between the datasets (Section 2.2)
8: Compute the total interband correlation $M = \sum_{k=1}^{\tau} M'_{:,:,k}$
9: Orthogonalize $M$ via SVD: $M = USV^T$, $T = UV^T$
10: Construct the transformed embedding matrix $E$ as in Equation 2
11: Embed $E$ using a Gaussian kernel to obtain $L^{(Y)}$
Indeed, a classic result in spectral graph theory shows that the discrete Fourier basis can be derived as Laplacian eigenvectors of ring graphs (see, e.g., Olfati-Saber, 2007, Proposition 10). Based on this interpretation, a graph Fourier transform (GFT) is defined on graph signals (i.e., functions $f : X^{(s)} \to \mathbb{R}$ over the vertices of the graph) as $\hat{f}(\lambda_j) = \langle f, \psi_j \rangle$, $j = 1, \dots, N$, similar to the definition of the classic discrete Fourier transform (DFT). Further, we can also write the GFT in terms of the diffusion coordinates as $\hat{f}(\lambda_j) = \langle f, D^{1/2}\varphi_j \rangle$, given their relation to graph harmonics. Therefore, up to appropriate weighting, the diffusion coordinates can conceptually be interpreted as intrinsic harmonics of the data, and conversely, the graph harmonics can be considered (conceptually) as intrinsic coordinates of data manifolds. In Section 2.2, we leverage this duality between coordinates and harmonics in order to capture relations between data manifolds of individual batches, and then use them in Section 2.3 to align their intrinsic coordinates and construct a unified data manifold over them.
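In code, the GFT of a set of features is simply a projection onto the harmonics; a minimal sketch follows, assuming psi holds the Laplacian eigenvectors as columns, as computed in the earlier sketches.

import numpy as np

def graph_fourier_transform(F, psi):
    # F: (N, n) matrix of n graph signals (features) over N vertices;
    # psi: (N, m) matrix of graph harmonics as columns.
    # Returns the (m, n) coefficient matrix with entries <f_j, psi_i>.
    return psi.T @ F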
2.2 CROSS-GRAPH HARMONIC CORRELATION
We now turn our attention to the relation between two batches $X^{(s_1)}$, $X^{(s_2)}$ via their intrinsic data manifold structure, as it is captured by diffusion coordinates or, equivalently, graph harmonics. We note that, as discussed extensively in Coifman & Lafon (2006), a naïve construction of an intrinsic data graph with a Gaussian kernel (as described, for simplicity, in Section 2.1) may be severely distorted by density variations in the data. Such distortion would be detrimental in our case, as the resulting diffusion geometry and its harmonic structure would no longer reflect a stable (i.e., batch-invariant) intrinsic “shape” of the data. Therefore, we follow the normalization suggested there to separate data geometry from density, and define a graph structure (i.e., adjacency matrix) over each batch via an anisotropic kernel given by
$$W^{(s)}_{i,j} = \frac{K\big(x_i^{(s)}, x_j^{(s)}\big)}{\big\|K\big(x_i^{(s)}, \cdot\big)\big\|_1^{1/2}\, \big\|K\big(x_j^{(s)}, \cdot\big)\big\|_1^{1/2}}, \qquad i, j = 1, \dots, N^{(s)}, \quad s \in \{s_1, s_2\},$$
where $K$ is the previously defined Gaussian kernel. This graph structure is then used, as previously described, to construct the intrinsic harmonic structure given by $\{\psi_j^{(s)}\}_{j=1}^{N^{(s)}}$ on each batch. While the intrinsic geometry constructed by our graph structures should describe similar “shapes” for the two datasets, there is no guarantee that their computed intrinsic coordinates will match. Indeed, it is easy to see how various permutations of these coordinates can be obtained if some eigenvalues have multiplicities greater than one (i.e., their monotonic order is no longer deterministic), but even beyond that, in practical settings batch effects often result in various misalignments
(e.g., rotations or other affine transformations) between derived intrinsic coordinates. Therefore, to properly recover relations between multiple batches, we aim to quantify relations between their coordinates, or more accurately, between their graph harmonics.
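A minimal sketch of the anisotropic density normalization above (with the Gaussian kernel and an illustrative bandwidth):

import numpy as np

def anisotropic_affinity(X, sigma=1.0):
    # Gaussian kernel followed by the anisotropic normalization above,
    # dividing out the square root of the kernel density q_i = ||K(x_i, .)||_1.
    K = np.exp(-np.sum((X[:, None] - X[None, :]) ** 2, axis=-1) / sigma)
    q = K.sum(axis=1)
    return K / np.sqrt(np.outer(q, q))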
We note that if we had even a partial overlap between data points in the two batches, this task would be trivially enabled by taking correlations between these harmonics. However, given that here we assume a setting without such predetermined overlap, we have to rely on other properties that are independent of individual data points. To this end, we now consider the feature functions $\{f_j^{(s)}\}_{j=1}^{n}$ and our initial assumption that corresponding functions aim to measure equivalent quantities in the batches (or datasets). Therefore, while they may differ in their original raw form, we expect their expression over the intrinsic structure of the data (e.g., as captured by GFT coefficients) to correlate, at least partially, between batches. We use this property to compute cross-batch correlations between graph harmonics based on the GFT of corresponding data features. To formulate this, it is convenient to extend the definition of the GFT from functions (or vectors) to matrices, by slight abuse of the inner product notation, as $\hat{X}^{(s)} = \langle X^{(s)}, \Psi^{(s)} \rangle = [\Psi^{(s)}]^T X^{(s)}$, where $X^{(s)}$ consists of data features as columns and $\Psi^{(s)}$ has graph harmonics as columns (both with rows representing data points).
Notice that the resulting Fourier matrix $\hat{X}^{(s)}$, for each batch, no longer depends on individual data points, and instead it expresses the graph harmonics in terms of data features. Therefore, we can now use this matrix to formulate cross-batch harmonic correlations by considering inner products between rows of these matrices. Further, we need not consider all the correlations between graph harmonics, since we also have access to their corresponding frequency information, expressed via the associated Laplacian eigenvalues $\{\lambda_j^{(s)}\}_{j=1}^{N^{(s)}}$. Therefore, instead of computing correlations between every pair of harmonics across batches, we only consider them within local frequency bands, defined via appropriate graph filters, as explained in the following.
Let $g(t)$ be a smooth window defined on the interval $[-0.5, 0.5]$ as $g(t) = \sin\big(0.5\pi \cos^2(\pi t)\big)$.
Then, by translating this window along the real line, we obtain $\tau$ equally spaced wavelet windows that can be applied to the eigenvalues $\lambda_j^{(s)}$ in order to smoothly partition the spectrum of each data graph. This construction is known as the itersine filter bank, which can be shown to be a tight frame (Perraudin et al., 2014). The resulting windows $g_{\xi_i}(\lambda)$ are centered at frequencies $\Xi = \{\xi_1, \dots, \xi_\tau\}$. The generating function for these wavelets ensures that each $g_{\xi_i}$ halfway overlaps with $g_{\xi_{i+1}}$. This property implies that there are smooth transitions between the weights of consecutive frequency bands. Furthermore, as a tight frame, this filter bank has the property that $\sum_{i=1}^{\tau} g_{\xi_i}(\lambda) = 1$ for any eigenvalue. This choice ensures that any filtering we do using the filter bank $G = \{g_{\xi_i}\}_{i=1}^{\tau}$ will behave uniformly across the spectrum. Together, these two properties imply that cross-batch correlations between harmonics within and between bands across the respective batch spectra will be robust. To obtain such bandlimited correlations we construct the following filterbank tensor
$$\hat{H}^{(s)}_{i,j,k} = g_{\xi_k}\big(\lambda_i^{(s)}\big)\, \big[\psi_i^{(s)}\big]^T X^{(s)}_{:,j} \quad \text{for } 2 \le i \le N^{(s)}. \qquad (1)$$
Each slice $\hat{H}^{(s)}_{:,:,k}$ of this tensor corresponds to the Fourier matrix $\hat{X}^{(s)}$ with rows scaled by $g_{\xi_k}$. Then, we use these filterbank tensors to compute bandlimited correlations via $M'_{:,:,k} = \hat{H}^{(s_1)}_{:,:,k} \big[\hat{H}^{(s_2)}_{:,:,k}\big]^T$, and finally merge these to generate a combined matrix $M^{(s_1,s_2)} = \sum_{k=1}^{\tau} M'_{:,:,k}$, which we refer to as the harmonic (cross-batch) correlation matrix. This step, when combined with the half-overlaps discussed above, allows flexibility in aligning harmonics across bands, which is demonstrated in practice in Section 2.3.
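To summarize this section operationally, the following sketch computes the itersine windows, the filterbank tensor of Equation 1, and the harmonic correlation matrix $M$. The window spacing and the number of bands tau are illustrative choices, and psi1, lam1, X1 (and likewise for the second batch) denote the truncated harmonics, eigenvalues, and feature matrices from the earlier sketches.

import numpy as np

def itersine_windows(lam, tau):
    # tau half-overlapping itersine windows g(t) = sin(0.5*pi*cos(pi*t)^2),
    # translated along the spectrum and evaluated at the eigenvalues lam.
    lam = np.asarray(lam, dtype=float)
    scale = lam.max() / (tau - 1)
    G = np.zeros((tau, len(lam)))
    for k in range(tau):
        t = (lam - k * scale) / (2 * scale)       # window k is supported on |t| <= 0.5
        inside = np.abs(t) <= 0.5
        G[k, inside] = np.sin(0.5 * np.pi * np.cos(np.pi * t[inside]) ** 2)
    return G

def harmonic_correlation(psi1, lam1, X1, psi2, lam2, X2, tau=8):
    F1, F2 = psi1.T @ X1, psi2.T @ X2             # GFT of corresponding features
    G1, G2 = itersine_windows(lam1, tau), itersine_windows(lam2, tau)
    M = np.zeros((psi1.shape[1], psi2.shape[1]))
    for k in range(tau):                          # Eq. (1), then M' summed over bands
        M += (G1[k][:, None] * F1) @ (G2[k][:, None] * F2).T
    return M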
2.3 ISOMETRIC ALIGNMENT
Given the harmonic correlation matrix $M^{(s_1,s_2)}$, we now define an isometric transformation between the intrinsic coordinate systems of the two data manifolds. Such a transformation ensures our alignment fits the two coordinate systems together without breaking the rigid structure of each batch, thus preserving their intrinsic structure. To formulate such a transformation, we recall that isometric transformations are given by orthogonal matrices, and thus we can rephrase our task as finding the best approximation of $M^{(s_1,s_2)}$ by an orthogonal matrix. Such an approximation is a well-studied problem, dating back to Schönemann (1966), who showed that it can be obtained directly from the singular value decomposition $M = USV^T$ by taking $T^{(s_1,s_2)} = UV^T$.
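In code, this orthogonalization is a one-liner around the SVD (a sketch; for rectangular $M$ this yields the closest partial isometry):

import numpy as np

def isometrize(M):
    # Closest orthogonal matrix to M in Frobenius norm (orthogonal Procrustes).
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt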
Finally, given the isometric transformation defined by $T^{(s_1,s_2)}$, we can now align the data manifolds of the two batches, and define a unified intrinsic coordinate system for the entire data. While such alignment could equivalently be phrased in terms of diffusion coordinates $\{\varphi_j^{(s)}\}_{j=1}^{N^{(s)}}$ or harmonic coordinates $\{\psi_j^{(s)}\}_{j=1}^{N^{(s)}}$, we opt here for the latter, as it relates more directly to the computed harmonic correlations. Therefore, we construct the transformed embedding matrix $E$ as
$$E = \begin{bmatrix} \bar{\Psi}^{(s_1)} & \bar{\Psi}^{(s_1)} T \\ \bar{\Psi}^{(s_2)} T^T & \bar{\Psi}^{(s_2)} \end{bmatrix} \exp\left(-t \begin{bmatrix} \bar{\Lambda}^{(s_1)} & 0 \\ 0 & \bar{\Lambda}^{(s_2)} \end{bmatrix}\right), \qquad (2)$$
where we drop the superscripts for $T$, as they are clear from context, $\bar{\Lambda}^{(s)}$ are diagonal matrices that consist of the nonzero Laplacian eigenvalues of each view, and $\bar{\Psi}^{(s)}$ consist of the corresponding eigenvectors (i.e., harmonics) as their columns. We note that the truncated zero eigenvalues correspond to zero frequencies (i.e., flat constant harmonics), and therefore they only encode global shifts that we aim to remove in the alignment process anyway. Accordingly, this truncation is also applied to the harmonic correlation matrix $M^{(s_1,s_2)}$ prior to its orthogonalization. Finally, we note that this construction is equivalent to the diffusion map, albeit using a slightly different derivation of a discretized heat kernel (popular, for example, in graph signal processing works such as Shuman et al., 2016), with the parameter $t$ again serving a purpose analogous to diffusion time.
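Putting Equation 2 into code, the following is a sketch of the aligned embedding; variable names follow Algorithm 1, with psi1, psi2 the truncated harmonics, lam1, lam2 the corresponding eigenvalues, and T the transformation from the previous step.

import numpy as np

def aligned_embedding(psi1, lam1, psi2, lam2, T, t=1.0):
    # Block matrix of Eq. (2): each batch keeps its own harmonics on the
    # diagonal blocks and is mapped into the other batch's harmonic indices
    # on the off-diagonal blocks via T.
    top = np.hstack([psi1, psi1 @ T])
    bottom = np.hstack([psi2 @ T.T, psi2])
    E = np.vstack([top, bottom])
    # Right-multiplication by exp(-t * diag(lam1, lam2)) weights the columns.
    return E * np.exp(-t * np.concatenate([lam1, lam2]))

A unified geometry is then obtained by building a Gaussian-kernel graph over the rows of E, as in step 11 of Algorithm 1.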
3 EMPIRICAL RESULTS
3.1 ROTATIONAL ALIGNMENT
As a proof of principle, we first demonstrate harmonic alignment of two circular manifolds. To generate these manifolds, we rotated two different MNIST examples of the digit '3' through 360 degrees and sampled a point for each degree (see figure 1a). As we noted in section 2.2, the manifold coordinates obtained by diffusion maps are invariant to the phase of the data. In this example it is clear that each '3' manifold is out of phase with the other.
Figure 1b demonstrates the simple rotation that is learned by harmonic alignment between the two embeddings. On the left side, we see the out-of-phase embeddings. Taking nearest neighbors in this space illustrates the disconnection between the embeddings: nearest neighbors are only formed for within-sample points. After alignment, however, we see that the embeddings are in phase with each other, because nearest neighbors in the aligned space span both samples and are in the same orientation as each other.
3.2 FEATURE CORRUPTION
Next, we assessed the ability of harmonic alignment to recover $k$-neighborhoods after random feature corruption (figure 2).
To do this, we drew random samples $X^{(1)}$ and $X^{(2)}$ from MNIST with $N^{(1)} = N^{(2)} = 1000$. Then, for each trial in this experiment, we drew $784^2$ samples from a unit normal distribution to create a $784 \times 784$ random matrix. We orthogonalized this matrix to yield the corruption matrix $O_0$. To vary the amount of feature corruption, we then randomly substituted $\lfloor 0.01 \cdot p \cdot 784 \rfloor$ columns from the identity matrix $I$ to make $O_p$. Right multiplication of $X^{(2)}$ by this matrix yielded corrupted images with only $p\%$ preserved pixels (figure 2b, 'corrupted'). To assess the recovery of $k$-neighborhoods, we then performed a lazy classification on $X^{(2)}O_p$ by only using the labels of its neighbors in $X^{(1)}$. The results of this experiment, performed for $p = \{0, 5, 10, \dots, 95, 100\}$, are reported in figure 2a. For robustness, at each $p$ we sampled three different non-overlapping pairs $X^{(1)}$ and $X^{(2)}$, and for each pair we sampled three $O_p$ matrices, each with random identity columns, for a total of nine trials per $p$.
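A sketch of this corruption protocol (the construction follows the description above; function and variable names are ours):

import numpy as np

def corruption_matrix(p, d=784, seed=None):
    rng = np.random.default_rng(seed)
    # Orthogonalize a d x d Gaussian random matrix to obtain O_0.
    O, _ = np.linalg.qr(rng.normal(size=(d, d)))
    # Substitute floor(0.01 * p * d) random identity columns to make O_p,
    # so that right-multiplying by O_p preserves p% of the pixels.
    keep = rng.choice(d, size=int(0.01 * p * d), replace=False)
    O[:, keep] = np.eye(d)[:, keep]
    return O

# Corrupted batch: X2_corrupted = X2 @ corruption_matrix(p=25)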
In general, unaligned classification, mutual nearest neighbors (MNN), and harmonic alignment with any filter set cannot recover $k$-neighborhoods under total corruption; 10% serves as a baseline accuracy that results from the rotation having 10% overlap in the manifold space. On the other hand, for small filter choices we observe that harmonic alignment quickly recovers $\ge 80\%$ accuracy and outperforms MNN and unaligned classification consistently, except under high correspondence.
Next we examined the ability of harmonic alignment to reconstruct corrupted data (figure 2b). We performed the same corruption procedure as before with $p = 25$ and selected 10 examples of each digit in MNIST. We show the ground truth from $X^{(2)}$ and the corrupted result $X^{(2)}O_{25}$ in figure 2b. Then, a reconstruction was performed by setting each pixel in a new image to the dominant-class average of the ten nearest $X^{(1)}$ neighbors. In the unaligned case we see that most examples turn into smeared fives or ones; this likely reflects the intersection formed by $X^{(1)}$ and $X^{(2)}O_{25}$ that accounts for the 10% baseline accuracy in figure 2a. On the other hand, the reconstructions produced by harmonic alignment resemble their original input examples.
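A sketch of this reconstruction step, under our reading that each corrupted image is replaced by the mean of its aligned neighbors from the dominant class (E1, E2 denote the aligned embeddings of the clean and corrupted batches):

import numpy as np

def reconstruct(E1, E2, X1_images, labels1, k=10):
    out = np.zeros((len(E2), X1_images.shape[1]))
    for i, e in enumerate(E2):
        nn = np.argsort(np.sum((E1 - e) ** 2, axis=1))[:k]   # k nearest X1 neighbors
        dominant = np.bincount(labels1[nn]).argmax()          # their dominant class
        out[i] = X1_images[nn[labels1[nn] == dominant]].mean(axis=0)
    return out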
3.3 COMPARISONS
In figure 3 we compare the runtime, $k$-nn accuracy, and transfer learning capabilities of our method with two other contemporary alignment methods. First, we examine the unsupervised algorithm proposed by Wang & Mahadevan (2009) for generating weight matrices between two different samples. The algorithm first creates a local distance matrix of size $k$ around each point and its $k-1$ nearest neighbors (four, for $k = 5$). Then it computes an optimal match between the $k$-nn distance matrices of each pair of points in $X^{(1)}$ and $X^{(2)}$ by comparing all $k!$ permutations of the $k$-nn matrices and computing the minimal Frobenius norm between such permuted matrices. We report runtime results for $k = 5$, as $k = 10$ failed to complete after running for 8 hours. Because the Wang & Mahadevan (2009) method merely gives a weight matrix that must be used with separate algorithms for computing the final features, we report accuracy results using their implementation. Regardless of input size, we were unable to recover $k$-neighborhoods for datasets with 35% uncorrupted columns (figure 3b), despite the method's computational cost (figure 3a).
A more scalable approach to manifold alignment has emerged recently in the computational biology literature (Haghverdi et al., 2018). This approach uses mutual nearest neighbor (MNN) matching to compute a mapping between datasets, based on the assumption that if two points are truly neighbors they will resemble each other in both datasets. Because this approach amounts to building a $k$-nearest neighbors graph for each dataset and then choosing a set of neighbors between each dataset, MNN scales comparably to our method (figure 3a). Additionally, MNN is able to recover 20-30% of $k$-neighborhoods when only 35% of features match (figure 3b); this is an improvement over Wang & Mahadevan (2009), but it is substantially lower than what harmonic alignment achieves. We note that the performance of harmonic alignment was correlated with input size, whereas MNN did not improve with more points.
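For reference, a minimal sketch of the MNN matching rule itself (not the full batch-correction pipeline of Haghverdi et al., 2018):

import numpy as np

def mutual_nearest_neighbors(X1, X2, k=5):
    d = np.sum((X1[:, None] - X2[None, :]) ** 2, axis=-1)   # cross-batch distances
    nn12 = np.argsort(d, axis=1)[:, :k]     # for each x1, its k nearest points in X2
    nn21 = np.argsort(d, axis=0)[:k, :].T   # for each x2, its k nearest points in X1
    return [(i, j) for i in range(len(X1)) for j in nn12[i] if i in nn21[j]]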
3.4 TRANSFER LEARNING
An interesting use of manifold alignment algorithms is transfer learning. In this setting, an algorithm is trained to perform well on one dataset, and the goal is to extend the algorithm to the other dataset after alignment. In figure 3c we explore the utility of harmonic alignment in transfer learning and compare it to MNN and the method proposed by Wang & Mahadevan (2009).
In this experiment, we first draw 1,000 uncorrupted examples of MNIST digits and construct a diffusion map to use as our training set. Next, we took 65%-corrupted, unlabeled points (see section 3.2) in batches of 1,000, 2,000, 4,000, and 8,000 as a test set, and performed lazy classification on them using the labels from the uncorrupted examples. Even at 8:1 test:training sample sizes, harmonic alignment outperformed Wang & Mahadevan (2009) and MNN, aligning up to 60% correct $k$-neighborhoods. In addition to showing the use of manifold alignment in transfer learning, this demonstrates the robustness of our algorithm to dataset imbalances.
3.5 BIOLOGICAL DATA
To illustrate the need for robust manifold alignment in computational biology, we turn to a simple real-world example obtained from Amodio et al. (2018) (figure 4). This dataset was collected by mass cytometry (CyTOF) of peripheral blood mononuclear cells (PBMC) from patients who contracted dengue fever. Subsequently, the Montgomery lab at Yale University experimentally introduced these PBMCs to Zika virus strains.
The canonical response to dengue infection is upregulation of interferon gamma (IFNγ) (Chesler & Reiss, 2002; Chakravarti & Kumaria, 2006; Braga et al., 2001). During early immune response, IFNγ works in tandem with acute-phase cytokines such as tumor necrosis factor α (TNFα) to induce febrile response and inhibit viral replication (Ohmori et al., 1997). In the PBMC dataset, we thus expect to see upregulation of these two cytokines together, which we explore in figure 4.
In figure 4a, we show the relationship between IFNγ and TNFα without denoising. Note that there is a substantial difference between the IFNγ distributions of sample 1 and sample 2 (Earth Mover’s Distance (EMD) = 2.699). In order to identify meaningful relationships in CyTOF data, it is common to denoise it first. We used a graph filter to denoise the cytokine data (Van Dijk et al., 2018). The results of this denoising are shown in figure 4b. This procedure introduced more technical artifacts by enhancing the difference between batches, as seen by the increased EMD (3.127) between the IFNγ distributions of both patients. This is likely due to a substantial connectivity difference between the two batch subgraphs in the total graph.
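For concreteness, the following is a sketch of the two operations referenced here: low-pass graph-filter denoising by diffusing the cytokine values with the Markov matrix P (in the spirit of Van Dijk et al., 2018, not their exact implementation), and the EMD between the one-dimensional IFNγ marginals (scipy's 1-D Wasserstein distance coincides with EMD in this case).

import numpy as np
from scipy.stats import wasserstein_distance

def graph_denoise(P, F, t=3):
    # Diffuse the feature matrix F for t steps: a low-pass filter on the graph.
    return np.linalg.matrix_power(P, t) @ F

def batch_emd(f_batch1, f_batch2):
    # Earth Mover's Distance between two empirical 1-D distributions.
    return wasserstein_distance(f_batch1, f_batch2)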
Next, we performed harmonic alignment of the two patient profiles. We show the results of this in figure 4c. Harmonic alignment corrected the difference between the IFNγ distributions and restored the canonical correlation of IFNγ and TNFα. This example illustrates the utility of harmonic alignment for biological data, where it can be used for integrated analysis of data collected across different experiments, patients, and time points.
4 CONCLUSION
We presented a novel method for aligning, or batch-normalizing, two datasets that involves learning and aligning their intrinsic manifold dimensions. Our method leverages the fact that common or corresponding features in the two datasets should have similar harmonics on the graph of the data. Our harmonic alignment method finds an isometric transformation that maximizes the similarity of the frequency harmonics of common features. Results show that our method successfully aligns artificially misaligned data as well as biological data containing batch effects. Our method has the advantages that it aligns manifold geometry rather than density (and thus is insensitive to sampling differences in the data) and that it denoises the datasets to obtain alignments of significant manifold dimensions rather than noise. Future applications of harmonic alignment can include integration of data from different measurement types performed on the same system, where features have known correlations. | 1. What is the main contribution of the paper regarding noisy measurements and data batches?
2. What are the strengths of the proposed approach in terms of aligning manifolds accurately and achieving superior classification results?
3. Do you have any concerns about the experimental section, particularly regarding comparisons with other algorithms and dataset complexity?
4. How can the authors improve their experiments to provide a clearer understanding of the proposed method's capabilities? | Review | Review
This paper tackles the problem of noisy measurements collected from the same samples, which may differ across experimental data-collection scenarios, especially in biology. Given S different data batches of the same samples, this paper aims to perform manifold alignment between the batches using feature correspondences.
The motivation is that even though data points from each experiment may differ due to noise coming from different factors, they should at least exhibit some correlations in the feature space. Such correlation is exploited by embedding each batch of data in a space represented by diffusion coordinates computed using an anisotropic kernel. Inter-batch correlations are computed between the diffusion coordinates of each batch and are used to construct an isometric transformation between each pair of diffusion coordinates of the batch datasets. The latter transformation is used to construct an aligned graph Laplacian in which each batch has a similar representation.
This paper tackles an important problem using a novel approach: instead of aligning each pair of data points, it attempts to align the geometries of batch-specific manifolds. The authors show through a toy experiment on MNIST that the proposed algorithm is indeed able to align manifolds accurately. Moreover, it is also able to perform manifold denoising and achieves superior classification results compared to two existing approaches. Finally, the proposed algorithm is applied to a practical biological case and shows that it is indeed able to align data from two different immune profiles.
Although the paper tackles an important problem efficiently, I am concerned about the experimental section and think it would be improved by taking the following points into account:
• The proposed approach is compared only with two other algorithms. For example, one could compare the denoising ability to [Hein and Maier 2006: manifold denoising] or [Cui et al: Generalized Unsupervised Manifold Alignment] for manifold alignment. Furthermore, it would be informative to see how the proposed approach compares to recent domain adaptation approaches, as they attempt to map data from different domains into a shared representation, which is rather similar to what the proposed algorithm is doing.
• Experiments are all performed using rather simple datasets. It would be interesting to see how the algorithms would perform on slightly more complicated images, such as CIFAR-10 for example.
• It is not clear what the next steps are after obtaining Eq. 2.
• It is not clear what the number of filters is in figure 2a.
• Figure 1 needs to be clarified further: What do DM1, DM2, DM3 mean? What are the columns and rows in figure 1b (bottom)? |
ICLR | Title
Manifold Alignment via Feature Correspondence
Abstract
We propose a novel framework for combining datasets via alignment of their associated intrinsic dimensions. Our approach assumes that the two datasets are sampled from a common latent space, i.e., they measure equivalent systems. Thus, we expect there to exist a natural (albeit unknown) alignment of the data manifolds associated with the intrinsic geometry of these datasets, which are perturbed by measurement artifacts in the sampling process. Importantly, we do not assume any individual correspondence (partial or complete) between data points. Instead, we rely on our assumption that a subset of data features have correspondence across datasets. We leverage this assumption to estimate relations between intrinsic manifold dimensions, which are given by diffusion map coordinates over each of the datasets. We compute a correlation matrix between diffusion coordinates of the datasets by considering graph (or manifold) Fourier coefficients of corresponding data features. We then orthogonalize this correlation matrix to form an isometric transformation between the diffusion maps of the datasets. Finally, we apply this transformation to the diffusion coordinates and construct a unified diffusion geometry of the datasets together. We show that this approach successfully corrects misalignment artifacts, and allows for integrated data.
1 INTRODUCTION
In biology and other natural-science settings, we often face the problem that data are measured from the same system but with different sensors or on different days when sensors are calibrated differently. This is often termed a batch effect in biology and can include, for example, drastic variations between subjects, experimental settings, or even the times of day when an experiment is conducted. In such settings it is important to globally and locally align the datasets such that they can be combined for effective further analysis. Otherwise, measurement artifacts may dominate downstream analysis. For instance, clustering the data will group samples by measurement time or sensor used rather than by biological or meaningful differences between data points.
Recent works regard the two datasets as views of the same system and construct a multiview diffusion geometry, but all of them require at least a partial bijection, if not a full one, between views (Lafon et al., 2006; Ham et al., 2003; 2005; Wang & Mahadevan, 2008; Tuia & Camps-Valls, 2016). Other works attempt to match data points directly in ambient space, or by local data geometry, and these can be very sensitive to differences in sampling density rather than data geometry (Haghverdi et al., 2018). Here, we present a principled approach, called harmonic alignment, to correct this type of effect based on the manifold assumption.
The manifold assumption holds that high dimensional data originates from an intrinsically low dimensional, smoothly varying space that is mapped via nonlinear functions to observable high dimensional measurements. Thus, we assume that the datasets come from transformed versions of the same low dimensional manifold. We learn the manifolds separately from the two datasets using diffusion geometric approaches and then find an isometric transformation to map from one manifold to the other. Note that we are not aligning points to points. Indeed, there may be sampling differences and density differences in the data. However, our manifold learning approach uses an anisotropic kernel that detects the geometry of the data to align, rather than performing the point-by-point matching done in other methods.
Our method involves first embedding each dataset separately into diffusion components, and then finding an isometric transformation that aligns these diffusion representations. To find such a transformation, we utilize the duality between diffusion coordinates and geometric harmonics, which act as generalized Fourier harmonics in the graph space. The diffusion components are eigenvectors of a Markov-normalized data diffusion operator, whose eigenvalues indicate the frequencies of the eigenvectors. We attempt to find a transformation from one set of eigenvectors to another via feature correspondences in the data.
While datapoint correspondences may be difficult or impossible to obtain since many biological measurements are destructive, feature correspondences are often available. For instance single-cell measurements of cells from the same device, thus containing counts for the same genes, albeit affected by batch differences. Thus when corresponding features are transformed via the graph Fourier transform (GFT) into diffusion coordinates, the representations should be similar, with potentially small frequency-proximal perturbations. For instance, slowly varying features across the manifold should be load to low-frequency eigenvectors of the Markov matrix. This insight allows us to create a correlation matrix between the eigenvectors of one dataset to another based on correlation between feature loadings to the eigenvectors. However, since we know that eigenvectors represent frequency harmonics, we need not compute the entire correlation matrix but rather only the near-diagonal values. This implies that the two manifolds must be perturbed such that low and high frequency eigenvector space are similar. We then find a linear transformation between that maps the eigenvectors of one space into those of the other that maximizes these correlations by orthogonalizing this matrix. This transformation allows us to align the two datasets with each other. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant batch effects and sample-specific artifacts and low-pass filter this geometry to denoise the unified manifold. Thus, in addition to aligning the manifolds our method denoises the manifolds as well.
We demonstrate the results of our method on artificial manifolds created from rotated MNIST digits, corrupted MNIST digits, as well as on single-cell biological data measuring peripheral blood cells. In each case our method successfully aligns the manifolds such that they have appropriate neighbors within and across the two datasets. We show an application to transfer learning where a lazy classifier trained on one dataset is applied to the other dataset after alignment. Further, comparisons with recently developed methods such as the MNN-based method of Haghverdi et al. (2018) show significant improvements in performance and denoising ability.
2 HARMONIC ALIGNMENT
A typical and effective assumption in machine learning is that high dimensional data originates from an intrinsic low dimensional manifold that is mapped via nonlinear functions to observable high dimensional measurements; this is commonly referred to as the manifold assumption. Formally, let Md be a hidden d dimensional manifold that is only observable via a collection of n d nonlinear functions f1, . . . , fn :Md → R that enable its immersion in a high dimensional ambient space as F (M) = {f(z) = (f1(z), . . . , fn(z))T : z ∈ Md} ⊆ Rn from which data is collected. Conversely, given a dataset X = {x1, . . . , xN} ⊂ Rn of high dimensional observations, manifold learning methods assume its data points originate from a sampling Z = {zi}Ni=1 ∈ Md of the underlying manifold via xi = f(zi), i = 1, . . . , n, and aim to learn a low dimensional intrinsic representation that approximates the manifold geometry ofMd. A popular approach towards this manifold learning task is to construct diffusion geometry from data (Coifman & Lafon, 2006), and embed data into diffusion coordinates, which provide a natural global coordinate system derived from Laplace operator over manifold geometries, as explained in Section 2.1. However, this approach, as well as other manifold learning ones, implicitly assumes that the feature functions {fj}nj=1 represent data collection technologies (e.g., sensors or markers) that operate in a consistent manner on all samples in Z. While this assumption may be valid in some fields, biological data collection (and more generally, data collection in empirical sciences) is often highly affected by a family of phenomena known as batch effects, which introduce nonnegligible variance between different data batches due to various uncontrollable factors. These include, for example, drastic variations between subjects, experimental settings, or even times of day when an experiment is conducted.
Therefore, in such settings, one should consider a collection of S samples {X(s)}Ss=1, each originating from feature functions {f (s)j }nj=1 that aim to measure the same quantities in the data, but are also affected by sample-dependent artifacts. While each sample can be analyzed to find its intrinsic structure, their union into a single dataset X = ⋃S s=1X
(s) often yields an incoherent geometry biased by batch effects, where neither the relations between samples or within each sample can be clearly seen. To address such artifacts, and constrct a unified geometry of multiple batches (i.e., samples or datasets) together, we propose to first embed each batch separately in diffusion coordinates, and then find an isometric transformation that aligns these diffusion representations.
In order to find such transformation, we utilize the duality between diffusion coordinates and geometric harmonics that act as generalized Fourier harmonics, as shown in graph signal processing (Shuman et al., 2013). As explained in Section 2.2, this duality allows us to capture cross-batch relations between diffusion coordinates, and orthogonalize the resulting matrix to provide a map between batch-specific diffusion representations. Finally, given an aligned representation, we build a robust unified diffusion geometry that is invariant to both batch effects and batch-specific artifacts. While our approach generalizes naturally to any number of batches, for simplicity, we focus our formulation here on the case of two batches.
2.1 DIFFUSION GEOMETRY & GRAPH HARMONICS
The first step in our approach is to capture the intrinsic geometry of each batch X(s) using the diffusion maps method from Coifman & Lafon (2006), which non-linearly embeds the data in a new coordinate system (i.e., diffusion coordinates) that is often considered as representing a data manifold or more generally a diffusion geometry over the data. The diffusion maps construction starts by considering local similarities defined via a kernel K(x, y), x, y ∈ X(s) that capture local neighborhoods in the data. We note that a popular choice for K is the Gaussian kernel e− ‖x−y‖2 σ , where σ > 0 is interpreted as a user-configurable neighborhood size. Next, these similarities are normalized to defined transition probabilities p(x, y) = K(x,y)‖K(x,·)‖1 that are organized in an N × N row stochastic matrix P that describes a Markovian diffusion process over the intrinsic geometry of the data. Finally, a diffusion map is defined by taking the eigenvalues 1 = µ1 ≥ µ2 ≥ · · · ≥ µN and (corresponding) eignevectors {φj}Nj=1 of P, and mapping each data point x ∈ X(s) to an N dimensional vector Φt(x) = [µt1φ1(x), . . . , µ t NφN (x)]
T , where t represents a diffusion-time (i.e., number of transitions considered in the diffusion process). In this work, we denote the diffusion map for the entire datasetX(S) as Φ(s)t . We refer the reader to Coifman & Lafon (2006) for further details and mathematical derivation, but note that in general, as t increases, most of the eigenvalue weights µtj , j = 1, . . . , N , become numerically negligible, and thus truncated diffusion map coordinates (i.e., using only nonnegligible ones) can be used for dimensionality reduction purposes.
Much work has been done in various fields on applications of diffusion maps as a whole, as well as individual diffusion coordinates (i.e., eigenvectors of P ), in data analysis (Farbman et al., 2010; Barkan et al., 2013; Mahmoudi & Sapiro, 2009; Angerer et al., 2015; Haghverdi et al., 2016). In particular, as discussed in Coifman & Lafon (2006) and Nadler et al. (2006), the diffusion coordinates are closely related to eigenvectors of Laplace operators on manifolds, as well as their discretizations as eigenvectors of graph Laplacians, which were studied previously, for example, in Belkin & Niyogi (2002). Indeed, the similarities measured in K can be considered as determining edge weights of a graph structure defined over the data. Formally, we define this graph by considering every data point in X as a vertex on the graph, and then defining weighted edges between them via an N × N adjacency matrix W with Wi,j = K(xi, xj), i, j = 1 . . . N . Then, the (normalized) graph Laplacian is defined as L = I−D−1/2WD−1/2 with D being a diagonal degree matrix (i.e., with Di,i = ∑N j=1 Wi,j). Finally, it is easy to see that L = I −D1/2PD−1/2, and thus it can be verified that the eigenvectors of L can be written as ψj = D1/2φj , with corresponding eigenvalues λj = 1− µj . It should be noted that if data is uniformly sampled from a manifold (as considered in Belkin & Niyogi, 2002), these two sets of eigenvectors coincide and the diffusion coordinates can be considered as Laplacian eigenvectors (or eigenfunctions, in continuous settings).
A central tenet of graph signal processing is that the Laplacian eigenfunctions {ψj}Nj=1 can be regarded as generalized Fourier harmonics (Shuman et al., 2013; 2016), i.e., graph harmonics. In-
Algorithm 1 Manifold Alignment via Feature Correspondence
Input: Datasets X = {X(1),X(2)} where X(s) has N (s) observations by d(s) features Output: Aligned graph Laplacian L(Y ).
1: for X(s) ∈ X do 2: Compute the anisotropic weight matrix W(s) (Section 2.2) and degree matrix D(s) 3: Construct the normalized graph Laplacian L(s) and its truncated eigensystem
Λ̄(s) = diag [ λ
(s) i ]N(s) i=2 , Ψ̄(s) = [ ψ (s) i ]N(s) i=1 , with L(s)ψ(s)i = λ (s) i ψ (s) i and Λ 0
4: Compute the diffusion map Φ(s) = e−tΛ̄ (s) Ψ̄(s) 5: The spectral domain wavelet transform tensor Ĥ(s)i,j,k ( Equation 1). 6: end for 7: Compute intraband harmonic correlations between each dataset M′:,:,k (Section 2.2). 8: Compute the total interband correlation M = ∑τ k=1 M ′ :,:,k. 9: Orthogonalize M via SVD, T = UV T
10: Construct the transformed matrix E =
[ Φ(1) e−tΛ̄ (1) Ψ̄(1)T
e−tΛ̄ (2) Ψ̄(2)TT Φ(2)
] .
11: Embed E using a Gaussian kernel to obtain L(Y ).
deed, a classic result in spectral graph theory shows that the discrete Fourier basis can be derived as Laplacian eigenvectors of the ring graphs (see, e.g. Olfati-Saber, 2007, Proposition 10). Based on this interpretation, a graph Fourier transform (GFT) is defined on graph signals (i.e., functions f : X(s) → R over the vertices of the graph) as f̂(λj) = 〈f, ψj〉, j = 1, . . . , N , similar to the definition of the classic discrete Fourier transform (DFT). Further, we can also write the GFT in terms of the diffusion coordinates as f̂(λk) = 〈f,D1/2φj〉, given their relation to graph harmonics. Therefore, up to appropriate weighting, the diffusion coordinates can conceptually be interpreted as intrinsic harmonics of the data, and conversely, the graph harmonics can be considered (conceptually) as intrinsic coordinates of data manifolds. In Section 2.2, we leverage this duality between coordinates and harmonics in order to capture relations between data manifolds of individual batches, and then them in Section 2.3 to align their intrinsic coordinates and construct a unified data manifold over them.
2.2 CROSS-GRAPH HARMONIC CORRELATION
We now turn our attention to considering the relation between two batches X(s1), X(s2) via their their intrinsic data manifold structure, as it is captured by diffusion coordinates or, equivalently, graph harmonics. We note that, as discussed extensively in Coifman & Lafon (2006), a naı̈ve construction of an intrinsic data graph with a Gaussian kernel (as described, for simplicity, in Section 2.1) may be severely distorted by density variations in the data. Such distortion would detrimental in our case, as it would the resulting diffusion geometry and its harmonic structure would not longer reflect a stable (i.e., batch-invariant) intrinsic “shape” of the data. Therefore, we follow the normalization suggested in there to separate data geometry from density, and define a graph structure (i.e., adjacency matrix) over each batch via an anistotropic kernel given by
W (s) i,j =
K(x(s)i , x (s) j )
‖K(x(s)i , ·)‖ 1/2 1 ‖K(x (s) j , ·)‖ 1/2 1
, i, j = 1, . . . , N (s), s ∈ {s1, s2},
where K is the previously defined Gaussian kernel. This graph structure is then used, as previously described, to construct the intrinsic harmonic structure given by {ψ(s)j }Nj=1 on each batch. While the intrinsic geometry constructed by our graph structures should describe similar “shapes” for the two datasets, there is no guarantee that their computed intrinsic coordinates will match. Indeed, it is easy to see how various permutations of these coordinates can be obtained if some eigenvalues have multiplicities greater than one (i.e., their monotonic order is no longer deterministic), but even beyond that, in practical settings batch effects often result in various misalignments
(e.g., rotations or other affine transformations) between derived intrinsic coordinates. Therefore, to properly recover relations between multiple batches, we aim to quantify relations between their coordinates, or more accurately, between their graph harmonics.
We note that if we even a partial overlap between data points in the two batches, this task would be trivially enabled by taking correlations between these harmonics. However, given that here we assume a setting without such predetermined overlap, we have to rely on other properties that are independent of individual data points. To this end, we now consider the feature functions {f (s)j }nj=1 and our initial assumption that corresponding functions aim to measure equivalent quantities in the batches (or datasets). Therefore, while they may differ in the original raw form, we expect their expression over the intrinsic structure of the data (e.g., as captured by GFT coefficients) to correlate, at least partially, between batches. Therefore, we use this property to compute crossbatch correlations between graph harmonics based on the GFT of corresponding data features. To formulate this, it is convenient to extend the definition of the GFT from functions (or vectors) to matrices, by slight abuse of the inner product notation, as X̂(s) = 〈X,Ψ(s)〉 = [Ψ(s)]TX(s), where X consists of data features as columns and Ψ(s) has graph harmonics as columns (both with rows representing data points).
Notice that the resulting Fourier matrix X̂(s), for each batch, no longer depends on individual data points, and instead it expresses the graph harmonics in terms of data features. Therefore, we can now use this matrix to formulate a cross-batch harmonic correlations by considering inner products between rows of these matrices. Further, we need not consider all ther correlations between graph harmonics, since we also have access to their corresponding frequency information, expressed via the associated Laplacian eigenvalues {λ(s)j }N (s)
j=1 . Therefore, instead of computing correlations between every pair of harmonics across batches, we only consider them within local frequency bands, defined via appropriate graph filters, as exlpained in the following.
Let g(t) be a smooth window defined on the interval [−0.5, 0.5] as g(t) = sin ( 0.5π cos (πt) 2 )
. Then, by translating this window along the along the real line, we obtain τ equally spaced wavelet windows that can be applied to the eigenvalues λ(s)j in order to smoothly partition the spectrum of each data graph. This construction is known as the itersine filter bank, which can be shown to be a tight frame (Perraudin et al., 2014). The resulting windows gξi(λ) are centered at frequencies Ξ = {ξ1, . . . , ξτ}. The generating function for these wavelets ensures that each gξi halfway overlaps with gξi+1 . This property implies that there are smooth transitions between the weights of consecutive frequency bands. Furthermore, as a tight frame, this filterbank has the property that∑τ i=1 hξi(λ) = 1 for any eigenvalue. This choice ensures that any filtering we do using the filter bank G = {hξi}τi=1(λ) will behave uniformly across the spectrum. Together, these two properties imply that cross-batch correlations between harmonics within and between bands across the respective batch spectra will be robust. To obtain such bandlimited correlations we construct the following filterbank tensor
Ĥ (s) i,j,k = gξk(λ (s) i )ψ (s)T i X (s) :,j for 2 ≤ i ≤ N (s). (1)
Each Ĥ(s)(·,·,k) of this matrix corresponds to the Fourier matrix X̂ (s) with rows scaled by gξk . Then, we use these filterbank tensors to compute bandlimited correlations via M′(·,·,k) = Ĥ (s1) :,:,kĤ (s2)T (·,·,k),
and finally merge these to generate a combined matrix M(s1,s2) = ∑τ k=1 M ′ (·,·,k), which we refer to as the harmonic (cross-batch) correlation matrix. This step, when combined with the half-overlaps discussed above, allows flexibility in aligning harmonics across bands, which is demonstrated in practice in Section 2.3.
2.3 ISOMETRIC ALIGNMENT
Given the harmonic correlation matrix M(s1,s2), we now define an isometric transformation between the intrinsic coordinate systems of the two data manifolds. Such transformation ensures our alignment fits the two coordinate systems together without breaking the rigid structure of each batch, thus preserving their intrinsic structure. To formulate such transformation, we recall that isometric transformations are given by orthogonal matrices, and thus we can rephrase our task as finding the best approximation of M(s1,s2) by an orthogonal matrix. Such approximation is a well studied problem,
dating back to Schönemann (1966), which showed that it can be obtained directly the singular value decomposition M = USV T by taking T(s1,s2) = UV T .
Finally, given the isometric transformation defined by T(s1,s2), we can now align of the data manifolds of two batches, and define a unified intrinsic coordinate system for the entire data. While such alignment could equivalently be phrased in terms of diffusion coordinates {φ(s)j }N (s) j=1 or harmonic coordinates {ψj}N(s)j=1 , we opt here for the latter, as it relates more directly to the computed harmonic correlations. Therefore, we construct the transformed embedding matrix E as
E =
[ Ψ̄(s1) [Ψ̄(s1)] T
Ψ̄(s2) TT Ψ̄(s2)
] exp ( −t [ Λ̄(s1) 0
0 Λ̄(s2)
]) . (2)
where we drop the superscript for T, as they are clear from context, Λ̄(s) are diagonal matrices that consists of the nonzero Laplacian eigenvalues of each view, and Ψ̄ consist of the corresponding eigenvectors (i.e., harmonics) as its columns. We note that the truncated of zero eigenvalues correspond to zero frequencies (i.e., flat constant harmonics), and therefore they only encode global shifts that we anyway aim to remove in the alignment process. Accordingly, this truncation is also applied to the harmonic correlation matrix M(s1,s2) prior to its orthogonalization. Finally, we note that this construction is equivalent to the diffusion map, albeit using a slightly different derivation of a discretized heat kernel (popular, for example, in graph signal processing works such as (Shuman et al., 2016)), with the parameter t again serving an analogous purpose to diffusion time.
3 EMPIRICAL RESULTS
3.1 ROTATIONAL ALIGNMENT
As a proof of principle we first demonstrate harmonic alignment of two circular manifolds. To generate these manifolds, we rotated two different MNIST examples of the digit ’3’ 360 degrees and sampled a point for each degree (See figure 1a). As we noted in section 2.2, the manifold coordinates obtained by diffusion maps are invariant to the phase of the data. In this example it is clear that each ’3’ manifold is out of phase with the other.
Figure 1b demonstrates the simple rotation that is learned by harmonic alignment between the two embeddings. On the left side, we see the out-of-phase embeddings. Taking nearest neighbors in this space illustrates the disconnection between the embeddings: nearest neighbors are only formed for within-sample points. After alignment, however, we see that the embeddings are in phase with each other because nearest neighbors in the aligned space span both samples and are in the same orientation with each other.
3.2 FEATURE CORRUPTION
Next, we assessed the ability of harmonic alignment in recovering k-neighborhoods after random feature corruption (figure 2).
To do this, we drew random samples from MNIST X(1) and X(2) of N (1) = N (2) = 1000. Then, for each trial in this experiment we drew 7842 samples from a unit normal distribution to create a 784 × 784 random matrix. We orthogonalized this matrix to yield the corruption matrix O0. To vary the amount of feature corruption, we then randomly substituted b0.01 ∗ p ∗ 784c columns from I to make Op. Right multiplication of X(2) by this matrix yielded corrupted images with only p% preserved pixels (figure 2b, ’corrupted’). To assess the recovery of k-neighborhoods, we then performed a lazy classification on X(2)Op by only using the labels of its neighbors in X(1). The results of this experiment, performed for p = {0, 5, 10, . . . 95, 100} are reported in figure 2a. For robustness, at each p we sampled three different non-overlapping pairs X(1) and X(2) and for each pair we sampled three Op matrices each with random identity columns, for a total of nine trials per p.
In general, unaligned, mutual nearest neighbors (MNN), and harmonic alignment with any filter set cannot recover k-neighborhoods under total corruption; 10% serves as a baseline accuracy that results from the rotation having 10% overlap in the manifold space. On the other hand, for small filter
choices we observe that harmonic alignment quickly recovers ≥ 80% accuracy and outperforms MNN and unaligned classifications consistently except under high correspondence.
Next we examined the ability of harmonic alignment to reconstruct corrupted data (figure 2b). We performed the same corruption procedure as before with p = 25 and selected 10 examples of each digit in MNIST. We show the ground truth fromX(2) and the corrupted resultX(2)O25 in figure 2b. Then, a reconstruction was performed by setting each pixel in a new image to the dominant class average of the ten nearest X(1) neighbors. In the unaligned case we see that most examples turn into smeared fives or ones; this is likely the intersection formed by X(1) and X(2)O25 that accounts for the 10% baseline accuracy in figure 2a. On the other hand, the reconstructions produced by harmonic alignment resemble their original input examples.
3.3 COMPARISONS
In figure 3 we compare the runtime, k-nn accuracy, and transfer learning capabilities of our method with two other contemporary alignment methods. First, we examine the unsupervised algorithm proposed by Wang & Mahadevan (2009) for generating weight matrices between two different samples. The algorithm first creates a local distance matrix of size k around each point and its four nearest neighbors. Then it computes an optimal match between k-nn distance matrices of each pair of points in X(1) and X(2) by comparing all k! permutations of the k-nn matrices and computing the minimal frobenius norm between such permuted matrices. We report runtime results for k = 5, as k = 10 failed to complete after running for 8 hours. Because the Wang&Mahadevan (2009) method merely gives a weight matrix that can be used with separate algorithms for computing the final features, we report accuracy results using their implementation. Regardless of input size, we were unable to recover k-neighborhoods for datasets with 35% uncorrupted columns (figure 3b) despite the method’s computational cost (figure 3a).
A more scalable approach to manifold alignment has emerged recently in the computational biology literature (Haghverdi et al., 2018). This approach uses mutual nearest neighbor (MNN) matching to compute a mapping between datasets, based on the assumption that if two points are truly neighbors they will resemble each other in both datasets. Because this approach amounts to building a k-nearest neighbors graph for each dataset and then choosing a set of neighbors between the datasets, MNN scales comparably to our method (figure 3a). Additionally, MNN is able to recover 20–30% of k-neighborhoods when only 35% of features match (figure 3b); this is an improvement over Wang & Mahadevan but is substantially lower than what harmonic alignment achieves. We note that the performance of harmonic alignment was correlated with input size, whereas MNN did not improve with more points.
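MNN matching reduces to keeping the pairs that appear in both directed k-NN graphs; a brute-force sketch in the spirit of Haghverdi et al. (2018), not their implementation:

```python
import numpy as np
from scipy.spatial.distance import cdist

def mutual_nearest_neighbors(X1, X2, k=5):
    """Return mutual nearest neighbor (MNN) pairs between two datasets;
    a simplified brute-force sketch (names and k are ours)."""
    D = cdist(X1, X2)                       # pairwise distances
    nn12 = np.argsort(D, axis=1)[:, :k]     # X1 -> X2 neighbor lists
    nn21 = np.argsort(D.T, axis=1)[:, :k]   # X2 -> X1 neighbor lists
    return [(i, j) for i in range(len(X1)) for j in nn12[i] if i in nn21[j]]
```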
3.4 TRANSFER LEARNING
An interesting use of manifold alignment algorithms is transfer learning. In this setting, an algorithm is trained to perform well on one dataset, and the goal is to extend the algorithm to the other dataset after alignment. In figure 3c we explore the utility of harmonic alignment in transfer learning and compare it to MNN and the method proposed by Wang & Mahadevan (2009).
In this experiment, we first drew 1000 uncorrupted examples of MNIST digits and constructed a diffusion map to use as our training set. Next, we took 65% corrupted, unlabeled points (see section 3.2) in batches of 1,000, 2,000, 4,000, and 8,000 as a test set on which to perform lazy classification using the labels from the uncorrupted examples. Even at a 1:8 training-to-test sample-size ratio, harmonic alignment outperformed Wang & Mahadevan and MNN by aligning up to 60% correct k-neighborhoods. In addition to showing the use of manifold alignment in transfer learning, this demonstrates the robustness of our algorithm to dataset imbalances.
3.5 BIOLOGICAL DATA
To illustrate the need for robust manifold alignment in computational biology, we turn to a simple real-world example obtained from Amodio et al. (2018) (figure 4). This dataset was collected by mass cytometry (CyTOF) of peripheral blood mononuclear cells (PBMC) from patients who contracted dengue fever. Subsequently, the Montgomery lab at Yale University experimentally introduced these PBMCs to Zika virus strains.
The canonical response to dengue infection is upregulation of interferon gamma (IFNγ) (Chesler & Reiss, 2002; Chakravarti & Kumaria, 2006; Braga et al., 2001). During early immune response, IFNγ works in tandem with acute-phase cytokines such as tumor necrosis factor α (TNFα) to induce febrile response and inhibit viral replication (Ohmori et al., 1997). In the PBMC dataset, we thus expect to see upregulation of these two cytokines together, which we explore in figure 4.
In figure 4a, we show the relationship between IFNγ and TNFα without denoising. Note that there is a substantial difference between the IFNγ distributions of sample 1 and sample 2 (Earth Mover’s Distance (EMD) = 2.699). In order to identify meaningful relationships in CyTOF data, it is common to denoise it first. We used a graph filter to denoise the cytokine data (Van Dijk et al., 2018). The results of this denoising are shown in figure 4b. This procedure introduced more technical artifacts by enhancing the difference between batches, as seen by the increased EMD (3.127) between the IFNγ distributions of both patients. This is likely due to a substantial connectivity difference between the two batch subgraphs in the total graph.
Next, we performed harmonic alignment of the two patient profiles. We show the results in figure 4c. Harmonic alignment corrected the difference between the IFNγ distributions and restored the canonical correlation of IFNγ and TNFα. This example illustrates the utility of harmonic alignment for biological data, where it can be used for integrated analysis of data collected across different experiments, patients, and time points.
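The batch-effect comparison above reduces to one-dimensional EMD computations between marker distributions; a sketch using SciPy (the arrays shown are placeholders for the actual CyTOF marker values):

```python
import numpy as np
from scipy.stats import wasserstein_distance

# One-dimensional EMD between the IFN-gamma marginals of the two samples.
ifng_sample1 = np.random.randn(1000)          # placeholder values
ifng_sample2 = np.random.randn(1000) + 0.5    # placeholder values
print(wasserstein_distance(ifng_sample1, ifng_sample2))
```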
4 CONCLUSION
We presented a novel method for aligning or batch-normalizing two datasets that involves learning and aligning their intrinsic manifold dimensions. Our method leverages the fact that common or corresponding features in the two datasets should have similar harmonics on the graph of the data. Our harmonic alignment method finds an isometric transformation that maximizes the similarity of frequency harmonics of common features. Results show that our method successfully aligns artificially misaligned as well as biological data containing batch effects. Our method has the advantage that it aligns manifold geometry and not density (and thus is insensitive to sampling differences in the data); furthermore, our method denoises the datasets to obtain alignments of significant manifold dimensions rather than noise. Future applications of harmonic alignment can include integration of data from different measurement types performed on the same system, where features have known correlations.

1. What is the main contribution of the paper regarding manifold alignment?
2. What are the strengths and weaknesses of the proposed approach, particularly in its technical details and numerical results?
3. Do you have any questions or concerns about the paper's clarity, such as grammar errors, confusing notation, and typos?
4. How does the reviewer assess the quality of the alignment for the toy image datasets and the biological dataset?
5. Are there any limitations or areas for improvement in the paper's methodology, such as the embedding E, filters, and manifold learning application in the biological data example?

Review
The paper proposes an alignment of two manifolds that is performed in a low-dimensional parameter space corresponding to a low-pass "filtering" of the Graph Fourier transform obtained from the underlying data graphs for the two manifolds. The numerical results show the quality of the alignment for some toy image datasets and a biological dataset.
The derivation of the technical details of the approach is not clear - see the comments below on Pages 5,6 and 9 in particular. The paper is not clear enough for acceptance at this point.
Detailed comments:
Page 2: Grammar error "that is invariant batch effects". When denoising is discussed, can you explain whether this is denoising or simply regularization? When is the selected subspace a good approximation for the "signal subspace"?
Page 3: Should X^(S) be X^(s)? When W and W(s) are defined, do they also rely on a neighborhood graph? It appears that in the definition of psi_j the eigenvectors phi_j should be obtained from W, not P (which is how they are defined earlier in the page).
Page 4: There is an abuse of notation on f, used both as a linear function on X(s) and an element of X(s).
Page 5: Typos "exlpained", "along the along the". It is not clear what applying a window to eigenvalues means, or what the notation g_xi(lambda) means. The construction of the filters described here needs to be more explicit. h_xi is undefined. How is H in (1) defined when i = 1?
Page 6: M should be M(s1,s2). Typesetting error in Lambdabar(s). Which matrix is referred to in "the laplacian eigenvalues of each view"? What is the source and target of the embedding E? How is the embedding applied to data x(s1), x(s2)?
Page 7: Figure 1a appears to have an error in the orientation of one of the blue "3"s. The text on the arrow between the manifold embeddings does not agree with the notation in the paper. In Figure 1b, it is not clear which image is the original point and which images are the neighbors, or why some images are smaller than others. Results for the other algorithms are missing (why no comparison?). Typo "Wang&Mahadevan". Can you be more specific as to why that algorithm was "unable to recover k-neighborhoods" in certain cases?
Page 8: Why no comparison with Wang & Mahadevan in Figure 2?
Page 9: There is little description as to how manifold learning is applied in the biological data example. What is the ambient dimensionality and the dimension of the manifolds? How are the "abundances" extracted from the data?
"Which we explore in 4" -> "Which we explore in Fig. 4" |
ICLR

Title
Function-space regularized Rényi divergences
Abstract
We propose a new family of regularized Rényi divergences parametrized not only by the order α but also by a variational function space. These new objects are defined by taking the infimal convolution of the standard Rényi divergence with the integral probability metric (IPM) associated with the chosen function space. We derive a novel dual variational representation that can be used to construct numerically tractable divergence estimators. This representation avoids risk-sensitive terms and therefore exhibits lower variance, making it well-behaved when α > 1; this addresses a notable weakness of prior approaches. We prove several properties of these new divergences, showing that they interpolate between the classical Rényi divergences and IPMs. We also study the α → ∞ limit, which leads to a regularized worst-case regret and a new variational representation in the classical case. Moreover, we show that the proposed regularized Rényi divergences inherit features from IPMs, such as the ability to compare distributions that are not absolutely continuous, e.g., empirical measures and distributions with low-dimensional support. We present numerical results on both synthetic and real datasets, showing the utility of these new divergences in both estimation and GAN training applications; in particular, we demonstrate significantly reduced variance and improved training performance.
1 INTRODUCTION
Rényi divergence, Rényi (1961), is a significant extension of Kullback-Leibler (KL) divergence for numerous applications; see, e.g., Van Erven & Harremos (2014). The recent neural-based estimators for divergences Belghazi et al. (2018) along with generative adversarial networks (GANs) Goodfellow et al. (2014) accelerated the use of divergences in the field of deep learning. The neural-based divergence estimators are feasible through the utilization of variational representation formulas. These formulas are essentially lower bounds (and, occasionally, upper bounds) which are approximated by tractable statistical averages. The estimation of a divergence based on variational formulas is a notoriously difficult problem. Challenges include potentially high bias that may require an exponential number of samples McAllester & Stratos (2020) or the exponential statistical variance for certain variational estimators Song & Ermon (2019), rendering divergence estimation both data inefficient and computationally expensive. This is especially prominent for Rényi divergences with order larger than 1. Indeed, numerical simulations have shown that, unless the distributions P and Q are very close to one another, the Rényi divergence Rα(P∥Q) is almost intractable to estimate when α > 1 due to the high variance of the statistically-approximated risk-sensitive observables Birrell et al. (2021), see also the recent analysis in Lee & Shin (2022). A similar issue has also been observed for the KL divergence, Song & Ermon (2019). Overall, the lack of estimators with low variance for Rényi divergences has prevented wide-spread and accessible experimentation with this class of information-theoretic tools, except in very special cases. We hope our results here will provide a suitable set of tools to address this gap in the methodology.
One approach to variance reduction is the development of new variational formulas. This direction is especially fruitful for the estimation of mutual information van den Oord et al. (2018); Cheng et al. (2020). Another approach is to regularize the divergence by restricting the function space of the variational formula. Indeed, instead of directly attacking the variance issue, the function space of the variational formula can be restricted, for instance, by bounding the test functions or more appropriately by bounding the derivative of the test functions. The latter regularization leads to
Lipschitz continuous function spaces which are also foundational to integral probability metrics (IPMs) and more specifically to the duality property of the Wasserstein metric. In this paper we combine the above two approaches, first deriving a new variational representation of the classical Rényi divergences and then regularizing via an infimal-convolution as follows
$$R^{\Gamma,IC}_\alpha(P\|Q) := \inf_{\eta}\left\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\right\},\qquad (1)$$
where P and Q are the probability distributions being compared, the infimum is over the space of probability measures, $R_\alpha$ is the classical Rényi divergence, and $W^\Gamma$ is the IPM corresponding to the chosen regularizing function space, Γ.
The new family of regularized Rényi divergences that are developed here address the risk-sensitivity issue inherent in prior approaches. More specifically, our contributions are as follows.
• We define a new family of function-space regularized Rényi divergences via the infimal convolution operator between the classical Rényi divergence and an arbitrary IPM (1). The new regularized Rényi divergences inherit their function space from the IPM. For instance, they inherit mass transport properties when one regularizes using the 1-Wasserstein metric.
• We derive a dual variational representation (11) of the regularized Rényi divergences which avoids risk-sensitive terms and can therefore be used to construct lower-variance statistical estimators.
• We prove a series of properties for the new object: (a) the divergence property, (b) being bounded by the minimum of the Rényi divergence and IPM, thus allowing for the comparison of non-absolutely continuous distributions, (c) limits as α→ 1 from both left and right, (d) regimes in which the limiting cases Rα(P∥Q) and WΓ(Q,P ) are recovered.
• We propose a rescaled version of the regularized Rényi divergences (16) which lead to a new variational formula for the worst-case regret (i.e., α→ ∞). This new variational formula does not involve the essential supremum of the density ratio as in the classical definition of worst-case regret, thereby avoiding risk-sensitive terms.
• We present a series of illustrative examples and counterexamples that further motivate the proposed definition for the function-space regularized Rényi divergences.
• We present numerical experiments that show (a) that we can estimate the new divergence for large values of the order α without variance issues and (b) train GANs using regularized function spaces.
Related work. The order of the Rényi divergence controls the weight put on the tails, with the limiting cases being mode-covering and mode-selection Minka (2005). Rényi divergence estimation is used in a number of applications, including Sajid et al. (2022) (behavioural sciences), Mironov (2017) (differential privacy), and Li & Turner (2016) (variational inference); in the latter the variational formula is an adaptation of the evidence lower bound. Rényi divergences have also been applied in the training of GANs Bhatia et al. (2021) (loss function for binary classification - discrete case) and in Pantazis et al. (2022) (continuous case, based on the Rényi-Donsker-Varadhan variational formula in Birrell et al. (2021)). Rényi divergences with α > 1 are also used in contrastive representation learning, Lee & Shin (2022), as well as in PAC-Bayesian bounds, Bégin et al. (2016). In the context of uncertainty quantification and sensitivity analysis, Rényi divergences provide confidence bounds for rare events, Atar et al. (2015); Dupuis et al. (2020), with higher rarity corresponding to larger α.
Methods for reducing the variance of divergence estimators through control of the function space have recently been proposed. In Song & Ermon (2019) an explicit bound on the output restricts the divergence values. A systematic theoretical framework on how to regularize through the function space has been developed in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a) for the KL and f-divergences. Despite not covering the Rényi divergence, the theory in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a), and particularly the infimal-convolution formulation, clearly inspired the current work. However, adapting the infimal-convolution method to the Rényi divergence setting requires two new technical innovations: (a) We develop a new low-variance convex-conjugate variational formula for the classical Rényi divergence in Theorem 2.1 (see also Fig. 1), allowing us to apply infimal-convolution tools to develop the new Γ-Rényi divergences in Theorem 3.4. (b) We study the α → ∞ limit of (a) to obtain a new low-variance variational representation of worst-case regret in Theorem 2.2 and study its Γ-regularization in Theorem 4.5.
2 NEW VARIATIONAL REPRESENTATIONS OF CLASSICAL RÉNYI DIVERGENCES
The Rényi divergence of order $\alpha\in(0,1)\cup(1,\infty)$ between P and Q, denoted $R_\alpha(P\|Q)$, can be defined as follows: Let ν be a sigma-finite positive measure with $dP = p\,d\nu$ and $dQ = q\,d\nu$. Then
$$R_\alpha(P\|Q) := \begin{cases}\frac{1}{\alpha(\alpha-1)}\log\left[\int_{q>0} p^\alpha q^{1-\alpha}\,d\nu\right] & \text{if } 0<\alpha<1\text{, or }\alpha>1\text{ and }P\ll Q,\\ +\infty & \text{if }\alpha>1\text{ and }P\not\ll Q,\end{cases}\qquad (2)$$
where P ≪ Q denotes absolute continuity of P with respect to Q. There always exists such a ν (e.g., ν = P +Q) and one can show that the definition (2) does not depend on the choice of ν. The Rα provide a notion of ‘distance’ between P and Q in that they satisfy the divergence property, i.e., they are non-negative and equal zero iff Q = P . The limit of Rα as α approaches 1 or 0 equals the KL or reverse KL divergence respectively Van Erven & Harremos (2014).
An alternative representation of Rα, the so-called Rényi-Donsker-Varadhan variational formula, was derived from (2) in Birrell et al. (2021),
$$R_\alpha(P\|Q) = \sup_{\phi\in\mathcal{M}_b(\Omega)}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\},\quad P,Q\in\mathcal{P}(\Omega).\qquad (3)$$
Here (Ω,M) denotes a measurable space, Mb(Ω) the space of bounded measurable real-valued functions on Ω, and P(Ω) is the space of probability measures on Ω. By a change of variables argument this can be transformed into the following new variational representation; see Theorem A.2 in Appendix A for a proof. We call it the convex-conjugate Rényi variational formula (CC-Rényi). Theorem 2.1 (Convex-Conjugate Rényi Variational Formula). Let P,Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
$$R_\alpha(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (4)$$
If (Ω,M) is a metric space with the Borel σ-algebra then (4) holds with Mb(Ω) replaced by Cb(Ω), the space of bounded continuous real-valued functions on Ω.
The representation (4) is of convex-conjugate type, which will be key in our development of functionspace regularized Rényi divergences. It is also of independent interest as it avoids risk-sensitive terms, unlike (3) which contains cumulant-generating-functions. This makes (4) better behaved in estimation problems, especially when α > 1; see the example in Section 6.1 below.
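To illustrate, here is a minimal NumPy sketch of the sample-based objective corresponding to (4); evaluating it at any strictly negative test function g gives a lower bound on $R_\alpha(P\|Q)$, and maximizing over a parametric family of such g yields the CC-Rényi estimator used in Section 6.1 (function and argument names are ours):

```python
import numpy as np

def cc_renyi_objective(g, x_p, x_q, alpha):
    """Sample-based objective of (4): a lower bound on R_alpha(P||Q) for any
    strictly negative test function g, with empirical means replacing the
    integrals (a sketch; function and argument names are ours)."""
    gp, gq = g(x_p), g(x_q)
    assert np.all(gp < 0) and np.all(gq < 0), "g must be strictly negative"
    return (gq.mean()
            + np.log(np.mean(np.abs(gp) ** ((alpha - 1.0) / alpha))) / (alpha - 1.0)
            + (np.log(alpha) + 1.0) / alpha)
```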
We also obtain a new variational formula for worst-case regret, as defined by Van Erven & Harremos (2014)
$$D_\infty(P\|Q) := \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q) = \begin{cases}\log\left(\operatorname{ess\,sup}_P \frac{dP}{dQ}\right), & P\ll Q\\ \infty, & P\not\ll Q.\end{cases}\qquad (5)$$
In contrast to (5), which requires estimation of the likelihood ratio, the new variational formula (6) below avoids risk-sensitive terms. Theorem 2.2 (Worst-case Regret Variational Formula). Let P,Q ∈ P(Ω). Then
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.\qquad (6)$$
If Ω is a metric space with the Borel σ-algebra then (6) holds with Mb(Ω) replaced by Cb(Ω).
See Theorem A.5 in Appendix A for a proof. Equation (6) is a new result of independent interest and will also be useful in our study of the α → ∞ limit of the function-space regularized Rényi divergences that we define in the next section. Remark 2.3. Alternative variational formulas for D∞ on a finite alphabet were derived in Kurri et al. (2022).
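The analogous sample-based objective for (6) drops the α-dependent terms; a sketch mirroring the one after Theorem 2.1 (our naming):

```python
import numpy as np

def cc_wcr_objective(g, x_p, x_q):
    """Sample-based objective of (6), the alpha -> infinity analogue of
    cc_renyi_objective above (our naming); g must be strictly negative."""
    return g(x_q).mean() + np.log(np.mean(np.abs(g(x_p)))) + 1.0
```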
3 PRIMAL AND DUAL FORMULATIONS OF THE INFIMAL-CONVOLUTION Γ-RÉNYI DIVERGENCES
We are now ready to define the function-space regularized Rényi divergences and derive their key properties. In this section, X will denote a compact metric space, P(X) will denote the set of Borel probability measures on X, and C(X) will denote the space of continuous real-valued functions on X. We equip C(X) with the supremum norm and recall that the dual space of C(X) is C(X)* = M(X), the space of finite signed Borel measures on X (see the Riesz representation theorem, e.g., Theorem 7.17 in Folland (2013)). Definition 3.1. Given a test-function space Γ ⊂ C(X), we define the infimal-convolution Γ-Rényi divergence (i.e., IC-Γ-Rényi divergence) between P, Q ∈ P(X) by
$$R^{\Gamma,IC}_\alpha(P\|Q) := \inf_{\eta\in\mathcal{P}(X)}\left\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\right\},\quad \alpha\in(0,1)\cup(1,\infty),\qquad (7)$$
where $W^\Gamma$ denotes the Γ-IPM
$$W^\Gamma(\mu,\nu) := \sup_{g\in\Gamma}\left\{\int g\,d\mu - \int g\,d\nu\right\},\quad \mu,\nu\in M(X).\qquad (8)$$
Remark 3.2. The classical Rényi divergence is convex in its second argument but not in its first when α > 1 Van Erven & Harremos (2014). This is the motivation for defining the IC-Γ-Rényi divergences via an infimal convolution in the second argument of $R_\alpha$; convex analysis tools will be critical in deriving properties of $R^{\Gamma,IC}_\alpha$ below. For $\alpha\in(0,1)$ one can use the identity $R_\alpha(P\|Q) = R_{1-\alpha}(Q\|P)$ to rewrite (7) as an infimal convolution in the first argument.
The definition (7) can be thought of as a regularization of the classical Rényi divergence using the Γ-IPM. For computational purposes it is significantly more efficient to have a dual formulation, i.e., a representation of $R^{\Gamma,IC}_\alpha$ in terms of a supremum over a function space. To derive such a representation we begin with the variational formula for $R_\alpha$ from Theorem 2.1. If we define the convex mapping $\Lambda^P_\alpha : C(X)\to(-\infty,\infty]$,
$$\Lambda^P_\alpha[g] := \infty\,1_{g\not<0} - \left(\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)\right)1_{g<0},\qquad (9)$$
then (4) from Theorem 2.1 can be written as a convex conjugate
$$R_\alpha(P\|Q) = (\Lambda^P_\alpha)^*[Q] := \sup_{g\in C(X)}\left\{\int g\,dQ - \Lambda^P_\alpha[g]\right\}.\qquad (10)$$
One can then use Fenchel-Rockafellar duality to derive a dual formulation of the IC-Γ-Rényi divergences. To apply this theory we will need to work with spaces of test functions that satisfy the following admissibility properties. These properties are similar to those used in the construction of regularized KL and f -divergences in Dupuis, Paul & Mao, Yixiang (2022) and Birrell et al. (2022a). Definition 3.3. We will call Γ ⊂ C(X) admissible if it is convex and contains the constant functions. We will call an admissible Γ strictly admissible if there exists a P(X)-determining set Ψ ⊂ C(X) such that for all ψ ∈ Ψ there exists c ∈ R, ϵ > 0 such that c ± ϵψ ∈ Γ. Recall that Ψ being P(X)-determining means that for all Q,P ∈ P(X), if ∫ ψdQ = ∫ ψdP for all ψ ∈ Ψ then Q = P .
Putting the above pieces together one obtains the following variational representation. Theorem 3.4. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then:
1. $$R^{\Gamma,IC}_\alpha(P\|Q) = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (11)$$

2. If (11) is finite then there exists $\eta_*\in\mathcal{P}(X)$ such that
$$R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*).\qquad (12)$$

3. $R^{\Gamma,IC}_\alpha(P\|Q) \le \min\{R_\alpha(P\|Q),\, W^\Gamma(Q,P)\}$.

4. If Γ is strictly admissible then $R^{\Gamma,IC}_\alpha$ has the divergence property.
See Theorem B.3 in Appendix B for detailed proofs of these results as well as several additional properties. We note that there are alternative strategies for proving the variational formula (11) which make different assumptions; further comments on this can be found in Remark B.4. Important examples of strictly admissible Γ include the following:
1. Γ = C(X), which leads to the classical Rényi-divergences.
2. Γ = Lip1(X), i.e. all 1-Lipschitz functions. This regularizes the Rényi divergences via the Wasserstein metric.
3. Γ = {c + g : c ∈ R, g ∈ C(X), |g| ≤ 1}. This regularizes the Rényi divergences via the total-variation metric.
4. Γ = {c+ g : c ∈ R, g ∈ Lip1(X), |g| ≤ 1}. This regularizes the Rényi divergences via the Dudley metric.
5. Γ = {c + g : c ∈ R, g ∈ Y : ∥g∥V ≤ 1}, the unit ball in a RKHS V ⊂ C(X). This regularizes the Rényi divergences via MMD.
In practice, uniform bounds can be implemented using an appropriately chosen final NN layer. Lipschitz bounds can be implemented using spectral normalization of neural networks Miyato et al. (2018), or using a soft gradient penalty Gulrajani et al. (2017). The function space Γ for structure-preserving GANs discussed in the Appendix is implemented using equivariant neural networks, Birrell et al. (2022b). If Γ is a ball in an RKHS the implementation is carried out using the same tools used in, e.g., MMD distances and divergences, Gretton et al. (2012); Glaser et al. (2021).
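For concreteness, a PyTorch sketch of the one-sided gradient penalty used to softly enforce the Lipschitz constraint, in the spirit of Gulrajani et al. (2017) (variable names and the penalty weight are ours; inputs are assumed to be flattened to shape (N, d)):

```python
import torch

def one_sided_gradient_penalty(g, x_p, x_q, lam=10.0):
    """Soft Lipschitz constraint via a one-sided gradient penalty (a sketch
    with our names, assuming inputs of shape (N, d))."""
    eps = torch.rand(x_p.size(0), 1, device=x_p.device)
    x = (eps * x_p + (1.0 - eps) * x_q).requires_grad_(True)
    grad, = torch.autograd.grad(g(x).sum(), x, create_graph=True)
    grad_norm = grad.flatten(1).norm(dim=1)
    # Penalize only where the gradient norm exceeds 1 (one-sided).
    return lam * torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()
```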
The IC-Γ-Rényi divergences also satisfy a data processing inequality. See Theorem B.8 in Appendix B for a proof as well as details regarding the notation. Theorem 3.5 (Data Processing Inequality). Let $\alpha\in(0,1)\cup(1,\infty)$, $Q,P\in\mathcal{P}(X)$, and K be a probability kernel from X to Y such that $K[g]\in C(X)$ for all $g\in C(X,Y)$. If $\Gamma\subset C(Y)$ is admissible then $R^{\Gamma,IC}_\alpha(K[P]\|K[Q]) \le R^{K[\Gamma],IC}_\alpha(P\|Q)$. If $\Gamma\subset C(X\times Y)$ is admissible then $R^{\Gamma,IC}_\alpha(P\otimes K\|Q\otimes K) \le R^{K[\Gamma],IC}_\alpha(P\|Q)$.
If K[Γ] is strictly contained in Γ then the bounds in Theorem 3.5 can be strictly tighter than the classical data processing inequality Van Erven & Harremos (2014). Data-processing inequalities are important for constructing symmetry-preserving GANs; see Birrell et al. (2022b) and Section D.1.
4 LIMITS, INTERPOLATIONS, AND REGULARIZED WORST-CASE REGRET
Next we use Theorem 3.4 to compute various limits of the IC-Γ-Rényi divergences. First we show that they interpolate between Rα and WΓ in the following sense (see Theorem B.5 for a proof). Theorem 4.1. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞).
1. $\lim_{\delta\to 0^+}\frac{1}{\delta}R^{\delta\Gamma,IC}_\alpha(P\|Q) = W^\Gamma(Q,P)$,

2. If Γ is strictly admissible then $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|Q)$.
Now we discuss the limiting behavior in α. These results generalize several properties of the classical Rényi divergences Van Erven & Harremos (2014). First we consider the α→ 1 limit; see Theorem B.6 for a proof. Theorem 4.2. Let Γ ⊂ C(X) be admissible and P,Q ∈ P(X). Then
$$\lim_{\alpha\to 1^+} R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\,R_\beta(P\|\eta)<\infty}}\{R(P\|\eta) + W^\Gamma(Q,\eta)\},\qquad (13)$$
$$\lim_{\alpha\to 1^-} R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R(P\|\eta) + W^\Gamma(Q,\eta)\}\qquad (14)$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \int\log|g|\,dP\right\} + 1.\qquad (15)$$
Remark 4.3. When Γ = C(X), changing variables to g = − exp(ϕ− 1) transforms (15) into the Legendre-transform variational formula for R(P∥Q); see equation (1) in Birrell et al. (2022c) with f(x) = x log(x). Eq. (14) is an infimal convolution of the reverse KL-divergence, as opposed to the results in Dupuis, Paul & Mao, Yixiang (2022) which apply to the (forward) KL-divergence.
Function-space regularized worst-case regret. Next we investigate the α → ∞ limit of the IC-Γ-Rényi divergences, which will lead to the function-space regularized worst-case regret. First recall that some authors use an alternative definition of the classical Rényi divergences, related to the one used in this paper by Dα(·∥·) := αRα(·∥·). This alternative definition has the useful property of being non-decreasing in α; see Van Erven & Harremos (2014). Appropriately rescaled, the IC-Γ-Rényi divergence also satisfies this property, leading to the following definition. Definition 4.4. For Γ ⊂ C(X), α ∈ (0, 1) ∪ (1,∞) and P,Q ∈ P(X) we define
$$D^{\Gamma,IC}_\alpha(P\|Q) := \alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q).\qquad (16)$$
Note that $\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is non-decreasing in α; see Lemma B.1 for a proof. We now show that the divergences $D^{\Gamma,IC}_\alpha$ are well behaved in the α → ∞ limit, generalizing (5). Taking this limit provides a definition of function-space regularized worst-case regret, along with the following dual variational representation. Theorem 4.5. Let Γ ⊂ C(X) be admissible and P, Q ∈ P(X). Then
$$D^{\Gamma,IC}_\infty(P\|Q) := \lim_{\alpha\to\infty} D^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{D_\infty(P\|\eta) + W^\Gamma(Q,\eta)\}\qquad (17)$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.\qquad (18)$$
We call $D^{\Gamma,IC}_\infty$ the infimal-convolution Γ-worst-case regret (i.e., IC-Γ-WCR). The method of proof of Theorem 4.5 is similar to that of part (1) of Theorem 3.4; see Theorem B.7 in Appendix B for details. Theorem 4.5 suggests that $D^{\Gamma,IC}_\alpha$ is the appropriate α-scaling to use when α is large, and we find this to be the case in practice; see the example in Section 6.3.1.
5 ANALYTICAL EXAMPLES AND COUNTEREXAMPLES
In this section we present several analytical examples and counterexamples that illustrate important properties of the IC-Γ-Rényi divergences and demonstrate weaknesses of other attempts to define regularized Rényi divergences. In particular, we show that other attempts at regularizing Rényi divergences fail to inherit important properties from the Γ-IPM. More details on the computations can be found in Appendix C.
Infimal convolution and scaling limits: First we present a simple example that illustrates the infimal convolution formula and limiting properties from Sections 3 and 4. Let $P = \delta_0$, $Q_{x,c} = c\delta_0 + (1-c)\delta_x$ for $c\in(0,1)$, $x>0$, and let $\Gamma = \mathrm{Lip}_1$. Then for L > 0 one can compute
$$R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}(1-c)Lx, & 0<\alpha Lx<1\\ \alpha^{-1} - cLx + \alpha^{-1}\log(\alpha Lx), & 1\le\alpha Lx\le 1/c\\ \alpha^{-1}\log(1/c), & \alpha Lx > 1/c.\end{cases}\qquad (19)$$
In particular, it is straightforward to show that $R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) \le (1-c)Lx = W^{L\Gamma}(Q_{x,c},P)$, $\lim_{x\to 0^+}R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \lim_{x\to 0^+}(1-c)Lx = 0$, and $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \alpha^{-1}\log(1/c) = R_\alpha(P\|Q_{x,c})$. We can also rewrite this in terms of the solution to the infimal convolution problem and take the worst-case-regret scaling limit as follows
$$R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}W^{L\Gamma}(Q_{x,c},P), & 0<\alpha Lx<1\\ R_\alpha(P\|Q_{x,1/(\alpha Lx)}) + W^{L\Gamma}(Q_{x,c},Q_{x,1/(\alpha Lx)}), & 1\le\alpha Lx\le 1/c\\ R_\alpha(P\|Q_{x,c}), & \alpha Lx > 1/c,\end{cases}$$
$$\lim_{\alpha\to\infty}\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}W^\Gamma(Q_{x,c},P), & 0<x<1\\ D_\infty(P\|Q_{x,1/x}) + W^\Gamma(Q_{x,c},Q_{x,1/x}), & 1\le x\le 1/c\\ D_\infty(P\|Q_{x,c}), & x>1/c.\end{cases}\qquad (20)$$
Γ-Rényi-Donsker-Varadhan counterexample: As an alternative to Definition 3.1, one can attempt to regularize the Rényi divergences by restricting the test-function space in the variational representation (3), leading to the Γ-Rényi-Donsker-Varadhan (Γ-Rényi-DV) divergences
$$R^{\Gamma,DV}_\alpha(P\|Q) := \sup_{\phi\in\Gamma}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\}.\qquad (21)$$
The bound $\log\int e^{c\phi}\,dP \ge c\int\phi\,dP$ for all $\phi\in\Gamma$, $c\in\mathbb{R}$ implies that $R^{\Gamma,DV}_\alpha \le W^\Gamma$ for $\alpha\in(0,1)$, making (21) a useful regularization of the Rényi divergences in this case; this utility was demonstrated in Pantazis et al. (2022), where it was used to construct GANs. However, estimators built from the representation (21) (i.e., replacing P and Q by empirical measures) are known to be numerically unstable when α > 1. Below we provide a counterexample showing that, unlike for the IC-Γ-Rényi divergences, $R^{\Gamma,DV}_\alpha \not\le W^\Gamma$ in general when α > 1. We conjecture that this is a key reason for the instability of Γ-Rényi-Donsker-Varadhan estimators when α > 1.
Let $P_{x,c} = c\delta_0 + (1-c)\delta_x$, $Q = \delta_0$ for $x>0$, $c\in(0,1)$ and $\Gamma_L = \mathrm{Lip}_L$. Then for α > 1 we have $R^{\Gamma_L,DV}_\alpha(P_{x,c}\|Q) = \frac{1}{\alpha-1}\log\left(c + (1-c)\exp((\alpha-1)Lx)\right)$ and $W^{\Gamma_L}(P_{x,c},Q) = (1-c)Lx$. Using strict concavity of the logarithm one can then obtain the bound
$$R^{\Gamma_L,DV}_\alpha(P_{x,c}\|Q) > W^{\Gamma_L}(P_{x,c},Q).\qquad (22)$$
This shows that, when α > 1, Γ-Rényi-DV violates the key property that allows the IC-Γ-Rényi divergences to inherit properties from the corresponding Γ-IPM. Another alternative is to begin with (3) and then reduce the test-function space to $\frac{1}{\alpha}\log(\Gamma)$, where the logarithm is introduced to eliminate the exponential functions in (21). However, this definition also fails to provide an appropriate regularized Rényi divergence; in particular, it is incapable of meaningfully comparing Dirac distributions. See Appendix C.3 for details. These counterexamples lend further credence to our infimal-convolution based regularization approach (7).
6 NUMERICAL EXPERIMENTS
In this section we present numerical examples that demonstrate the use of the IC-Γ-Rényi divergences for both estimation and training of GANs (additional examples can be found in Appendix D). All of the divergences considered in this paper have a variational representation of the form $D(P\|Q) = \sup_{g\in\Gamma}H[g;P,Q]$ for some objective functional H; we use the corresponding estimator
$$\hat{D}_n(P\|Q) := \sup_{\theta\in\Theta} H[g_\theta; P_n, Q_n]\qquad (23)$$
where $P_n$, $Q_n$ are n-sample empirical measures and $g_\theta$ is a family of neural networks (NNs) with parameters $\theta\in\Theta$. For Lipschitz function spaces we weaken the Lipschitz constraint to a soft one-sided gradient penalty (see Section 4.1 of Birrell et al. (2022a)). Optimization is performed using the Adam optimizer Kingma & Ba (2014). For the infimal-convolution divergences we enforce negativity of the test functions (i.e., discriminators) using a final layer of one of the following forms: 1) $-\mathrm{abs}(x)$, or 2) $-(1/(1-x)\,1_{x<0} + (1+x)\,1_{x\ge 0})$. The latter, which we term poly-softplus, is $C^1$ and decays like $O(x^{-1})$ as $x\to-\infty$.
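A PyTorch sketch of the poly-softplus final layer (the class name is ours; we clamp before dividing so autograd never differentiates through the singular point of the unselected branch of torch.where):

```python
import torch

class NegPolySoftplus(torch.nn.Module):
    """Poly-softplus final layer: x -> -1/(1-x) for x < 0 and -(1+x) for
    x >= 0 (class name is ours). The output is strictly negative, C^1,
    and decays like O(1/|x|) as x -> -infinity."""
    def forward(self, x):
        neg = torch.clamp(x, max=0.0)  # keeps 1/(1-x) away from x = 1
        return -torch.where(x < 0, 1.0 / (1.0 - neg), 1.0 + x)
```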
6.1 VARIANCE OF RÉNYI ESTIMATORS
As a first example, we compare estimators of the classical Rényi divergences (i.e., without regularization) constructed from DV-Rényi (3) and CC-Rényi (4) in a simple case where the exact Rényi divergence is known. We let Q and P be 1000-dimensional Gaussians with equal variance and study $R_\alpha(P\|Q)$ as a function of the separation between their means. The results are shown in Figure 1. We see that the estimator based on the convex-conjugate Rényi variational formula (4) has smaller variance and mean-squared error (MSE) than the Rényi-Donsker-Varadhan variational formula (3), with the difference becoming very large when α ≫ 1 or when P and Q are far apart (i.e., when $\mu_q$ is large). The Rényi-Donsker-Varadhan estimator only works well when $\mu_q$ and α are both not too large, but even in such cases the convex-conjugate Rényi estimator generally performs better. We conjecture that this difference is due to the presence of risk-sensitive terms in (3) which were eliminated in the new representation (4). We note that the NN for the convex-conjugate Rényi estimator used the poly-softplus final layer, as we found the $-\mathrm{abs}$ final layer to result in a significant percentage of failed runs (i.e., NaN outputs); this issue did not arise when using poly-softplus. We do not show results for either DV-WCR or CC-WCR here, as the exact divergence is infinite in this example.
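For reproducibility, the ground truth in this experiment has a simple closed form: for Gaussians with equal covariance $\sigma^2 I$, the standard Rényi divergence is $(\alpha/2)\|\Delta\mu\|^2/\sigma^2$, so under this paper's $1/(\alpha(\alpha-1))$ normalization $R_\alpha = \|\Delta\mu\|^2/(2\sigma^2)$, independent of α (our derivation from the well-known Gaussian formula):

```python
import numpy as np

def renyi_gaussian_equal_cov(mu_p, mu_q, sigma2=1.0):
    """Exact R_alpha(P||Q) for Gaussians with equal covariance sigma2*I under
    this paper's 1/(alpha*(alpha-1)) normalization (our derivation from the
    standard closed form; independent of alpha in this equal-covariance case)."""
    dmu = np.asarray(mu_p, float) - np.asarray(mu_q, float)
    return 0.5 * float(dmu @ dmu) / sigma2
```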
6.2 DETECTION OF RARE SUB-POPULATIONS IN SINGLE-CELL BIOLOGICAL DATASETS
A critical task in cancer assessment is the detection of rare sub-populations subsumed in the overall population of cells. The advent of affordable flow and mass cytometry technologies that perform single-cell measurements opens a new direction for the analysis and comparison of high-dimensional cell distributions Shahi et al. (2017) via divergence estimation. We consider single-cell mass cytometry measurements on 16 bone marrow protein markers (d = 16) coming from healthy individuals and from individuals with acute myeloid leukemia Levine et al. (2015). Following Weber et al. (2019), we create two datasets: one with only healthy samples and another with a decreasing percentage of diseased cells, and compute several divergences. Considering the estimated divergence value as the score of a binary classifier, we compute the ROC curve and the respective area under the ROC curve (AUC) for any pair of sample distributions. More specifically, true negatives correspond to the divergence values between two healthy datasets, while true positives correspond to the divergence between a healthy and a diseased dataset. Thus, the AUC is 1.0 when the divergence estimates are completely separable, while the AUC is 0.5 when they completely overlap. Table 1 reports the AUC values for the scaled IC-Γ-Rényi divergences (16), various levels of rarity, and two sample sizes for the datasets. The best performance in the Rényi family is obtained for α = ∞ using the IC-Γ-WCR variational formula (18). IC-Γ-WCR also outperforms the first-order Wasserstein distance in both sample-size regimes.
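The AUC computation from divergence scores is standard; a sketch with scikit-learn, where the score arrays are placeholders for the estimated divergence values:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Healthy-vs-healthy divergences are negatives, healthy-vs-diseased are
# positives (the values below are placeholders for estimated divergences).
neg_scores = np.array([0.11, 0.09, 0.13])   # healthy vs healthy
pos_scores = np.array([0.35, 0.28, 0.41])   # healthy vs diseased
labels = np.r_[np.zeros(len(neg_scores)), np.ones(len(pos_scores))]
print(roc_auc_score(labels, np.r_[neg_scores, pos_scores]))
```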
6.3 IC-Γ-RÉNYI GANS
Finally, we study a pair of GAN examples (the second example is presented in Appendix D). Here the goal is to learn a distribution P using a family of generator distributions $Q_\psi \sim h_\psi(X)$, where X is a noise source and $h_\psi$ is a family of neural networks parametrized by $\psi\in\Psi$; i.e., the goal is to solve
$$\inf_{\psi\in\Psi}\hat{D}_n(P\|Q_\psi),\qquad (24)$$
where $\hat{D}_n$ is a divergence estimator of the form (23). In particular, we will study the GANs constructed from the newly introduced IC-Γ-Rényi and IC-Γ-WCR divergences and compare them with Wasserstein GAN Gulrajani et al. (2017); Arjovsky et al. (2017).
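Schematically, training alternates maximization of the variational objective over the discriminator (test function) with minimization over the generator; a PyTorch sketch (all names are ours, and `objective` stands for a sample-based objective such as the one in (11)):

```python
import torch

def ic_gan_step(gen, disc, x_real, noise, opt_g, opt_d, objective, alpha):
    """One alternating update for (24): the discriminator ascends the
    variational objective, the generator descends it (a schematic sketch
    with our names; `objective` is assumed to compute a sample-based IC
    divergence objective from real and generated batches)."""
    # Discriminator update: maximize the divergence lower bound.
    opt_d.zero_grad()
    d_loss = -objective(disc, x_real, gen(noise).detach(), alpha)
    d_loss.backward()
    opt_d.step()
    # Generator update: minimize the estimated divergence.
    opt_g.zero_grad()
    g_loss = objective(disc, x_real, gen(noise), alpha)
    g_loss.backward()
    opt_g.step()
    return g_loss.item()
```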
6.3.1 CIFAR-10
In Figure 2 we demonstrate improved performance of the IC-Γ-Rényi and IC-Γ-WCR GANs, as compared to Wasserstein GAN with gradient penalty (WGAN-GP), on the CIFAR-10 dataset Krizhevsky et al. (2009). The IC GANs also outperform the Rényi-DV GAN (21), as the latter is highly unstable when α > 1, and so the training generally encounters NaN values after a small number of training epochs (hence we omit those results from the figure). We use the same ResNet neural network architecture as in (Gulrajani et al., 2017, Appendix F) and focus on evaluating the effect of different divergences. Here we let Γ be the set of 1-Lipschitz functions, implemented via a gradient penalty. Note that $D^{\Gamma,IC}_\infty$ performs significantly better than $R^{\Gamma,IC}_\alpha$ with large α, and the rescaled $D^{\Gamma,IC}_\alpha$-GAN performs better than the $R^{\Gamma,IC}_\alpha$-GAN when α is large.
ACKNOWLEDGMENTS
The research of J.B., M.K. and L.R.-B. was partially supported by the Air Force Office of Scientific Research (AFOSR) under the grant FA9550-21-1-0354. The research of M. K. and L.R.-B. was partially supported by the National Science Foundation (NSF) under the grants DMS-2008970 and TRIPODS CISE-1934846. The research of P.D. was partially supported by the NSF under the grant DMS-1904992 and by the AFOSR under the grant FA-9550-21-1-0354. The work of Y.P. was partially supported by the Hellenic Foundation for Research and Innovation (HFRI) through the “Second Call for HFRI Research Projects to support Faculty Members and Researchers” under Project 4753. This work was performed in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative.
A DERIVATION OF VARIATIONAL FORMULAS FOR THE CLASSICAL RÉNYI DIVERGENCES
In this appendix we provide several variational formulas for the classical Rényi divergences, some of which are new. In the following we let (Ω, M) denote a measurable space, M(Ω) the space of measurable real-valued functions on Ω, $\mathcal{M}_b(\Omega)$ the subspace of bounded functions, and P(Ω) the space of probability measures on Ω.
First we recall the Rényi-Donsker-Varadhan variational formula derived in Birrell et al. (2021). This is a generalization of the Donsker-Varadhan variational representation of the KL divergence.
Theorem A.1 (Rényi-Donsker-Varadhan Variational Formula). Let P and Q be probability measures on (Ω,M) and α ∈ R, α ̸= 0, 1. Then for any set of functions, Φ, with Mb(Ω) ⊂ Φ ⊂ M(Ω) we have
$$R_\alpha(P\|Q) = \sup_{\phi\in\Phi}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\},\qquad (25)$$
where we interpret ∞−∞ ≡ −∞ and −∞+∞ ≡ −∞.
If in addition (Ω,M) is a metric space with the Borel σ-algebra then (25) holds for all Φ that
satisfy Lipb(Ω) ⊂ Φ ⊂ M(Ω), where Lipb(Ω) is the space of bounded Lipschitz functions on Ω (i.e., Lipschitz for any Lipschitz constant L ∈ (0,∞)).
Using Theorem A.1 we can derive a new variational representation that takes the form of a convex conjugate. Theorem A.2 (Convex-Conjugate Rényi Variational Formula). Let P,Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
$$R_\alpha(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (26)$$
If (Ω,M) is a metric space with the Borel σ-algebra then (26) holds with Mb(Ω) replaced by Cb(Ω), the space of bounded continuous real-valued functions on Ω.
Proof. Let Φ = {α−1 log(−h) : h ∈ Mb(Ω), h < 0}. We have Mb(Ω) ⊂ Φ ⊂ M(Ω), hence Theorem A.1 implies
$$R_\alpha(P\|Q) = \sup_{\phi\in\Phi}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\}\qquad (27)$$
$$= \sup_{h\in\mathcal{M}_b(\Omega):\,h<0}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)(\alpha^{-1}\log(-h))}\,dP - \frac{1}{\alpha}\log\int e^{\alpha(\alpha^{-1}\log(-h))}\,dQ\right\}$$
$$= \sup_{h\in\mathcal{M}_b(\Omega):\,h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{1}{\alpha}\log\int(-h)\,dQ\right\}.$$
Note that the second term is finite but the first term is possibly infinite when α ∈ (0, 1). Next use the identity
$$\log(c) = \inf_{z\in\mathbb{R}}\{z - 1 + ce^{-z}\},\quad c\in(0,\infty)\qquad (28)$$
in the second term to write
$$R_\alpha(P\|Q) = \sup_{h\in\mathcal{M}_b(\Omega):\,h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{1}{\alpha}\inf_{z\in\mathbb{R}}\left\{z-1+e^{-z}\int(-h)\,dQ\right\}\right\}\qquad (29)$$
$$= \sup_{z\in\mathbb{R}}\ \sup_{h\in\mathcal{M}_b(\Omega):\,h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{z-1}{\alpha} + \alpha^{-1}e^{-z}\int h\,dQ\right\}.$$

For each $z\in\mathbb{R}$ make the change of variables $h = \alpha e^z g$, $g\in\mathcal{M}_b(\Omega)$, $g<0$ in the inner supremum to derive
$$R_\alpha(P\|Q) = \sup_{z\in\mathbb{R}}\ \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\frac{1}{\alpha-1}\log\int|\alpha e^z g|^{(\alpha-1)/\alpha}\,dP - \frac{z-1}{\alpha} + \alpha^{-1}e^{-z}\int\alpha e^z g\,dQ\right\}\qquad (30)$$
$$= \sup_{z\in\mathbb{R}}\ \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1) + \int g\,dQ\right\}$$
$$= \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).$$
This completes the proof of (26). The proof of the metric-space version is nearly identical.
Remark A.3. To reverse the above derivation and obtain (25) (with $\Phi = \{\phi\in\mathcal{M}(\Omega) : \phi\text{ is bounded above}\}$) from (26), change variables $g\mapsto -c\exp(\alpha\phi)$, $\phi\in\Phi$, $c>0$ in (26) and then maximize over c. Corollary A.4. If X is a compact metric space with the Borel σ-algebra, $P,Q\in\mathcal{P}(X)$, and $\alpha\in(0,1)\cup(1,\infty)$ then $C_b(X) = C(X)$ and so
$$R_\alpha(P\|Q) = \sup_{g\in C(X):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (31)$$
Next we derive a variational formula for the worst-case regret, defined by
$$D_\infty(P\|Q) := \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q).\qquad (32)$$
Theorem A.5. Let P,Q ∈ P(Ω). Then
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.\qquad (33)$$
If Ω is a metric space with the Borel σ-algebra then (33) holds with Mb(Ω) replaced by Cb(Ω).
Remark A.6. Note that on a compact metric space, the space of bounded continuous functions is the same as the space of all continuous functions.
Proof. Recall Van Erven & Harremos (2014)
$$D_\infty(P\|Q) = \begin{cases}\log\left(\operatorname{ess\,sup}_P \frac{dP}{dQ}\right), & P\ll Q\\ \infty, & P\not\ll Q.\end{cases}\qquad (34)$$
First suppose $P\not\ll Q$. Then there exists a measurable set A with $Q(A)=0$ and $P(A)>0$. Let $g_n = -n1_A - 1_{A^c}$. Then
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \int g_n\,dQ + \log\int|g_n|\,dP + 1\qquad (35)$$
$$= -nQ(A) - Q(A^c) + \log(nP(A) + P(A^c)) + 1 = \log(nP(A) + P(A^c)) \to\infty\qquad (36)$$
as $n\to\infty$. Therefore
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 = \infty = D_\infty(P\|Q).\qquad (37)$$
Now suppose $P\ll Q$. Using the definition (32) along with Theorem A.2 and changing variables $g = \tilde g/\alpha$ we have
$$D_\infty(P\|Q) = \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q)\qquad (38)$$
$$= \lim_{\alpha\to\infty}\left[\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int\alpha g\,dQ + \frac{\alpha}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + (\log\alpha+1)\right]$$
$$\ge \lim_{\alpha\to\infty}\left[\int\tilde g\,dQ + \frac{\alpha}{\alpha-1}\log\int|\tilde g/\alpha|^{(\alpha-1)/\alpha}\,dP + (\log\alpha+1)\right]$$
$$= \lim_{\alpha\to\infty}\left[\int\tilde g\,dQ + \frac{\alpha}{\alpha-1}\log\int|\tilde g|^{(\alpha-1)/\alpha}\,dP + 1\right]$$
$$= \int\tilde g\,dQ + \log\int|\tilde g|\,dP + 1$$
for all $\tilde g\in\mathcal{M}_b(\Omega)$, $\tilde g<0$.
Here we used the dominated convergence theorem to evaluate the limit. Hence, by maximizing over g̃ we obtain
$$D_\infty(P\|Q) \ge \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.\qquad (39)$$
To prove the reverse inequality, take any $r\in(0,\operatorname{ess\,sup}_P dP/dQ)$. By definition of the essential supremum we have $P(dP/dQ > r) > 0$. We also have the bound
$$P(dP/dQ > r) = \int 1_{dP/dQ>r}\,\frac{dP}{dQ}\,dQ \ge \int 1_{dP/dQ>r}\,r\,dQ = rQ(dP/dQ > r).\qquad (40)$$
For $c,\epsilon>0$ define $g_{c,\epsilon} = -c1_{dP/dQ>r} - \epsilon$. These satisfy $g_{c,\epsilon}\in\mathcal{M}_b(\Omega)$, $g_{c,\epsilon}<0$ and so
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \int g_{c,\epsilon}\,dQ + \log\int|g_{c,\epsilon}|\,dP + 1\qquad (41)$$
$$= -cQ(dP/dQ>r) - \epsilon + \log(cP(dP/dQ>r) + \epsilon) + 1$$
$$\ge -cP(dP/dQ>r)/r - \epsilon + \log(cP(dP/dQ>r) + \epsilon) + 1.$$
Letting $\epsilon\to 0^+$ we find
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge -cP(dP/dQ>r)/r + \log(cP(dP/dQ>r)) + 1\qquad (42)$$
for all c > 0. We have $P(dP/dQ>r)>0$, hence by maximizing over c > 0 and changing variables to $z = cP(dP/dQ>r)$ we obtain
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \sup_{z>0}\{-z/r + \log(z) + 1\} = \log(r).\qquad (43)$$
This holds for all $r < \operatorname{ess\,sup}_P dP/dQ$, therefore we can take $r\nearrow\operatorname{ess\,sup}_P dP/dQ$ and use (34) to conclude
$$\sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \log(\operatorname{ess\,sup}_P dP/dQ) = D_\infty(P\|Q).\qquad (44)$$
Combining this with (39) completes the proof of (33).
Now suppose Ω is a metric space. We clearly have
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1\qquad (45)$$
$$\ge \sup_{g\in C_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.$$
To prove the reverse inequality, take any $g\in\mathcal{M}_b(\Omega)$ with g < 0. By Lusin's theorem, for all $\epsilon>0$ there exists a closed set $E_\epsilon$ and $h_\epsilon\in C_b(\Omega)$ such that $P(E^c_\epsilon)\le\epsilon$, $Q(E^c_\epsilon)\le\epsilon$, $h_\epsilon|_{E_\epsilon} = g$, and $\inf g \le h_\epsilon \le 0$. Define $g_\epsilon = h_\epsilon - \epsilon$. Then $g_\epsilon<0$, $g_\epsilon\in C_b(\Omega)$ and we have
$$\sup_{g\in C_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} \ge \int g_\epsilon\,dQ + \log\int|g_\epsilon|\,dP\qquad (46)$$
$$= \int g\,dQ + \int(h_\epsilon - g)1_{E^c_\epsilon}\,dQ - \epsilon + \log\left(\int|g|\,dP + \int(|h_\epsilon| - |g|)1_{E^c_\epsilon}\,dP + \epsilon\right)$$
$$\ge \int g\,dQ - (\sup g - \inf g)Q(E^c_\epsilon) - \epsilon + \log\left(\int|g|\,dP + \inf g\,P(E^c_\epsilon) + \epsilon\right)$$
$$\ge \int g\,dQ - (\sup g - \inf g)\epsilon - \epsilon + \log\left(\int|g|\,dP + \inf g\,\epsilon + \epsilon\right).$$
Taking the limit $\epsilon\to 0^+$ we therefore obtain
$$\sup_{g\in C_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} \ge \int g\,dQ + \log\int|g|\,dP.\qquad (47)$$
This holds for all g ∈ Mb(Ω) with g < 0, hence by taking the supremum over g we obtain the reverse inequality to (45). This completes the proof.
B PROOFS
In this appendix we provide a number of proofs that were omitted from the main text. Recall that X denotes a compact metric space.
Lemma B.1. Let $\Gamma\subset C(X)$ and $P,Q\in\mathcal{P}(X)$. Then $\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is non-decreasing in $\alpha\in(0,1)\cup(1,\infty)$. If $0\in\Gamma$ then $\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is also non-decreasing.
Proof. If $0\in\Gamma$ then $W^\Gamma\ge 0$, hence
$$\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}\qquad (48)$$
where both $\alpha\mapsto\alpha R_\alpha(P\|\eta)$ and $\alpha\mapsto\alpha W^\Gamma(Q,\eta)$ are non-decreasing. Therefore the infimum is as well. The proof for $\alpha\mapsto\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is similar, though it doesn't require the assumption $0\in\Gamma$ due to the identity $\alpha W^{\Gamma/\alpha} = W^\Gamma$.
Next we prove a key lemma that is used in our main result. First recall the definition
$$\Lambda^P_\alpha[g] := \infty\,1_{g\not<0} - \left(\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)\right)1_{g<0},\quad g\in C(X).\qquad (49)$$
Lemma B.2. $\Lambda^P_\alpha$ is convex and is continuous on $\{g\in C(X) : g<0\}$, an open subset of C(X).
Proof. First we prove convexity. Let $g_0, g_1\in\{g\in C(X) : g<0\}$ and $\lambda\in(0,1)$. For $\alpha\in(0,1)$ we can use the inequality $\lambda a + (1-\lambda)b \ge a^\lambda b^{1-\lambda}$ for all $a,b>0$ to compute
$$-\frac{1}{\alpha-1}\log\int|\lambda g_1 + (1-\lambda)g_0|^{(\alpha-1)/\alpha}\,dP \le -\frac{1}{\alpha-1}\log\int\left(|g_1|^\lambda|g_0|^{1-\lambda}\right)^{(\alpha-1)/\alpha}\,dP.\qquad (50)$$
Using Hölder's inequality with exponents $p = 1/\lambda$, $q = 1/(1-\lambda)$ we then obtain
$$-\frac{1}{\alpha-1}\log\int\left(|g_1|^\lambda|g_0|^{1-\lambda}\right)^{(\alpha-1)/\alpha}\,dP\qquad (51)$$
$$\le -\frac{1}{\alpha-1}\log\left(\left(\int|g_1|^{(\alpha-1)/\alpha}\,dP\right)^\lambda\left(\int|g_0|^{(\alpha-1)/\alpha}\,dP\right)^{1-\lambda}\right)$$
$$= \lambda\left(-\frac{1}{\alpha-1}\log\int|g_1|^{(\alpha-1)/\alpha}\,dP\right) + (1-\lambda)\left(-\frac{1}{\alpha-1}\log\int|g_0|^{(\alpha-1)/\alpha}\,dP\right).$$
Therefore $g\mapsto -\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP$ is convex on $\{g<0\}$. This proves $\Lambda^P_\alpha$ is convex when $\alpha\in(0,1)$.
Now suppose α > 1. The map $t>0$, $t\mapsto t^{(\alpha-1)/\alpha}$ is concave and $-\log$ is decreasing and convex, hence
$$-\frac{1}{\alpha-1}\log\int|\lambda g_1 + (1-\lambda)g_0|^{(\alpha-1)/\alpha}\,dP\qquad (52)$$
$$\le -\frac{1}{\alpha-1}\log\left(\lambda\int|g_1|^{(\alpha-1)/\alpha}\,dP + (1-\lambda)\int|g_0|^{(\alpha-1)/\alpha}\,dP\right)$$
$$\le \lambda\left(-\frac{1}{\alpha-1}\log\int|g_1|^{(\alpha-1)/\alpha}\,dP\right) + (1-\lambda)\left(-\frac{1}{\alpha-1}\log\int|g_0|^{(\alpha-1)/\alpha}\,dP\right).$$
This proves that $\Lambda^P_\alpha$ is also convex when α > 1. Openness of $\{g<0\}$ follows from the assumption that X is compact, and so any strictly negative continuous function is strictly bounded away from zero. Continuity on $\{g<0\}$ then follows from the dominated convergence theorem.
Now we prove our main theorem, deriving the dual variational formula and other important properties of the IC-Γ-Rényi divergences. Theorem B.3. Let $\Gamma\subset C(X)$ be admissible, $P,Q\in\mathcal{P}(X)$, and $\alpha\in(0,1)\cup(1,\infty)$. Then:

1. $$R^{\Gamma,IC}_\alpha(P\|Q) = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (53)$$

2. If (53) is finite then there exists $\eta_*\in\mathcal{P}(X)$ such that
$$R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*).\qquad (54)$$

3. $R^{\Gamma,IC}_\alpha(P\|Q)$ is convex in Q. If $\alpha\in(0,1)$ then $R^{\Gamma,IC}_\alpha(P\|Q)$ is jointly convex in (P,Q).

4. $(P,Q)\mapsto R^{\Gamma,IC}_\alpha(P\|Q)$ is lower semicontinuous.

5. $R^{\Gamma,IC}_\alpha(P\|Q)\ge 0$ with equality if P = Q.

6. $R^{\Gamma,IC}_\alpha(P\|Q)\le\min\{R_\alpha(P\|Q),\,W^\Gamma(Q,P)\}$.

7. If Γ is strictly admissible then $R^{\Gamma,IC}_\alpha$ has the divergence property.
Proof. 1. Define $F,G : C(X)\to(-\infty,\infty]$ by $F = \Lambda^P_\alpha$ and $G[g] = \infty 1_{g\notin\Gamma} - E_Q[g]$. Using the assumptions on Γ along with Lemma B.2 we see that F and G are convex, $F[-1]<\infty$, $G[-1]<\infty$, and F is continuous at −1. Therefore Fenchel-Rockafellar duality (see, e.g., Theorem 4.4.3 in Borwein & Zhu (2006)) along with the identity $C(X)^* = M(X)$ gives
$$\sup_{g\in C(X)}\{-F[g] - G[g]\} = \inf_{\eta\in M(X)}\{F^*[\eta] + G^*[-\eta]\},\qquad (55)$$
and if either side is finite then the infimum on the right hand side is achieved at some $\eta_*\in M(X)$. Using the definitions, we can rewrite the left hand side as follows
$$\sup_{g\in C(X)}\{-F[g] - G[g]\} = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (56)$$
We can also compute
$$G^*[-\eta] = \sup_{g\in C(X)}\left\{-\int g\,d\eta - (\infty 1_{g\notin\Gamma} - E_Q[g])\right\} = W^\Gamma(Q,\eta).\qquad (57)$$
Therefore
$$\inf_{\eta\in M(X)}\{(\Lambda^P_\alpha)^*[\eta] + W^\Gamma(Q,\eta)\} = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).\qquad (58)$$
Next we show that the infimum over M(X) can be restricted to $\mathcal{P}(X)$. First suppose $\eta\in M(X)$ with $\eta(X)\ne 1$. Then, using the assumption that Γ contains the constant functions, we have
$$W^\Gamma(Q,\eta) \ge E_Q[\pm n] - \int\pm n\,d\eta = \pm n(1-\eta(X))\to\infty\qquad (59)$$
as $n\to\infty$ (for an appropriate choice of sign). Therefore $W^\Gamma(Q,\eta) = \infty$ if $\eta(X)\ne 1$. This implies that the infimum can be restricted to $\{\eta\in M(X) : \eta(X)=1\}$. Now suppose $\eta\in M(X)$ is not positive. Take a measurable set A with $\eta(A)<0$. By Lusin's theorem, for all $\epsilon>0$ there exists a closed set $E_\epsilon\subset X$ and a continuous function $g_\epsilon\in C(X)$ such that $|\eta|(E^c_\epsilon)<\epsilon$, $0\le g_\epsilon\le 1$, and $g_\epsilon|_{E_\epsilon} = 1_A$. Define $g_{n,\epsilon} = -ng_\epsilon - 1$, $n\in\mathbb{Z}^+$. Then $g_{n,\epsilon}\in\{g\in C(X) : g<0\}$, hence
$$(\Lambda^P_\alpha)^*[\eta] \ge \int g_{n,\epsilon}\,d\eta + \frac{1}{\alpha-1}\log\int|g_{n,\epsilon}|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)\qquad (60)$$
$$= -n\eta(A) + n\eta(A\cap E^c_\epsilon) - n\int g_\epsilon 1_{E^c_\epsilon}\,d\eta - \eta(X) + \frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)$$
$$\ge n(|\eta(A)| - 2\epsilon) - \eta(X) + \frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1).$$
If α > 1 then $\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP \ge 0$ and if $\alpha\in(0,1)$ then $\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP \le 0$. In either case we have $\frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP \ge 0$ and so
$$(\Lambda^P_\alpha)^*[\eta] \ge n(|\eta(A)| - 2\epsilon) - \eta(X) + \alpha^{-1}(\log\alpha+1).\qquad (61)$$
By choosing $\epsilon < |\eta(A)|/2$ and taking $n\to\infty$ we see that $(\Lambda^P_\alpha)^*[\eta] = \infty$ whenever $\eta\in M(X)$ is not positive. Therefore the infimum can further be restricted to positive measures. Combining these results we find
$$\sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1) = \inf_{\eta\in\mathcal{P}(X)}\{(\Lambda^P_\alpha)^*[\eta] + W^\Gamma(Q,\eta)\}.\qquad (62)$$
For $\eta\in\mathcal{P}(X)$, equation (10) implies $(\Lambda^P_\alpha)^*[\eta] = R_\alpha(P\|\eta)$. This completes the proof.
2. The existence of a minimizer follows from Fenchel-Rockafellar duality; again, see Theorem 4.4.3 in Borwein & Zhu (2006).
3. This follows from (53) together with the fact that the supremum of convex functions is convex and $y\mapsto\frac{1}{\alpha-1}\log(y)$ is convex when $\alpha\in(0,1)$.
4. Compactness of X implies that g and $|g|^{(\alpha-1)/\alpha}$ are bounded and continuous whenever $g\in\Gamma$ satisfies g < 0. Therefore $Q\mapsto\int g\,dQ$ and $P\mapsto\int|g|^{(\alpha-1)/\alpha}\,dP$ are continuous in the weak topology on $\mathcal{P}(X)$. Therefore the objective functional in (53) is continuous in (P,Q). The supremum is therefore lower semicontinuous.
5. This easily follows from the definition (7).
6. $R_\alpha$ is a divergence, hence non-negative. Γ contains the constant functions, hence $W^\Gamma\ge 0$. Therefore $R^{\Gamma,IC}_\alpha\ge 0$. If Q = P then $0\le R^{\Gamma,IC}_\alpha(P\|Q)\le R_\alpha(P\|P) + W^\Gamma(P,P) = 0$, hence $R^{\Gamma,IC}_\alpha(P\|Q) = 0$.
7. Suppose Γ is strictly admissible. Due to part 5 of this theorem, we only need to show that if $R^{\Gamma,IC}_\alpha(P\|Q) = 0$ then P = Q. If $R^{\Gamma,IC}_\alpha(P\|Q) = 0$ then part 2 implies there exists $\eta_*\in\mathcal{P}(X)$ such that
$$0 = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*).\qquad (63)$$
Both terms are non-negative, hence $R_\alpha(P\|\eta_*) = 0 = W^\Gamma(Q,\eta_*)$. $R_\alpha$ has the divergence property, hence $\eta_* = P$. So $W^\Gamma(Q,P) = 0$. Therefore $0\ge\int g\,dQ - \int g\,dP$ for all $g\in\Gamma$. Let Ψ be as in the definition of strict admissibility and let $\psi\in\Psi$. There exists $c\in\mathbb{R}$, $\epsilon>0$ such that $c\pm\epsilon\psi\in\Gamma$ and so $0\ge\pm\epsilon(\int\psi\,dQ - \int\psi\,dP)$. Therefore $\int\psi\,dQ = \int\psi\,dP$ for all $\psi\in\Psi$. Ψ is $\mathcal{P}(X)$-determining, hence Q = P.
Remark B.4. The Fenchel-Rockafellar Theorem applies under two different sets of assumptions: the first assumes both mappings are lower semicontinuous (LSC) while the second applies when one mapping is continuous at a point where both are finite. The mapping ΛPα , as defined by (49) and
appearing in (10), is not LSC but it is continuous on its domain, hence we used the second version of Fenchel-Rockafellar in our proof of Theorem B.3. For α > 1 one could alternatively redefine ΛPα along the boundary of {g < 0} to make it LSC while still maintaining the relation (10) and thereby utilize the first version of Fenchel-Rockafellar. This alternative approach is also amenable to extending the theorem to non-compact spaces, using the methods from Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a). However, these methods do not apply to α ∈ (0, 1). With this in mind, in order to provide a simple unified treatment of all α ∈ (0, 1) ∪ (1,∞) we structured our proof around the second version of Fenchel-Rockafellar.
Despite the fact that $\Lambda^P_\alpha$ is not LSC, the Fenchel-Rockafellar Theorem does imply that convex duality holds at all points of continuity in the domain, i.e., one has
$$\Lambda^P_\alpha[g] = \sup_{\eta\in M(X)}\left\{\int g\,d\eta - R_\alpha(P\|\eta)\right\}\quad\text{for all } g<0,\qquad (64)$$
but this duality formula doesn't necessarily hold if $g\not<0$. Here, $R_\alpha(P\|\eta)$ for general $\eta\in M(X)$ is defined via the variational formula
$$R_\alpha(P\|\eta) := (\Lambda^P_\alpha)^*[\eta] = \sup_{g\in C(X)}\left\{\int g\,d\eta - \Lambda^P_\alpha[g]\right\}\qquad (65)$$
and one can rewrite this in terms of the classical Rényi divergence as follows
$$R_\alpha(P\|\eta) = \begin{cases}\infty & \text{if }\eta\not\ge 0\text{ or }\eta = 0,\\ R_\alpha\!\left(P\,\middle\|\,\frac{\eta}{\|\eta\|}\right) - \frac{1}{\alpha}\log\|\eta\| & \text{if }\eta\text{ is a nonzero positive measure.}\end{cases}\qquad (66)$$
Next we prove the limiting results from Theorem 4.1. Theorem B.5. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then
$$\lim_{\delta\to 0^+}\frac{1}{\delta}R^{\delta\Gamma,IC}_\alpha(P\|Q) = W^\Gamma(Q,P)\qquad (67)$$
and if Γ is strictly admissible we have
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|Q).\qquad (68)$$
Proof. It is straightforward to show that the scaled function spaces are admissible and $W^{c\Gamma} = cW^\Gamma$ for all c > 0. First we prove (67). From the definition (7) we have
$$\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{\delta^{-1}R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} \le W^\Gamma(Q,P)\qquad (69)$$
and so $\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q)$ is non-increasing in δ. Therefore
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \sup_{\delta>0}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q)\qquad (70)$$
and
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) \le W^\Gamma(Q,P).\qquad (71)$$
We will assume this inequality is strict and derive a contradiction. This assumption, together with (70), implies $R^{\delta\Gamma,IC}_\alpha(P\|Q)<\infty$ for all δ > 0. Part (2) of Theorem 3.4 then implies the existence of $\eta_{*,\delta}\in\mathcal{P}(X)$ such that
$$\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \delta^{-1}R_\alpha(P\|\eta_{*,\delta}) + W^\Gamma(Q,\eta_{*,\delta}) \ge W^\Gamma(Q,\eta_{*,\delta}).\qquad (72)$$
Take a sequence $\delta_n\to 0^+$. We have assumed X is compact, hence $\mathcal{P}(X)$ is also compact and so there exists a weakly convergent subsequence $\eta_{*,\delta_{n_j}}\to\eta_*$. From the variational formulas (25) and (8) we see that $R_\alpha(P\|\cdot)$ and $W^\Gamma(Q,\cdot)$ are lower semicontinuous, hence $\liminf_j W^\Gamma(Q,\eta_{*,\delta_{n_j}}) \ge W^\Gamma(Q,\eta_*)$ and
$$R_\alpha(P\|\eta_*) \le \liminf_j R_\alpha(P\|\eta_{*,\delta_{n_j}}) \le \liminf_j \delta_{n_j}\left(\delta_{n_j}^{-1}R_\alpha(P\|\eta_{*,\delta_{n_j}}) + W^\Gamma(Q,\eta_{*,\delta_{n_j}})\right)\qquad (73)$$
$$= \liminf_j \delta_{n_j}\left(\delta_{n_j}^{-1}R^{\delta_{n_j}\Gamma,IC}_\alpha(P\|Q)\right) = 0,\qquad (74)$$
where the last equality follows from the assumed strictness of the inequality (71). Therefore the divergence property for the classical Rényi divergences implies $R_\alpha(P\|\eta_*) = 0$ and $P = \eta_*$. Combining the above results we obtain
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \lim_{j\to\infty}\delta_{n_j}^{-1}R^{\delta_{n_j}\Gamma,IC}_\alpha(P\|Q) \ge \liminf_j W^\Gamma(Q,\eta_{*,\delta_{n_j}})\qquad (75)$$
$$\ge W^\Gamma(Q,\eta_*) = W^\Gamma(Q,P).$$
This contradicts (71) and therefore we have proven the equality (67).
Now we assume Γ is strictly admissible and will prove (68) via similar reasoning. From the definition (7) we see that
$$R^{L\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + LW^\Gamma(Q,\eta)\} \le R_\alpha(P\|Q)\qquad (76)$$
and $R^{L\Gamma,IC}_\alpha$ is non-decreasing in L. Hence $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = \sup_{L>0}R^{L\Gamma,IC}_\alpha(P\|Q)$ and
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) \le R_\alpha(P\|Q).\qquad (77)$$
Suppose this inequality is strict. Then $R^{L\Gamma,IC}_\alpha(P\|Q)<\infty$ for all L and we can use part (2) of Theorem 3.4 to conclude there exists $\eta_{*,L}\in\mathcal{P}(X)$ such that
$$R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|\eta_{*,L}) + LW^\Gamma(Q,\eta_{*,L}).\qquad (78)$$
Take $L_n\to\infty$. Compactness of $\mathcal{P}(X)$ implies the existence of a weakly convergent subsequence $\eta_{*,j} := \eta_{*,L_{n_j}}\to\eta_*\in\mathcal{P}(X)$. Lower semicontinuity of $R_\alpha(P\|\cdot)$ and $W^\Gamma(Q,\cdot)$ imply $\liminf_j R_\alpha(P\|\eta_{*,j})\ge R_\alpha(P\|\eta_*)$ and
$$W^\Gamma(Q,\eta_*) \le \liminf_j W^\Gamma(Q,\eta_{*,j}) = \liminf_j L_{n_j}^{-1}W^{L_{n_j}\Gamma}(Q,\eta_{*,j})\qquad (79)$$
$$\le \liminf_j L_{n_j}^{-1}R^{L_{n_j}\Gamma,IC}_\alpha(P\|Q) = 0,$$
where the last equality follows from the assumed strictness of the inequality (77). Therefore $W^\Gamma(Q,\eta_*) = 0$. Γ is strictly admissible, hence $Q = \eta_*$ (see the proof of part (7) of Theorem 3.4). Combining these results we see that
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = \lim_j R^{L_{n_j}\Gamma,IC}_\alpha(P\|Q) = \lim_j\left(R_\alpha(P\|\eta_{*,L_{n_j}}) + L_{n_j}W^\Gamma(Q,\eta_{*,L_{n_j}})\right)\qquad (80)$$
$$\ge \liminf_j R_\alpha(P\|\eta_{*,j}) \ge R_\alpha(P\|\eta_*) = R_\alpha(P\|Q).$$
This contradicts the assumed strictness of the inequality (77) and hence (77) is an equality. This completes the proof.
Next we prove Theorem 4.2, regarding the $\alpha\to 1$ limit of the IC-Γ-Rényi divergences. Theorem B.6. Let $\Gamma\subset C(X)$ be admissible and $P,Q\in\mathcal{P}(X)$. Then
$$\lim_{\alpha\to 1^+}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\,R_\beta(P\|\eta)<\infty}}\{R(P\|\eta) + W^\Gamma(Q,\eta)\},\qquad (81)$$
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R(P\|\eta) + W^\Gamma(Q,\eta)\}\qquad (82)$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \int\log|g|\,dP\right\} + 1.\qquad (83)$$
Proof. Lemma B.1 implies $\alpha\mapsto\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is non-decreasing on $(1,\infty)$, therefore
$$\lim_{\alpha\to 1^+}\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\alpha>1}\alpha R^{\Gamma,IC}_\alpha(P\|Q)\qquad (84)$$
$$= \inf_{\alpha>1}\inf_{\eta\in\mathcal{P}(X)}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}$$
$$= \inf_{\eta\in\mathcal{P}(X)}\inf_{\alpha>1}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}$$
$$= \inf_{\eta\in\mathcal{P}(X)}\left\{\lim_{\alpha\to 1^+}\alpha R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\right\}.$$
From Van Erven & Harremos (2014) we have
$$\lim_{\alpha\to 1^+}R_\alpha(P\|\eta) = \begin{cases}R(P\|\eta) & \text{if }\exists\beta>1,\ R_\beta(P\|\eta)<\infty\\ \infty & \text{otherwise}\end{cases}\qquad (85)$$
and so we can conclude
$$\lim_{\alpha\to 1^+}R^{\Gamma,IC}_\alpha(P\|Q) = \lim_{\alpha\to 1^+}\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\,R_\beta(P\|\eta)<\infty}}\{R(P\|\eta) + W^\Gamma(Q,\eta)\}.\qquad (86)$$
This proves (81).
Now we compute the limit as $\alpha\to 1^-$. Note that the limit exists due to the fact that $\alpha\mapsto\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is non-decreasing. From the definition (7), for all $\eta\in\mathcal{P}(X)$ we have
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) \le \lim_{\alpha\to 1^-}R_\alpha(P\|\eta) + W^\Gamma(Q,\eta) = R(P\|\eta) + W^\Gamma(Q,\eta).\qquad (87)$$
Here we used the fact that $\lim_{\alpha\to 1^-}R_\alpha(P\|\eta) = R(P\|\eta)$ (see Van Erven & Harremos (2014)). Taking the infimum over η then gives
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) \le \inf_{\eta\in\mathcal{P}(X)}\{R(P\|\eta) + W^\Gamma(Q,\eta)\}.\qquad (88)$$
To prove the reverse inequality, use part 1 of Theorem 3.4 to compute
lim α→1− RΓ,ICα (P∥Q) = lim α→1− αRΓ,ICα (P∥Q) (89)
= lim α→1− sup g∈Γ:g<0
{ α ∫ gdQ+ α
α− 1 log
∫ |g|(α−1)/αdP + logα+ 1 } ≥ ∫ gdQ+ lim
α→1− α α− 1 log
∫ |g|(α−1)/αdP + 1
= ∫ gdQ+ d
dy |y=0 log
∫ ey log |g|dP + 1
= ∫ gdQ+ ∫ log |g|dP + 1
for all g ∈ Γ, g < 0. Therefore, maximizing over g gives
lim α→1− RΓ,ICα (P∥Q) ≥ sup g∈Γ:g<0
{∫ gdQ+ ∫ log |g|dP } + 1 . (90)
We now use Fenchel-Rockafellar duality (Theorem 4.4.3 in Borwein & Zhu (2006)) to compute the dual variational representation of the right hand side of (90). Define F,G : C(X) → (−∞,∞] by F [g] = ∞1g ̸<0 − ∫ log |g|dP and G[g] = ∞1g ̸∈Γ − EQ[g]. It is stratightforward to show that F and G are convex, F [−1] <∞, G[−1] <∞, and F is continuous at −1. Therefore
inf g∈C(X) {F [g] +G[g]} = sup η∈E∗
{−F ∗(−η)−G∗(η)} , (91)
i.e.
sup g∈Γ:g<0
{EQ[g] + ∫
log |g|dP} = inf η∈M(X)
{F ∗(η) +WΓ(Q, η)} , (92)
where F ∗(η) = supg∈C(X):g<0{ ∫ gdη + ∫ log |g|dP}. Now we show the infimum can be restricted to η ∈ P(X): If η(X) ̸= 1 then by taking g = ±n we find
WΓ(Q, η) ≥ n|Q(X)− η(X)| → ∞ (93)
as n→ ∞. Therefore WΓ(Q, η) = ∞ if η(X) ̸= 1. Now suppose η ∈ M(X) is not positive. Take a measurable set A with η(A) < 0. By Lusin’s theorem, for all ϵ > 0 there exists a closed set Eϵ ⊂ X and a continuous function gϵ ∈ C(X)
such that |η|(Ecϵ ) < ϵ, 0 ≤ gϵ ≤ 1, and gϵ|Eϵ = 1A. Define gn,ϵ = −ngϵ − 1, n ∈ Z+. Then gn,ϵ ∈ {g ∈ C(X) : g < 0 | 1. What is the focus and contribution of the paper on regularized Renyi divergences?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its theoretical analysis and application to GANs?
3. Do you have any concerns about the clarity and reproducibility of the paper's content, such as notations, definitions, and explanations?
4. How does the reviewer assess the novelty and distinction of the paper compared to prior works, specifically Birrell et al, JMLR'2022a?
5. What are the questions raised by the reviewer regarding the choice of hyperparameters, computation methods, and the impact of different choices on the results? |
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose a new family of regularized Renyi divergences obtained by infimal convolution with an integral probability metric (IPM). The authors derive its dual variational representation and provide several theoretical results about limits, interpolations, and a regularized worst-case regret. The authors evaluate the proposed divergence on both synthetic and real datasets, and demonstrate its application to GANs.
Strengths And Weaknesses
(*) Strength:
The authors propose a new family of regularized Renyi divergences by infimal convolution with an IPM.
The authors derive several theoretical results for the proposed divergence.
(*) Weaknesses:
It seems that some results in the submission appear in Birrell et al, JMLR'2022a.
It is better to give definitions/explanations for some notations before using them.
It is unclear how to choose the hyperparameters of the proposed divergences (it seems that these parameters play important roles in applications)
Clarity, Quality, Novelty And Reproducibility
It is not easy to follow the ideas in the paper; some more explanation may be useful to improve the flow (i.e., some notations are given without definitions/explanations, which makes it hard for readers to follow the proposed ideas). Overall, I think the proposed divergences and their properties are important. However, the authors may need to elaborate in more detail to improve the flow of the paper and highlight the new contributions of the submission (and distinguish them from results in the literature).
Some concerns are as follows:
Eq. (1) is the Renyi divergence and Eq. (2) is the Renyi-Donsker-Varadhan variational formula. Are these equations equivalent? (If not, the authors should use different notations for the Renyi divergence and its variational formula.) It would be better if the authors elaborated on the relation between (1) and (2).
In Eq. (1), the authors should describe the meaning of the notation P ≪ Q.
It seems that the definition of the infimal-convolution Γ-Renyi divergence appears in Birrell et al, JMLR'2022a. Could the authors elaborate on the relation between these divergences? Are they the same, or what are the key differences between them? It would also be better if the authors distinguished the submission from Birrell et al, JMLR'2022a, and emphasized its novelty.
How do the authors choose α and the Γ space for the proposed divergence in applications, especially the Γ space? How does the choice of Γ affect the proposed method in experiments?
In Section 3, the authors give some important choices for Γ. Could the authors elaborate on how to compute the proposed method for these important cases?
ICLR | Title
Function-space regularized Rényi divergences
Abstract
We propose a new family of regularized Rényi divergences parametrized not only by the order α but also by a variational function space. These new objects are defined by taking the infimal convolution of the standard Rényi divergence with the integral probability metric (IPM) associated with the chosen function space. We derive a novel dual variational representation that can be used to construct numerically tractable divergence estimators. This representation avoids risk-sensitive terms and therefore exhibits lower variance, making it well-behaved when α > 1; this addresses a notable weakness of prior approaches. We prove several properties of these new divergences, showing that they interpolate between the classical Rényi divergences and IPMs. We also study the α→ ∞ limit, which leads to a regularized worst-case-regret and a new variational representation in the classical case. Moreover, we show that the proposed regularized Rényi divergences inherit features from IPMs such as the ability to compare distributions that are not absolutely continuous, e.g., empirical measures and distributions with lowdimensional support. We present numerical results on both synthetic and real datasets, showing the utility of these new divergences in both estimation and GAN training applications; in particular, we demonstrate significantly reduced variance and improved training performance.
1 INTRODUCTION
Rényi divergence, Rényi (1961), is a significant extension of Kullback-Leibler (KL) divergence for numerous applications; see, e.g., Van Erven & Harremos (2014). The recent neural-based estimators for divergences Belghazi et al. (2018) along with generative adversarial networks (GANs) Goodfellow et al. (2014) accelerated the use of divergences in the field of deep learning. The neural-based divergence estimators are feasible through the utilization of variational representation formulas. These formulas are essentially lower bounds (and, occasionally, upper bounds) which are approximated by tractable statistical averages. The estimation of a divergence based on variational formulas is a notoriously difficult problem. Challenges include potentially high bias that may require an exponential number of samples McAllester & Stratos (2020) or the exponential statistical variance for certain variational estimators Song & Ermon (2019), rendering divergence estimation both data inefficient and computationally expensive. This is especially prominent for Rényi divergences with order larger than 1. Indeed, numerical simulations have shown that, unless the distributions P and Q are very close to one another, the Rényi divergence Rα(P∥Q) is almost intractable to estimate when α > 1 due to the high variance of the statistically-approximated risk-sensitive observables Birrell et al. (2021), see also the recent analysis in Lee & Shin (2022). A similar issue has also been observed for the KL divergence, Song & Ermon (2019). Overall, the lack of estimators with low variance for Rényi divergences has prevented wide-spread and accessible experimentation with this class of information-theoretic tools, except in very special cases. We hope our results here will provide a suitable set of tools to address this gap in the methodology.
One approach to variance reduction is the development of new variational formulas. This direction is especially fruitful for the estimation of mutual information van den Oord et al. (2018); Cheng et al. (2020). Another approach is to regularize the divergence by restricting the function space of the variational formula. Indeed, instead of directly attacking the variance issue, the function space of the variational formula can be restricted, for instance, by bounding the test functions or more appropriately by bounding the derivative of the test functions. The latter regularization leads to
Lipschitz continuous function spaces which are also foundational to integral probability metrics (IPMs) and more specifically to the duality property of the Wasserstein metric. In this paper we combine the above two approaches, first deriving a new variational representation of the classical Rényi divergences and then regularizing via an infimal-convolution as follows
$$R^{\Gamma,IC}_\alpha(P\|Q) := \inf_{\eta}\left\{R_\alpha(P\|\eta) + W^{\Gamma}(Q,\eta)\right\}, \tag{1}$$
where P and Q are the probability distributions being compared, the infimum is over the space of probability measures, Rα is the classical Rényi divergence, and WΓ is the IPM corresponding to the chosen regularizing function space, Γ.
The new family of regularized Rényi divergences that are developed here address the risk-sensitivity issue inherent in prior approaches. More specifically, our contributions are as follows.
• We define a new family of function-space regularized Rényi divergences via the infimal convolution operator between the classical Rényi divergence and an arbitrary IPM (1). The new regularized Rényi divergences inherit their function space from the IPM. For instance, they inherit mass transport properties when one regularizes using the 1-Wasserstein metric.
• We derive a dual variational representation (11) of the regularized Rényi divergences which avoids risk-sensitive terms and can therefore be used to construct lower-variance statistical estimators.
• We prove a series of properties for the new object: (a) the divergence property, (b) being bounded by the minimum of the Rényi divergence and IPM, thus allowing for the comparison of non-absolutely continuous distributions, (c) limits as α→ 1 from both left and right, (d) regimes in which the limiting cases Rα(P∥Q) and WΓ(Q,P ) are recovered.
• We propose a rescaled version of the regularized Rényi divergences (16) which lead to a new variational formula for the worst-case regret (i.e., α→ ∞). This new variational formula does not involve the essential supremum of the density ratio as in the classical definition of worst-case regret, thereby avoiding risk-sensitive terms.
• We present a series of illustrative examples and counterexamples that further motivate the proposed definition for the function-space regularized Rényi divergences.
• We present numerical experiments that show (a) that we can estimate the new divergence for large values of the order α without variance issues and (b) train GANs using regularized function spaces.
Related work. The order of Rényi divergence controls the weight put on the tails, with the limiting cases being mode-covering and mode-selection Minka (2005). Rényi divergence estimation is used in a number of applications, including Sajid et al. (2022) (behavioural sciences), Mironov (2017) (differential privacy), and Li & Turner (2016) (variational inference); in the latter the variational formula is an adaptation of the evidence lower bound. Rényi divergences have been also applied in the training of GANs Bhatia et al. (2021) (loss function for binary classification - discrete case) and in Pantazis et al. (2022) (continuous case, based on the Rényi-Donsker-Varahdan variational formula in Birrell et al. (2021)). Rényi divergences with α > 1 are also used in contrastive representation learning, Lee & Shin (2022), as well as in PAC-Bayesian Bounds, Bégin et al. (2016). In the context of uncertainty quantification and sensitivity analysis, Rényi divergences provide confidence bounds for rare events, Atar et al. (2015); Dupuis et al. (2020), with higher rarity corresponding to larger α.
Reducing the variance of divergence estimators through control of the function space have been recently proposed. In Song & Ermon (2019) an explicit bound to the output restricts the divergence values. A systematic theoretical framework on how to regularize through the function space has been developed in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a) for the KL and f -divergences. Despite not covering the Rényi divergence, the theory in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a) and particularly the infimal-convolution formulation clearly inspired the current work. However, adapting the infimal-convolution method to the Rényi divergence setting requires two new technical innovations: (a) We develop a new low-variance convex-conjugate variational formula for the classical Rényi divergence in Theorem 2.1 (see also Fig. 1), allowing us to apply infimalconvolution tools to develop the new Γ-Rényi divergences in Theorem 3.4. (b) We study the α→ ∞ limit of (a) to obtain a new low-variance variational representation of worst-case regret in Theorem 2.2 and study its Γ-regularization in Theorem 4.5.
2 NEW VARIATIONAL REPRESENTATIONS OF CLASSICAL RÉNYI DIVERGENCES
The Rényi divergence of order α ∈ (0, 1) ∪ (1,∞) between P and Q, denoted Rα(P∥Q), can be defined as follows: Let ν be a sigma-finite positive measure with dP = pdν and dQ = qdν. Then
$$R_\alpha(P\|Q) := \begin{cases} \frac{1}{\alpha(\alpha-1)}\log\left[\int_{q>0} p^{\alpha} q^{1-\alpha}\,d\nu\right] & \text{if } 0<\alpha<1, \text{ or } \alpha>1 \text{ and } P\ll Q,\\ +\infty & \text{if } \alpha>1 \text{ and } P\not\ll Q, \end{cases} \tag{2}$$
where P ≪ Q denotes absolute continuity of P with respect to Q. There always exists such a ν (e.g., ν = P +Q) and one can show that the definition (2) does not depend on the choice of ν. The Rα provide a notion of ‘distance’ between P and Q in that they satisfy the divergence property, i.e., they are non-negative and equal zero iff Q = P . The limit of Rα as α approaches 1 or 0 equals the KL or reverse KL divergence respectively Van Erven & Harremos (2014).
An alternative representation of Rα, the so-called Rényi-Donsker-Varadhan variational formula, was derived from (2) in Birrell et al. (2021),
$$R_\alpha(P\|Q) = \sup_{\phi\in\mathcal{M}_b(\Omega)}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\}, \quad P,Q\in\mathcal{P}(\Omega). \tag{3}$$
Here (Ω,M) denotes a measurable space, Mb(Ω) the space of bounded measurable real-valued functions on Ω, and P(Ω) is the space of probability measures on Ω. By a change of variables argument this can be transformed into the following new variational representation; see Theorem A.2 in Appendix A for a proof. We call it the convex-conjugate Rényi variational formula (CC-Rényi). Theorem 2.1 (Convex-Conjugate Rényi Variational Formula). Let P,Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
$$R_\alpha(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int |g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha + 1). \tag{4}$$
If (Ω,M) is a metric space with the Borel σ-algebra then (4) holds with Mb(Ω) replaced by Cb(Ω), the space of bounded continuous real-valued functions on Ω.
The representation (4) is of convex-conjugate type, which will be key in our development of functionspace regularized Rényi divergences. It is also of independent interest as it avoids risk-sensitive terms, unlike (3) which contains cumulant-generating-functions. This makes (4) better behaved in estimation problems, especially when α > 1; see the example in Section 6.1 below.
We also obtain a new variational formula for worst-case regret, as defined by Van Erven & Harremos (2014)
$$D_\infty(P\|Q) := \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q) = \begin{cases}\log\left(\operatorname{ess\,sup}_P\frac{dP}{dQ}\right), & P\ll Q,\\ \infty, & P\not\ll Q.\end{cases} \tag{5}$$
In contrast to (5), which requires estimation of the likelihood ratio, the new variational formula (6) below avoids risk-sensitive terms. Theorem 2.2 (Worst-case Regret Variational Formula). Let $P,Q\in\mathcal{P}(\Omega)$. Then
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1. \tag{6}$$
If Ω is a metric space with the Borel σ-algebra then (6) holds with Mb(Ω) replaced by Cb(Ω).
See Theorem A.5 in Appendix A for a proof. Equation (6) is a new result of independent interest and will also be useful in our study of the α → ∞ limit of the function-space regularized Rényi divergences that we define in the next section. Remark 2.3. Alternative variational formulas for D∞ on a finite alphabet were derived in Kurri et al. (2022).
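To make the role of these representations concrete, the following is a minimal NumPy sketch (ours, not code from the paper) of the sample-based CC-Rényi objective (4) and CC-WCR objective (6), with the expectations replaced by empirical means; the negative test function g used here is a fixed, hypothetical choice, whereas in the experiments of Section 6 it is a neural network.

```python
import numpy as np

def cc_renyi_objective(g, x_p, x_q, alpha):
    # Empirical version of the CC-Renyi objective in Eq. (4); any admissible
    # g must be strictly negative. Larger values give tighter lower bounds
    # on R_alpha(P || Q).
    term_q = np.mean(g(x_q))
    term_p = np.log(np.mean(np.abs(g(x_p)) ** ((alpha - 1.0) / alpha))) / (alpha - 1.0)
    return term_q + term_p + (np.log(alpha) + 1.0) / alpha

def cc_wcr_objective(g, x_p, x_q):
    # Empirical version of the worst-case-regret objective in Eq. (6).
    return np.mean(g(x_q)) + np.log(np.mean(np.abs(g(x_p)))) + 1.0

# Hypothetical example: a fixed negative test function and Gaussian samples.
g = lambda x: -np.exp(-x)
x_p = np.random.normal(0.0, 1.0, size=10_000)
x_q = np.random.normal(1.0, 1.0, size=10_000)
print(cc_renyi_objective(g, x_p, x_q, alpha=2.0), cc_wcr_objective(g, x_p, x_q))
```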
3 PRIMAL AND DUAL FORMULATIONS OF THE INFIMAL-CONVOLUTION Γ-RÉNYI DIVERGENCES
We are now ready to define the function-space regularized Rényi divergences and derive their key properties. In this section, X will denote a compact metric space, P(X) will denote the set of Borel probability measures onX , andC(X) will denote the space of continuous real-valued functions onX . We equip C(X) with the supremum norm and recall that the dual space of C(X) is C(X)∗ =M(X), the space of finite signed Borel measures on X (see the Riesz representation theorem, e.g., Theorem 7.17 in Folland (2013)). Definition 3.1. Given a test-function space Γ ⊂ C(X), we define the infimal-convolution Γ-Rényi divergence (i.e., IC-Γ-Rényi divergence) between P,Q ∈ P(X) by
$$R^{\Gamma,IC}_\alpha(P\|Q) := \inf_{\eta\in\mathcal{P}(X)}\left\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\right\}, \quad \alpha\in(0,1)\cup(1,\infty), \tag{7}$$
where $W^\Gamma$ denotes the $\Gamma$-IPM
$$W^\Gamma(\mu,\nu) := \sup_{g\in\Gamma}\left\{\int g\,d\mu - \int g\,d\nu\right\}, \quad \mu,\nu\in M(X). \tag{8}$$
Remark 3.2. The classical Rényi divergence is convex in its second argument but not in its first when α > 1 Van Erven & Harremos (2014). This is the motivation for defining the IC-Γ-Rényi divergences via an infimal convolution in the second argument of Rα; convex analysis tools will be critical in deriving properties of RΓ,ICα below. For α ∈ (0, 1) one can use the identity Rα(P∥Q) = R1−α(Q∥P ) to rewrite (7) as an infimal convolution in the first argument.
The definition (7) can be thought of as a regularization of the classical Rényi divergence using the Γ-IPM. For computational purposes it is significantly more efficient to have a dual formulation, i.e., a representation of RΓ,ICα in terms of a supremum over a function space. To derive such a representation we begin with the variational formula for Rα from Theorem 2.1. If we define the convex mapping ΛPα : C(X) → (−∞,∞],
$$\Lambda^P_\alpha[g] := \infty\cdot 1_{g\not<0} - \left(\frac{1}{\alpha-1}\log\int |g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)\right)1_{g<0}, \tag{9}$$
then (4) from Theorem 2.1 can be written as a convex conjugate
$$R_\alpha(P\|Q) = (\Lambda^P_\alpha)^*[Q] := \sup_{g\in C(X)}\left\{\int g\,dQ - \Lambda^P_\alpha[g]\right\}. \tag{10}$$
One can then use Fenchel-Rockafellar duality to derive a dual formulation of the IC-Γ-Rényi divergences. To apply this theory we will need to work with spaces of test functions that satisfy the following admissibility properties. These properties are similar to those used in the construction of regularized KL and f -divergences in Dupuis, Paul & Mao, Yixiang (2022) and Birrell et al. (2022a). Definition 3.3. We will call Γ ⊂ C(X) admissible if it is convex and contains the constant functions. We will call an admissible Γ strictly admissible if there exists a P(X)-determining set Ψ ⊂ C(X) such that for all ψ ∈ Ψ there exists c ∈ R, ϵ > 0 such that c ± ϵψ ∈ Γ. Recall that Ψ being P(X)-determining means that for all Q,P ∈ P(X), if ∫ ψdQ = ∫ ψdP for all ψ ∈ Ψ then Q = P .
Putting the above pieces together one obtains the following variational representation. Theorem 3.4. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then:
1. $$R^{\Gamma,IC}_\alpha(P\|Q) = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{11}$$

2. If (11) is finite then there exists $\eta_*\in\mathcal{P}(X)$ such that
$$R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*). \tag{12}$$

3. $R^{\Gamma,IC}_\alpha(P\|Q) \le \min\{R_\alpha(P\|Q),\, W^\Gamma(Q,P)\}$.

4. If $\Gamma$ is strictly admissible then $R^{\Gamma,IC}_\alpha$ has the divergence property.
See Theorem B.3 in Appendix B for detailed proofs of these results as well as several additional properties. We note that there are alternative strategies for proving the variational formula (11) which make different assumptions; further comments on this can be found in Remark B.4. Important examples of strictly admissible Γ include the following:
1. Γ = C(X), which leads to the classical Rényi-divergences.
2. Γ = Lip1(X), i.e. all 1-Lipschitz functions. This regularizes the Rényi divergences via the Wasserstein metric.
3. Γ = {c + g : c ∈ R, g ∈ C(X), |g| ≤ 1}. This regularizes the Rényi divergences via the total-variation metric.
4. Γ = {c+ g : c ∈ R, g ∈ Lip1(X), |g| ≤ 1}. This regularizes the Rényi divergences via the Dudley metric.
5. Γ = {c + g : c ∈ R, g ∈ Y : ∥g∥V ≤ 1}, the unit ball in a RKHS V ⊂ C(X). This regularizes the Rényi divergences via MMD.
In practice, uniform bounds can be implemented using an appropriately chosen final NN layer. Lipschitz bounds can be implemented using spectral normalization of neural networks Miyato et al. (2018), or using a soft gradient penalty Gulrajani et al. (2017). The function space Γ for structurepreserving GANs discussed in the Appendix is implemented using equivariant neural networks, Birrell et al. (2022b). If Γ is a ball in an RKHS space the implementation is carried out using the same tools used in, e.g., MMD distances and divergences, Gretton et al. (2012); Glaser et al. (2021).
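As an illustration of the Lipschitz case, here is a minimal PyTorch sketch (ours, not code from the paper) of a soft one-sided gradient penalty in the spirit of Gulrajani et al. (2017); the test function g, the sample tensors, and the penalty form are assumptions of the sketch.

```python
import torch

def one_sided_gradient_penalty(g, x_p, x_q, lip_const=1.0):
    # Penalizes gradient norms of g that exceed lip_const on random
    # interpolates of P- and Q-samples, softly restricting g to Lip_L.
    eps = torch.rand(x_p.shape[0], *([1] * (x_p.dim() - 1)), device=x_p.device)
    x = (eps * x_p + (1.0 - eps) * x_q).requires_grad_(True)
    grad = torch.autograd.grad(g(x).sum(), x, create_graph=True)[0]
    grad_norm = grad.reshape(grad.shape[0], -1).norm(dim=1)
    return torch.clamp(grad_norm - lip_const, min=0.0).pow(2).mean()
```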
The IC-$\Gamma$-Rényi divergences also satisfy a data processing inequality. See Theorem B.8 in Appendix B for a proof as well as details regarding the notation. Theorem 3.5 (Data Processing Inequality). Let $\alpha\in(0,1)\cup(1,\infty)$, $Q,P\in\mathcal{P}(X)$, and $K$ be a probability kernel from $X$ to $Y$ such that $K[g]\in C(X)$ for all $g\in C(X,Y)$. If $\Gamma\subset C(Y)$ is admissible then $R^{\Gamma,IC}_\alpha(K[P]\|K[Q]) \le R^{K[\Gamma],IC}_\alpha(P\|Q)$. If $\Gamma\subset C(X\times Y)$ is admissible then $R^{\Gamma,IC}_\alpha(P\otimes K\|Q\otimes K) \le R^{K[\Gamma],IC}_\alpha(P\|Q)$.
If K[Γ] is strictly contained in Γ then the bounds in Theorem 3.5 can be strictly tighter than the classical data processing inequality Van Erven & Harremos (2014). Data-processing inequalities are important for constructing symmetry-preserving GANs; see Birrell et al. (2022b) and Section D.1.
4 LIMITS, INTERPOLATIONS, AND REGULARIZED WORST-CASE REGRET
Next we use Theorem 3.4 to compute various limits of the IC-Γ-Rényi divergences. First we show that they interpolate between Rα and WΓ in the following sense (see Theorem B.5 for a proof). Theorem 4.1. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞).
1. $\lim_{\delta\to 0^+}\frac{1}{\delta}R^{\delta\Gamma,IC}_\alpha(P\|Q) = W^\Gamma(Q,P)$.

2. If $\Gamma$ is strictly admissible then $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|Q)$.
Now we discuss the limiting behavior in α. These results generalize several properties of the classical Rényi divergences Van Erven & Harremos (2014). First we consider the α→ 1 limit; see Theorem B.6 for a proof. Theorem 4.2. Let Γ ⊂ C(X) be admissible and P,Q ∈ P(X). Then
$$\lim_{\alpha\to 1^+}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\ R_\beta(P\|\eta)<\infty}}\left\{R(P\|\eta) + W^\Gamma(Q,\eta)\right\}, \tag{13}$$
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\left\{R(P\|\eta) + W^\Gamma(Q,\eta)\right\} \tag{14}$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \int\log|g|\,dP\right\} + 1. \tag{15}$$
Remark 4.3. When Γ = C(X), changing variables to g = − exp(ϕ− 1) transforms (15) into the Legendre-transform variational formula for R(P∥Q); see equation (1) in Birrell et al. (2022c) with f(x) = x log(x). Eq. (14) is an infimal convolution of the reverse KL-divergence, as opposed to the results in Dupuis, Paul & Mao, Yixiang (2022) which apply to the (forward) KL-divergence.
Function-space regularized worst-case regret. Next we investigate the α → ∞ limit of the IC-Γ-Rényi divergences, which will lead to the function-space regularized worst-case regret. First recall that some authors use an alternative definition of the classical Rényi divergences, related to the one used in this paper by Dα(·∥·) := αRα(·∥·). This alternative definition has the useful property of being non-decreasing in α; see Van Erven & Harremos (2014). Appropriately rescaled, the IC-Γ-Rényi divergence also satisfies this property, leading to the following definition. Definition 4.4. For Γ ⊂ C(X), α ∈ (0, 1) ∪ (1,∞) and P,Q ∈ P(X) we define
$$D^{\Gamma,IC}_\alpha(P\|Q) := \alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q). \tag{16}$$
Note that $\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is non-decreasing in $\alpha$; see Lemma B.1 for a proof. We now show that the divergences $D^{\Gamma,IC}_\alpha$ are well behaved in the $\alpha\to\infty$ limit, generalizing (5). Taking this limit provides a definition of function-space regularized worst-case regret, along with the following dual variational representation. Theorem 4.5. Let $\Gamma\subset C(X)$ be admissible and $P,Q\in\mathcal{P}(X)$. Then
$$D^{\Gamma,IC}_\infty(P\|Q) := \lim_{\alpha\to\infty}D^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\left\{D_\infty(P\|\eta) + W^\Gamma(Q,\eta)\right\} \tag{17}$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1. \tag{18}$$
We call DΓ,IC∞ the infimal-convolution Γ-worst-case regret (i.e., IC-Γ-WCR). The method of proof of Theorem 4.5 is similar to that of part (1) of Theorem 3.4; see Theorem B.7 in Appendix B for details. Theorem 4.5 suggests that DΓ,ICα is the appropriate α-scaling to use when α is large and we find this to be the case in practice; see the example in Section 6.3.1.
5 ANALYTICAL EXAMPLES AND COUNTEREXAMPLES
In this section we present several analytical examples and counterexamples that illustrate important properties of the IC-Γ-Rényi divergences and demonstrate weaknesses of other attempts to define regularized Rényi divergences. In particular, we show that other attempts at regularizing Rényi divergences fail to inherit important properties from the Γ-IPM. More details on the computations can be found in Appendix C
Infimal convolution and scaling limits: First we present a simple example that illustrates the infimal convolution formula and limiting properties from Sections 3 and 4. Let $P = \delta_0$, $Q_{x,c} = c\delta_0 + (1-c)\delta_x$ for $c\in(0,1)$, $x>0$, and let $\Gamma = \mathrm{Lip}_1$. Then for $L>0$ one can compute
$$R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}(1-c)Lx, & 0<\alpha Lx<1,\\ \alpha^{-1} - cLx + \alpha^{-1}\log(\alpha Lx), & 1\le \alpha Lx\le 1/c,\\ \alpha^{-1}\log(1/c), & \alpha Lx > 1/c.\end{cases} \tag{19}$$
In particular, it is straightforward to show that $R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) \le (1-c)Lx = W^{L\Gamma}(Q_{x,c},P)$, $\lim_{x\to 0^+}R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \lim_{x\to 0^+}(1-c)Lx = 0$, and $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \alpha^{-1}\log(1/c) = R_\alpha(P\|Q_{x,c})$. We can also rewrite this in terms of the solution to the infimal convolution problem and take the worst-case-regret scaling limit as follows:
$$R^{L\Gamma,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}W^{L\Gamma}(Q_{x,c},P), & 0<\alpha Lx<1,\\ R_\alpha(P\|Q_{x,1/(\alpha Lx)}) + W^{L\Gamma}(Q_{x,c},Q_{x,1/(\alpha Lx)}), & 1\le\alpha Lx\le 1/c,\\ R_\alpha(P\|Q_{x,c}), & \alpha Lx>1/c,\end{cases}$$
$$\lim_{\alpha\to\infty}\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q_{x,c}) = \begin{cases}W^\Gamma(Q_{x,c},P), & 0<x<1,\\ D_\infty(P\|Q_{x,1/x}) + W^\Gamma(Q_{x,c},Q_{x,1/x}), & 1\le x\le 1/c,\\ D_\infty(P\|Q_{x,c}), & x>1/c.\end{cases} \tag{20}$$
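The limiting behavior in this example can also be checked numerically; below is a small sketch (ours, with arbitrarily chosen parameter values) that evaluates the closed form (19) and compares it against the two limits stated above.

```python
import numpy as np

def R_ic(alpha, L, x, c):
    # Closed form (19) for R^{L*Gamma,IC}_alpha(delta_0 || Q_{x,c}), Gamma = Lip_1.
    if alpha * L * x < 1.0:
        return (1.0 - c) * L * x
    if alpha * L * x <= 1.0 / c:
        return 1.0 / alpha - c * L * x + np.log(alpha * L * x) / alpha
    return np.log(1.0 / c) / alpha

alpha, x, c = 2.0, 0.5, 0.25
# IPM limit: (1/L) * R^{L*Gamma,IC} -> W^Gamma(Q_{x,c}, P) = (1-c)x as L -> 0+
print(R_ic(alpha, 1e-8, x, c) / 1e-8, (1.0 - c) * x)
# Renyi limit: R^{L*Gamma,IC} -> alpha^{-1} log(1/c) as L -> infinity
print(R_ic(alpha, 1e8, x, c), np.log(1.0 / c) / alpha)
```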
Γ-Rényi-Donsker-Varadhan counterexample: As an alternative to Definition 3.1, one can attempt to regularize the Rényi divergences by restricting the test-function space in the variational representation (3), leading to the Γ-Rényi-Donsker-Varadhan (Γ-Rényi-DV) divergences
$$R^{\Gamma,DV}_\alpha(P\|Q) := \sup_{\phi\in\Gamma}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\}. \tag{21}$$
The bound $\log\int e^{c\phi}\,dP \ge c\int\phi\,dP$ for all $\phi\in\Gamma$, $c\in\mathbb{R}$ implies that $R^{\Gamma,DV}_\alpha \le W^\Gamma$ for $\alpha\in(0,1)$, making (21) a useful regularization of the Rényi divergences in this case; this utility was demonstrated in Pantazis et al. (2022), where it was used to construct GANs. However, estimators built from the representation (21) (i.e., replacing $P$ and $Q$ by empirical measures) are known to be numerically unstable when $\alpha>1$. Below we provide a counterexample showing that, unlike for the IC-$\Gamma$-Rényi divergences, $R^{\Gamma,DV}_\alpha \not\le W^\Gamma$ in general when $\alpha>1$. We conjecture that this is a key reason for the instability of $\Gamma$-Rényi-Donsker-Varadhan estimators when $\alpha>1$.

Let $P_{x,c} = c\delta_0 + (1-c)\delta_x$, $Q = \delta_0$ for $x>0$, $c\in(0,1)$ and $\Gamma_L = \mathrm{Lip}_L$. Then for $\alpha>1$ we have $R^{\Gamma_L,DV}_\alpha(P_{x,c}\|Q) = \frac{1}{\alpha-1}\log\left(c + (1-c)\exp((\alpha-1)Lx)\right)$ and $W^{\Gamma_L}(P_{x,c},Q) = (1-c)Lx$. Using strict concavity of the logarithm one can then obtain the bound
$$R^{\Gamma_L,DV}_\alpha(P_{x,c}\|Q) > W^{\Gamma_L}(P_{x,c},Q). \tag{22}$$
This shows that, when $\alpha>1$, $\Gamma$-Rényi-DV violates the key property that allows the IC-$\Gamma$-Rényi divergences to inherit properties from the corresponding $\Gamma$-IPM. Another alternative is to begin with (3) and then reduce the test-function space to $\frac{1}{\alpha}\log(\Gamma)$, where the logarithm is introduced to eliminate the exponential functions in (21). However, this definition also fails to provide an appropriate regularized Rényi divergence; in particular, it is incapable of meaningfully comparing Dirac distributions. See Appendix C.3 for details. These counterexamples lend further credence to our infimal-convolution based regularization approach (7).
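The strict inequality (22) is easy to confirm numerically from the two closed-form expressions; the following one-off sketch (ours, with arbitrary parameter choices) does so.

```python
import numpy as np

alpha, L, x, c = 2.0, 1.0, 1.0, 0.5
# Closed forms from the counterexample above:
dv = np.log(c + (1.0 - c) * np.exp((alpha - 1.0) * L * x)) / (alpha - 1.0)
ipm = (1.0 - c) * L * x  # W^{Gamma_L}(P_{x,c}, Q)
print(dv, ipm, dv > ipm)  # the Gamma-Renyi-DV value strictly exceeds the IPM
```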
6 NUMERICAL EXPERIMENTS
In this section we present numerical examples that demonstrate the use of the IC-$\Gamma$-Rényi divergences for both estimation and training of GANs (additional examples can be found in Appendix D). All of the divergences considered in this paper have a variational representation of the form $D(P\|Q) = \sup_{g\in\Gamma} H[g;P,Q]$ for some objective functional $H$; we use the corresponding estimator
$$\widehat{D}_n(P\|Q) := \sup_{\theta\in\Theta} H[g_\theta; P_n, Q_n], \tag{23}$$
where $P_n, Q_n$ are $n$-sample empirical measures and $g_\theta$ is a family of neural networks (NN) with parameters $\theta\in\Theta$. For Lipschitz function spaces we weaken the Lipschitz constraint to a soft 1-sided gradient penalty (see Section 4.1 of Birrell et al. (2022a)). Optimization is performed using the Adam optimizer Kingma & Ba (2014). For the infimal convolution divergences we enforce negativity of the test function (i.e., discriminators) using a final layer having one of the following forms: 1) $-\mathrm{abs}(x)$ or 2) $-(1/(1-x)\,1_{x<0} + (1+x)\,1_{x\ge 0})$. The latter, which we term poly-softplus, is $C^1$ and decays like $O(x^{-1})$ as $x\to-\infty$.
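The two final layers described above are simple to write down; here is a minimal PyTorch sketch (ours) of both. Note that poly-softplus is strictly negative everywhere, while $-\mathrm{abs}(x)$ vanishes at $x=0$, which may be related to the failed runs observed with that choice in Section 6.1.

```python
import torch

def poly_softplus(x):
    # C^1 map into (-inf, 0): -(1/(1-x)) for x < 0 and -(1+x) for x >= 0.
    # Strictly negative output enforces the g < 0 constraint; the output
    # decays like O(1/|x|) as x -> -infinity.
    return -torch.where(x < 0, 1.0 / (1.0 - x), 1.0 + x)

def neg_abs(x):
    # Alternative final layer -abs(x); note it is 0 (not < 0) at x = 0.
    return -torch.abs(x)
```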
6.1 VARIANCE OF RÉNYI ESTIMATORS
As a first example, we compare estimators of the classical Rényi divergences (i.e., without regularization) constructed from DV-Rényi (3) and CC-Rényi (4) in a simple case where the exact Rényi divergence is known. We let $Q$ and $P$ be 1000-dimensional Gaussians with equal variance and study $R_\alpha(P\|Q)$ as a function of the separation between their means. The results are shown in Figure 1. We see that the estimator based on the convex-conjugate Rényi variational formula (4) has smaller variance and mean-squared error (MSE) than the Rényi-Donsker-Varadhan variational formula (3), with the difference becoming very large when $\alpha\gg 1$ or when $P$ and $Q$ are far apart (i.e., when $\mu_q$ is large). The Rényi-Donsker-Varadhan estimator only works well when $\mu_q$ and $\alpha$ are both not too large, but even in such cases the convex-conjugate Rényi estimator generally performs better. We conjecture that this difference is due to the presence of risk-sensitive terms in (3) which were eliminated in the new representation (4). We note that the NN for the convex-conjugate Rényi estimator used the poly-softplus final layer, as we found the $-\mathrm{abs}$ final layer to result in a significant percentage of failed runs (i.e., NaN outputs), but this issue did not arise when using poly-softplus. We do not show results for either DV-WCR or CC-WCR here as the exact divergence is infinite in this example.
6.2 DETECTION OF RARE SUB-POPULATIONS IN SINGLE-CELL BIOLOGICAL DATASETS
A critical task in cancer assessment is the detection of rare sub-populations subsumed in the overall population of cells. The advent of affordable flow and mass cytometry technologies that perform single cell measurements opens a new direction for the analysis and comparison of high-dimensional cell distributions Shahi et al. (2017) via divergence estimation. We consider single cell mass cytometry measurements on 16 bone marrow protein markers (d = 16) coming from healthy and disease individuals with acute myeloid leukemia Levine et al. (2015). Following Weber et al. (2019), we create two datasets: one with only healthy samples and another one with decreasing percentage of sick cells and compute several divergences. Considering the estimated divergence value as the score of a binary classifier, we compute the ROC curve and the respective area under the ROC curve (AUC) for any pair of sample distributions. More specifically, true negatives correspond to the divergence values between two healthy datasets while true positives correspond to the divergence between a healthy and a diseased dataset. Thus, the AUC is 1.0 when the divergence estimates are completely separable while AUC is 0.5 when they completely overlap. Table 1 reports the AUC values for the scaled IC-Γ-Rényi divergences (16), various levels of rarity and two sample sizes for the datasets. The best performance in the Rényi family is obtained for α = ∞ using the IC-Γ-WCR variational formula (18). IC-Γ-WCR also outperforms the Wasserstein distance of first order in both sample size regimes.
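As a sketch of this evaluation protocol (ours, with made-up divergence values), the AUC can be computed by treating healthy-vs-healthy divergence estimates as negatives and healthy-vs-diseased estimates as positives:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical divergence estimates between pairs of sample datasets.
healthy_vs_healthy = np.array([0.02, 0.03, 0.01, 0.04])    # true negatives
healthy_vs_diseased = np.array([0.10, 0.08, 0.05, 0.12])   # true positives

labels = np.concatenate([np.zeros(len(healthy_vs_healthy)),
                         np.ones(len(healthy_vs_diseased))])
scores = np.concatenate([healthy_vs_healthy, healthy_vs_diseased])
print(roc_auc_score(labels, scores))  # 1.0 = fully separable, 0.5 = overlap
```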
6.3 IC-Γ-RÉNYI GANS
Finally, we study a pair of GAN examples (the second example is presented in Appendix D). Here the goal is to learn a distribution P using a family of generator distribution Qψ ∼ hψ(X) where X is a noise source and hψ is a family of neural networks parametrized by ψ ∈ Ψ, i.e., the goal is to solve
$$\inf_{\psi\in\Psi} \widehat{D}_n(P\|Q_\psi), \tag{24}$$
where D̂n is a divergence estimator of the form (23). In particular, we will study the GANs constructed from the newly introduced IC-Γ-Rényi and IC-Γ-WCR GANs and compare them with Wasserstein GAN Gulrajani et al. (2017); Arjovsky et al. (2017).
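A minimal PyTorch sketch of one alternating step for (24) is given below (ours, not the paper's training code). It assumes a torch implementation `objective(g, x_p, x_q)` of one of the sample-based divergence objectives above (e.g., CC-Rényi or CC-WCR) and reuses the hypothetical `one_sided_gradient_penalty` helper sketched earlier.

```python
import torch

def gan_step(generator, g, noise, x_p, opt_g, opt_d, objective, gp_weight=10.0):
    # Discriminator (test function) update: maximize the divergence objective,
    # with a gradient penalty softly enforcing Gamma = Lip_1.
    x_q = generator(noise).detach()
    loss_d = -objective(g, x_p, x_q) \
        + gp_weight * one_sided_gradient_penalty(g, x_p, x_q)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: minimize the estimated divergence, as in (24).
    x_q = generator(noise)
    loss_g = objective(g, x_p, x_q)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```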
6.3.1 CIFAR-10
In Figure 2 we demonstrate improved performance of the IC-$\Gamma$-Rényi and IC-$\Gamma$-WCR GANs, as compared to Wasserstein GAN with gradient penalty (WGAN-GP), on the CIFAR-10 dataset Krizhevsky et al. (2009). The IC GANs also outperform the Rényi-DV GAN (21), as the latter is highly unstable when $\alpha>1$ and so the training generally encounters NaN after a small number of training epochs (hence we omit those results from the figure). We use the same ResNet neural network architecture as in (Gulrajani et al., 2017, Appendix F) and focus on evaluating the effect of different divergences. Here we let $\Gamma$ be the set of 1-Lipschitz functions, implemented via a gradient penalty. Note that $D^{\Gamma,IC}_\infty$ performs significantly better than $R^{\Gamma,IC}_\alpha$ with large $\alpha$, and the rescaled $D^{\Gamma,IC}_\alpha$-GAN performs better than the $R^{\Gamma,IC}_\alpha$-GAN when $\alpha$ is large.
ACKNOWLEDGMENTS
The research of J.B., M.K. and L.R.-B. was partially supported by the Air Force Office of Scientific Research (AFOSR) under the grant FA9550-21-1-0354. The research of M. K. and L.R.-B. was partially supported by the National Science Foundation (NSF) under the grants DMS-2008970 and TRIPODS CISE-1934846. The research of P.D. was partially supported by the NSF under the grant DMS-1904992 and by the AFOSR under the grant FA-9550-21-1-0354. The work of Y.P. was partially supported by the Hellenic Foundation for Research and Innovation (HFRI) through the “Second Call for HFRI Research Projects to support Faculty Members and Researchers” under Project 4753. This work was performed in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative.
A DERIVATION OF VARIATIONAL FORMULAS FOR THE CLASSICAL RÉNYI DIVERGENCES
In this appendix we provide several variational formulas for the classical Rényi divergences, some of which are new. In the following we let (Ω,M) denote a measurable space, M(Ω) be the space of measurable real-valued functions on Ω, Mb(Ω) the subspace of bounded functions, and P(Ω) will be the space of probability measures on Ω.
First we recall the Rényi-Donsker-Varadhan variational formula derived in Birrell et al. (2021). This is a generalization of the Donsker-Varadhan variational representation of the KL divergence.
Theorem A.1 (Rényi-Donsker-Varadhan Variational Formula). Let P and Q be probability measures on (Ω,M) and α ∈ R, α ̸= 0, 1. Then for any set of functions, Φ, with Mb(Ω) ⊂ Φ ⊂ M(Ω) we have
$$R_\alpha(P\|Q) = \sup_{\phi\in\Phi}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\}, \tag{25}$$
where we interpret ∞−∞ ≡ −∞ and −∞+∞ ≡ −∞.
If in addition (Ω,M) is a metric space with the Borel σ-algebra then (25) holds for all Φ that
satisfy Lipb(Ω) ⊂ Φ ⊂ M(Ω), where Lipb(Ω) is the space of bounded Lipschitz functions on Ω (i.e., Lipschitz for any Lipschitz constant L ∈ (0,∞)).
Using Theorem A.1 we can derive a new variational representation that takes the form of a convex conjugate. Theorem A.2 (Convex-Conjugate Rényi Variational Formula). Let P,Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
$$R_\alpha(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{26}$$
If (Ω,M) is a metric space with the Borel σ-algebra then (26) holds with Mb(Ω) replaced by Cb(Ω), the space of bounded continuous real-valued functions on Ω.
Proof. Let $\Phi = \{\alpha^{-1}\log(-h) : h\in\mathcal{M}_b(\Omega),\ h<0\}$. We have $\mathcal{M}_b(\Omega)\subset\Phi\subset\mathcal{M}(\Omega)$, hence Theorem A.1 implies
$$R_\alpha(P\|Q) = \sup_{\phi\in\Phi}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)\phi}\,dP - \frac{1}{\alpha}\log\int e^{\alpha\phi}\,dQ\right\} \tag{27}$$
$$= \sup_{h\in\mathcal{M}_b(\Omega):h<0}\left\{\frac{1}{\alpha-1}\log\int e^{(\alpha-1)(\alpha^{-1}\log(-h))}\,dP - \frac{1}{\alpha}\log\int e^{\alpha(\alpha^{-1}\log(-h))}\,dQ\right\}$$
$$= \sup_{h\in\mathcal{M}_b(\Omega):h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{1}{\alpha}\log\int(-h)\,dQ\right\}.$$
Note that the second term is finite but the first term is possibly infinite when $\alpha\in(0,1)$. Next use the identity
$$\log(c) = \inf_{z\in\mathbb{R}}\{z - 1 + ce^{-z}\}, \quad c\in(0,\infty), \tag{28}$$
in the second term to write
$$R_\alpha(P\|Q) = \sup_{h\in\mathcal{M}_b(\Omega):h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{1}{\alpha}\inf_{z\in\mathbb{R}}\left\{z-1+e^{-z}\int(-h)\,dQ\right\}\right\} \tag{29}$$
$$= \sup_{z\in\mathbb{R}}\sup_{h\in\mathcal{M}_b(\Omega):h<0}\left\{\frac{1}{\alpha-1}\log\int|h|^{(\alpha-1)/\alpha}\,dP - \frac{z-1}{\alpha} + \alpha^{-1}e^{-z}\int h\,dQ\right\}.$$
For each $z\in\mathbb{R}$ make the change of variables $h = \alpha e^z g$, $g\in\mathcal{M}_b(\Omega)$, $g<0$ in the inner supremum to derive
$$R_\alpha(P\|Q) = \sup_{z\in\mathbb{R}}\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\frac{1}{\alpha-1}\log\int|\alpha e^z g|^{(\alpha-1)/\alpha}\,dP - \frac{z-1}{\alpha} + \alpha^{-1}e^{-z}\int\alpha e^z g\,dQ\right\} \tag{30}$$
$$= \sup_{z\in\mathbb{R}}\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1) + \int g\,dQ\right\}$$
$$= \sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1).$$
This completes the proof of (26). The proof of the metric-space version is nearly identical.
Remark A.3. To reverse the above derivation and obtain (25) (with $\Phi = \{\phi\in\mathcal{M}(\Omega) : \phi \text{ is bounded above}\}$) from (26), change variables $g\mapsto -c\exp(\alpha\phi)$, $\phi\in\Phi$, $c>0$ in (26) and then maximize over $c$. Corollary A.4. If $X$ is a compact metric space with the Borel $\sigma$-algebra, $P,Q\in\mathcal{P}(X)$, and $\alpha\in(0,1)\cup(1,\infty)$ then $C_b(X) = C(X)$ and so
$$R_\alpha(P\|Q) = \sup_{g\in C(X):\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{31}$$
Next we derive a variational formula for the worst-case regret, defined by
$$D_\infty(P\|Q) := \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q). \tag{32}$$
Theorem A.5. Let $P,Q\in\mathcal{P}(\Omega)$. Then
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):\,g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1. \tag{33}$$
If Ω is a metric space with the Borel σ-algebra then (33) holds with Mb(Ω) replaced by Cb(Ω).
Remark A.6. Note that on a compact metric space, the space of bounded continuous functions is the same as the space of all continuous functions.
Proof. Recall Van Erven & Harremos (2014)
$$D_\infty(P\|Q) = \begin{cases}\log\left(\operatorname{ess\,sup}_P\frac{dP}{dQ}\right), & P\ll Q,\\ \infty, & P\not\ll Q.\end{cases} \tag{34}$$
First suppose $P\not\ll Q$. Then there exists a measurable set $A$ with $Q(A)=0$ and $P(A)>0$. Let $g_n = -n1_A - 1_{A^c}$. Then
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \int g_n\,dQ + \log\int|g_n|\,dP + 1 \tag{35}$$
$$= -nQ(A) - Q(A^c) + \log(nP(A)+P(A^c)) + 1 = \log(nP(A)+P(A^c)) \to\infty \tag{36}$$
as $n\to\infty$. Therefore
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 = \infty = D_\infty(P\|Q). \tag{37}$$
Now suppose $P\ll Q$. Using the definition (32) along with Theorem A.2 and changing variables $g = \tilde g/\alpha$ we have
$$D_\infty(P\|Q) = \lim_{\alpha\to\infty}\alpha R_\alpha(P\|Q) \tag{38}$$
$$= \lim_{\alpha\to\infty}\left[\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int\alpha g\,dQ + \frac{\alpha}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + (\log\alpha+1)\right]$$
$$\ge \lim_{\alpha\to\infty}\left[\int\tilde g\,dQ + \frac{\alpha}{\alpha-1}\log\int|\tilde g/\alpha|^{(\alpha-1)/\alpha}\,dP + (\log\alpha+1)\right]$$
$$= \lim_{\alpha\to\infty}\left[\int\tilde g\,dQ + \frac{\alpha}{\alpha-1}\log\int|\tilde g|^{(\alpha-1)/\alpha}\,dP + 1\right] = \int\tilde g\,dQ + \log\int|\tilde g|\,dP + 1$$
for all $\tilde g\in\mathcal{M}_b(\Omega)$, $\tilde g<0$. Here we used the dominated convergence theorem to evaluate the limit. Hence, by maximizing over $\tilde g$ we obtain
$$D_\infty(P\|Q) \ge \sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1. \tag{39}$$
To prove the reverse inequality, take any $r\in(0,\operatorname{ess\,sup}_P dP/dQ)$. By definition of the essential supremum we have $P(dP/dQ>r)>0$. We also have the bound
$$P(dP/dQ>r) = \int 1_{dP/dQ>r}\frac{dP}{dQ}\,dQ \ge \int 1_{dP/dQ>r}\,r\,dQ = r\,Q(dP/dQ>r). \tag{40}$$
For $c,\epsilon>0$ define $g_{c,\epsilon} = -c1_{dP/dQ>r} - \epsilon$. These satisfy $g_{c,\epsilon}\in\mathcal{M}_b(\Omega)$, $g_{c,\epsilon}<0$ and so
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \int g_{c,\epsilon}\,dQ + \log\int|g_{c,\epsilon}|\,dP + 1 \tag{41}$$
$$= -cQ(dP/dQ>r) - \epsilon + \log(cP(dP/dQ>r)+\epsilon) + 1$$
$$\ge -cP(dP/dQ>r)/r - \epsilon + \log(cP(dP/dQ>r)+\epsilon) + 1.$$
Letting $\epsilon\to 0^+$ we find
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge -cP(dP/dQ>r)/r + \log(cP(dP/dQ>r)) + 1 \tag{42}$$
for all $c>0$. We have $P(dP/dQ>r)>0$, hence by maximizing over $c>0$ and changing variables to $z = cP(dP/dQ>r)$ we obtain
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \sup_{z>0}\{-z/r + \log(z) + 1\} = \log(r). \tag{43}$$
This holds for all $r<\operatorname{ess\,sup}_P dP/dQ$, therefore we can take $r\nearrow \operatorname{ess\,sup}_P dP/dQ$ and use (34) to conclude
$$\sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \ge \log(\operatorname{ess\,sup}_P dP/dQ) = D_\infty(P\|Q). \tag{44}$$
Combining this with (39) completes the proof of (33).
Now suppose $\Omega$ is a metric space. We clearly have
$$D_\infty(P\|Q) = \sup_{g\in\mathcal{M}_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1 \tag{45}$$
$$\ge \sup_{g\in C_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} + 1.$$
To prove the reverse inequality, take any $g\in\mathcal{M}_b(\Omega)$ with $g<0$. By Lusin's theorem, for all $\epsilon>0$ there exists a closed set $E_\epsilon$ and $h_\epsilon\in C_b(\Omega)$ such that $P(E_\epsilon^c)\le\epsilon$, $Q(E_\epsilon^c)\le\epsilon$, $h_\epsilon|_{E_\epsilon} = g$, and $\inf g\le h_\epsilon\le 0$. Define $g_\epsilon = h_\epsilon - \epsilon$. Then $g_\epsilon<0$, $g_\epsilon\in C_b(\Omega)$ and we have
$$\sup_{g\in C_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} \ge \int g_\epsilon\,dQ + \log\int|g_\epsilon|\,dP \tag{46}$$
$$= \int g\,dQ + \int(h_\epsilon-g)1_{E_\epsilon^c}\,dQ - \epsilon + \log\left(\int|g|\,dP + \int(|h_\epsilon|-|g|)1_{E_\epsilon^c}\,dP + \epsilon\right)$$
$$\ge \int g\,dQ - (\sup g - \inf g)Q(E_\epsilon^c) - \epsilon + \log\left(\int|g|\,dP + \inf g\,P(E_\epsilon^c) + \epsilon\right)$$
$$\ge \int g\,dQ - (\sup g - \inf g)\epsilon - \epsilon + \log\left(\int|g|\,dP + \inf g\,\epsilon + \epsilon\right).$$
Taking the limit $\epsilon\to 0^+$ we therefore obtain
$$\sup_{g\in C_b(\Omega):g<0}\left\{\int g\,dQ + \log\int|g|\,dP\right\} \ge \int g\,dQ + \log\int|g|\,dP. \tag{47}$$
This holds for all $g\in\mathcal{M}_b(\Omega)$ with $g<0$, hence by taking the supremum over $g$ we obtain the reverse inequality to (45). This completes the proof.
B PROOFS
In this appendix we provide a number of proofs that were omitted from the main text. Recall that X denotes a compact metric space.
Lemma B.1. Let $\Gamma\subset C(X)$ and $P,Q\in\mathcal{P}(X)$. Then $\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is non-decreasing in $\alpha\in(0,1)\cup(1,\infty)$. If $0\in\Gamma$ then $\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is also non-decreasing.

Proof. If $0\in\Gamma$ then $W^\Gamma\ge 0$, hence
$$\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}, \tag{48}$$
where both $\alpha\mapsto\alpha R_\alpha(P\|\eta)$ and $\alpha\mapsto\alpha W^\Gamma(Q,\eta)$ are non-decreasing. Therefore the infimum is as well. The proof for $\alpha\mapsto\alpha R^{\Gamma/\alpha,IC}_\alpha(P\|Q)$ is similar, though it doesn't require the assumption $0\in\Gamma$ due to the identity $\alpha W^{\Gamma/\alpha} = W^\Gamma$.

Next we prove a key lemma that is used in our main result. First recall the definition
$$\Lambda^P_\alpha[g] := \infty\cdot 1_{g\not<0} - \left(\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)\right)1_{g<0}, \quad g\in C(X). \tag{49}$$

Lemma B.2. $\Lambda^P_\alpha$ is convex and is continuous on $\{g\in C(X) : g<0\}$, an open subset of $C(X)$.

Proof. First we prove convexity. Let $g_0,g_1\in\{g\in C(X):g<0\}$ and $\lambda\in(0,1)$. For $\alpha\in(0,1)$ we can use the inequality $\lambda a + (1-\lambda)b \ge a^\lambda b^{1-\lambda}$ for all $a,b>0$ to compute
$$-\frac{1}{\alpha-1}\log\int|\lambda g_1+(1-\lambda)g_0|^{(\alpha-1)/\alpha}\,dP \le -\frac{1}{\alpha-1}\log\int\left(|g_1|^\lambda|g_0|^{1-\lambda}\right)^{(\alpha-1)/\alpha}\,dP. \tag{50}$$
Using Hölder's inequality with exponents $p = 1/\lambda$, $q = 1/(1-\lambda)$ we then obtain
$$-\frac{1}{\alpha-1}\log\int\left(|g_1|^\lambda|g_0|^{1-\lambda}\right)^{(\alpha-1)/\alpha}\,dP \tag{51}$$
$$\le -\frac{1}{\alpha-1}\log\left(\left(\int|g_1|^{(\alpha-1)/\alpha}\,dP\right)^\lambda\left(\int|g_0|^{(\alpha-1)/\alpha}\,dP\right)^{1-\lambda}\right)$$
$$= \lambda\left(-\frac{1}{\alpha-1}\log\int|g_1|^{(\alpha-1)/\alpha}\,dP\right) + (1-\lambda)\left(-\frac{1}{\alpha-1}\log\int|g_0|^{(\alpha-1)/\alpha}\,dP\right).$$
Therefore $g\mapsto -\frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP$ is convex on $\{g<0\}$. This proves $\Lambda^P_\alpha$ is convex when $\alpha\in(0,1)$.

Now suppose $\alpha>1$. The map $t>0$, $t\mapsto t^{(\alpha-1)/\alpha}$ is concave and $-\log$ is decreasing and convex, hence
$$-\frac{1}{\alpha-1}\log\int|\lambda g_1+(1-\lambda)g_0|^{(\alpha-1)/\alpha}\,dP \tag{52}$$
$$\le -\frac{1}{\alpha-1}\log\left(\lambda\int|g_1|^{(\alpha-1)/\alpha}\,dP + (1-\lambda)\int|g_0|^{(\alpha-1)/\alpha}\,dP\right)$$
$$\le \lambda\left(-\frac{1}{\alpha-1}\log\int|g_1|^{(\alpha-1)/\alpha}\,dP\right) + (1-\lambda)\left(-\frac{1}{\alpha-1}\log\int|g_0|^{(\alpha-1)/\alpha}\,dP\right).$$
This proves that $\Lambda^P_\alpha$ is also convex when $\alpha>1$. Openness of $\{g<0\}$ follows from the assumption that $X$ is compact and so any strictly negative continuous function is strictly bounded away from zero. Continuity on $\{g<0\}$ then follows from the dominated convergence theorem.
Now we prove our main theorem, deriving the dual variational formula and other important properties of the IC-Γ-Rényi divergences. Theorem B.3. Let Γ ⊂ C(X) be admissible, P,Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then:
1. $$R^{\Gamma,IC}_\alpha(P\|Q) = \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{53}$$

2. If (53) is finite then there exists $\eta_*\in\mathcal{P}(X)$ such that
$$R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*). \tag{54}$$

3. $R^{\Gamma,IC}_\alpha(P\|Q)$ is convex in $Q$. If $\alpha\in(0,1)$ then $R^{\Gamma,IC}_\alpha(P\|Q)$ is jointly convex in $(P,Q)$.

4. $(P,Q)\mapsto R^{\Gamma,IC}_\alpha(P\|Q)$ is lower semicontinuous.

5. $R^{\Gamma,IC}_\alpha(P\|Q)\ge 0$ with equality if $P=Q$.

6. $R^{\Gamma,IC}_\alpha(P\|Q)\le\min\{R_\alpha(P\|Q),\,W^\Gamma(Q,P)\}$.

7. If $\Gamma$ is strictly admissible then $R^{\Gamma,IC}_\alpha$ has the divergence property.

Proof. 1. Define $F,G : C(X)\to(-\infty,\infty]$ by $F = \Lambda^P_\alpha$ and $G[g] = \infty\cdot 1_{g\notin\Gamma} - E_Q[g]$. Using the assumptions on $\Gamma$ along with Lemma B.2 we see that $F$ and $G$ are convex, $F[-1]<\infty$, $G[-1]<\infty$, and $F$ is continuous at $-1$. Therefore Fenchel-Rockafellar duality (see, e.g., Theorem 4.4.3 in Borwein & Zhu (2006)) along with the identity $C(X)^* = M(X)$ gives
$$\sup_{g\in C(X)}\{-F[g]-G[g]\} = \inf_{\eta\in M(X)}\{F^*[\eta] + G^*[-\eta]\}, \tag{55}$$
and if either side is finite then the infimum on the right hand side is achieved at some $\eta_*\in M(X)$. Using the definitions, we can rewrite the left hand side as follows:
$$\sup_{g\in C(X)}\{-F[g]-G[g]\} = \sup_{g\in\Gamma:g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{56}$$
We can also compute
$$G^*[-\eta] = \sup_{g\in C(X)}\left\{-\int g\,d\eta - (\infty\cdot 1_{g\notin\Gamma} - E_Q[g])\right\} = W^\Gamma(Q,\eta). \tag{57}$$
Therefore
$$\inf_{\eta\in M(X)}\{(\Lambda^P_\alpha)^*[\eta] + W^\Gamma(Q,\eta)\} = \sup_{g\in\Gamma:g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1). \tag{58}$$
Next we show that the infimum over $M(X)$ can be restricted to $\mathcal{P}(X)$. First suppose $\eta\in M(X)$ with $\eta(X)\ne 1$. Then, using the assumption that $\Gamma$ contains the constant functions, we have
$$W^\Gamma(Q,\eta) \ge E_Q[\pm n] - \int\pm n\,d\eta = \pm n(1-\eta(X)) \to\infty \tag{59}$$
as $n\to\infty$ (for an appropriate choice of sign). Therefore $W^\Gamma(Q,\eta)=\infty$ if $\eta(X)\ne 1$. This implies that the infimum can be restricted to $\{\eta\in M(X) : \eta(X)=1\}$. Now suppose $\eta\in M(X)$ is not positive. Take a measurable set $A$ with $\eta(A)<0$. By Lusin's theorem, for all $\epsilon>0$ there exists a closed set $E_\epsilon\subset X$ and a continuous function $g_\epsilon\in C(X)$ such that $|\eta|(E_\epsilon^c)<\epsilon$, $0\le g_\epsilon\le 1$, and $g_\epsilon|_{E_\epsilon} = 1_A$. Define $g_{n,\epsilon} = -ng_\epsilon - 1$, $n\in\mathbb{Z}^+$. Then $g_{n,\epsilon}\in\{g\in C(X):g<0\}$, hence
$$(\Lambda^P_\alpha)^*[\eta] \ge \int g_{n,\epsilon}\,d\eta + \frac{1}{\alpha-1}\log\int|g_{n,\epsilon}|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1) \tag{60}$$
$$= -n\eta(A) + n\eta(A\cap E_\epsilon^c) - n\int g_\epsilon 1_{E_\epsilon^c}\,d\eta - \eta(X) + \frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1)$$
$$\ge n(|\eta(A)|-2\epsilon) - \eta(X) + \frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP + \alpha^{-1}(\log\alpha+1).$$
If $\alpha>1$ then $\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP\ge 0$ and if $\alpha\in(0,1)$ then $\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP\le 0$. In either case we have $\frac{1}{\alpha-1}\log\int|ng_\epsilon+1|^{(\alpha-1)/\alpha}\,dP\ge 0$ and so
$$(\Lambda^P_\alpha)^*[\eta] \ge n(|\eta(A)|-2\epsilon) - \eta(X) + \alpha^{-1}(\log\alpha+1). \tag{61}$$
By choosing $\epsilon<|\eta(A)|/2$ and taking $n\to\infty$ we see that $(\Lambda^P_\alpha)^*[\eta]=\infty$ whenever $\eta\in M(X)$ is not positive. Therefore the infimum can further be restricted to positive measures. Combining these results we find
$$\sup_{g\in\Gamma:g<0}\left\{\int g\,dQ + \frac{1}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP\right\} + \alpha^{-1}(\log\alpha+1) = \inf_{\eta\in\mathcal{P}(X)}\{(\Lambda^P_\alpha)^*[\eta] + W^\Gamma(Q,\eta)\}. \tag{62}$$
For $\eta\in\mathcal{P}(X)$, equation (10) implies $(\Lambda^P_\alpha)^*[\eta] = R_\alpha(P\|\eta)$. This completes the proof.

2. The existence of a minimizer follows from Fenchel-Rockafellar duality; again, see Theorem 4.4.3 in Borwein & Zhu (2006).

3. This follows from (53) together with the fact that the supremum of convex functions is convex and $y\mapsto\frac{1}{\alpha-1}\log(y)$ is convex when $\alpha\in(0,1)$.

4. Compactness of $X$ implies that $g$ and $|g|^{(\alpha-1)/\alpha}$ are bounded and continuous whenever $g\in\Gamma$ satisfies $g<0$. Therefore $Q\mapsto\int g\,dQ$ and $P\mapsto\int|g|^{(\alpha-1)/\alpha}\,dP$ are continuous in the weak topology on $\mathcal{P}(X)$. Therefore the objective functional in (53) is continuous in $(P,Q)$. The supremum is therefore lower semicontinuous.

5. $R_\alpha$ is a divergence, hence is non-negative. $\Gamma$ contains the constant functions, hence $W^\Gamma\ge 0$. Therefore $R^{\Gamma,IC}_\alpha\ge 0$. If $Q=P$ then $0\le R^{\Gamma,IC}_\alpha(P\|Q)\le R_\alpha(P\|P) + W^\Gamma(P,P) = 0$, hence $R^{\Gamma,IC}_\alpha(P\|Q)=0$.

6. This easily follows from the definition (7), taking $\eta=Q$ and $\eta=P$ respectively.

7. Suppose $\Gamma$ is strictly admissible. Due to part 5 of this theorem, we only need to show that if $R^{\Gamma,IC}_\alpha(P\|Q)=0$ then $P=Q$. If $R^{\Gamma,IC}_\alpha(P\|Q)=0$ then part 2 implies there exists $\eta_*\in\mathcal{P}(X)$ such that
$$0 = R_\alpha(P\|\eta_*) + W^\Gamma(Q,\eta_*). \tag{63}$$
Both terms are non-negative, hence $R_\alpha(P\|\eta_*) = 0 = W^\Gamma(Q,\eta_*)$. $R_\alpha$ has the divergence property, hence $\eta_* = P$. So $W^\Gamma(Q,P)=0$. Therefore $0\ge\int g\,dQ - \int g\,dP$ for all $g\in\Gamma$. Let $\Psi$ be as in the definition of strict admissibility and let $\psi\in\Psi$. There exists $c\in\mathbb{R}$, $\epsilon>0$ such that $c\pm\epsilon\psi\in\Gamma$ and so $0\ge\pm\epsilon(\int\psi\,dQ - \int\psi\,dP)$. Therefore $\int\psi\,dQ = \int\psi\,dP$ for all $\psi\in\Psi$. $\Psi$ is $\mathcal{P}(X)$-determining, hence $Q=P$.
Remark B.4. The Fenchel-Rockafellar Theorem applies under two different sets of assumptions: the first assumes both mappings are lower semicontinuous (LSC) while the second applies when one mapping is continuous at a point where both are finite. The mapping $\Lambda^P_\alpha$, as defined by (49) and appearing in (10), is not LSC but it is continuous on its domain, hence we used the second version of Fenchel-Rockafellar in our proof of Theorem B.3. For $\alpha>1$ one could alternatively redefine $\Lambda^P_\alpha$ along the boundary of $\{g<0\}$ to make it LSC while still maintaining the relation (10) and thereby utilize the first version of Fenchel-Rockafellar. This alternative approach is also amenable to extending the theorem to non-compact spaces, using the methods from Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a). However, these methods do not apply to $\alpha\in(0,1)$. With this in mind, in order to provide a simple unified treatment of all $\alpha\in(0,1)\cup(1,\infty)$ we structured our proof around the second version of Fenchel-Rockafellar.

Despite the fact that $\Lambda^P_\alpha$ is not LSC, the Fenchel-Rockafellar Theorem does imply that convex duality holds at all points of continuity in the domain, i.e., one has
$$\Lambda^P_\alpha[g] = \sup_{\eta\in M(X)}\left\{\int g\,d\eta - R_\alpha(P\|\eta)\right\}\ \text{for all } g<0, \tag{64}$$
but this duality formula doesn't necessarily hold if $g\not<0$. Here, $R_\alpha(P\|\eta)$ for general $\eta\in M(X)$ is defined via the variational formula
$$R_\alpha(P\|\eta) := (\Lambda^P_\alpha)^*[\eta] = \sup_{g\in C(X)}\left\{\int g\,d\eta - \Lambda^P_\alpha[g]\right\} \tag{65}$$
and one can rewrite this in terms of the classical Rényi divergence as follows:
$$R_\alpha(P\|\eta) = \begin{cases}\infty & \text{if } \eta\not\ge 0 \text{ or } \eta = 0,\\ R_\alpha\!\left(P\,\middle\|\,\frac{\eta}{\|\eta\|}\right) - \frac{1}{\alpha}\log\|\eta\| & \text{if } \eta \text{ is a nonzero positive measure.}\end{cases} \tag{66}$$
Next we prove the limiting results from Theorem 4.1.

Theorem B.5. Let $\Gamma\subset C(X)$ be admissible, $P,Q\in\mathcal{P}(X)$, and $\alpha\in(0,1)\cup(1,\infty)$. Then
$$\lim_{\delta\to 0^+}\frac{1}{\delta}R^{\delta\Gamma,IC}_\alpha(P\|Q) = W^\Gamma(Q,P) \tag{67}$$
and if $\Gamma$ is strictly admissible we have
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|Q). \tag{68}$$

Proof. It is straightforward to show that the scaled function spaces are admissible and $W^{c\Gamma} = cW^\Gamma$ for all $c>0$. First we prove (67). From the definition (7) we have
$$\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{\delta^{-1}R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\} \le W^\Gamma(Q,P) \tag{69}$$
and so $\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q)$ is non-increasing in $\delta$. Therefore
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \sup_{\delta>0}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) \tag{70}$$
and
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) \le W^\Gamma(Q,P). \tag{71}$$
We will assume this inequality is strict and derive a contradiction. This assumption, together with (70), implies $R^{\delta\Gamma,IC}_\alpha(P\|Q)<\infty$ for all $\delta>0$. Part (2) of Theorem 3.4 then implies the existence of $\eta_{*,\delta}\in\mathcal{P}(X)$ such that
$$\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \delta^{-1}R_\alpha(P\|\eta_{*,\delta}) + W^\Gamma(Q,\eta_{*,\delta}) \ge W^\Gamma(Q,\eta_{*,\delta}). \tag{72}$$
Take a sequence $\delta_n\to 0^+$. We have assumed $X$ is compact, hence $\mathcal{P}(X)$ is also compact and so there exists a weakly convergent subsequence $\eta_{*,\delta_{n_j}}\to\eta_*$. From the variational formulas (25) and (8) we see that $R_\alpha(P\|\cdot)$ and $W^\Gamma(Q,\cdot)$ are lower semicontinuous, hence $\liminf_j W^\Gamma(Q,\eta_{*,\delta_{n_j}}) \ge W^\Gamma(Q,\eta_*)$ and
$$R_\alpha(P\|\eta_*) \le \liminf_j R_\alpha(P\|\eta_{*,\delta_{n_j}}) \le \liminf_j \delta_{n_j}\left(\delta_{n_j}^{-1}R_\alpha(P\|\eta_{*,\delta_{n_j}}) + W^\Gamma(Q,\eta_{*,\delta_{n_j}})\right) \tag{73}$$
$$= \liminf_j \delta_{n_j}\left(\delta_{n_j}^{-1}R^{\delta_{n_j}\Gamma,IC}_\alpha(P\|Q)\right) = 0, \tag{74}$$
where the last equality follows from the assumed strictness of the inequality (71). Therefore the divergence property for the classical Rényi divergences implies $R_\alpha(P\|\eta_*) = 0$ and $P = \eta_*$. Combining the above results we obtain
$$\lim_{\delta\to 0^+}\delta^{-1}R^{\delta\Gamma,IC}_\alpha(P\|Q) = \lim_{j\to\infty}\delta_{n_j}^{-1}R^{\delta_{n_j}\Gamma,IC}_\alpha(P\|Q) \ge \liminf_j W^\Gamma(Q,\eta_{*,\delta_{n_j}}) \tag{75}$$
$$\ge W^\Gamma(Q,\eta_*) = W^\Gamma(Q,P).$$
This contradicts the assumed strictness of (71) and therefore we have proven the equality (67).

Now we assume $\Gamma$ is strictly admissible and will prove (68) via similar reasoning. From the definition (7) we see that
$$R^{L\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\{R_\alpha(P\|\eta) + LW^\Gamma(Q,\eta)\} \le R_\alpha(P\|Q) \tag{76}$$
and $R^{L\Gamma,IC}_\alpha$ is non-decreasing in $L$. Hence $\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = \sup_{L>0}R^{L\Gamma,IC}_\alpha(P\|Q)$ and
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) \le R_\alpha(P\|Q). \tag{77}$$
Suppose this inequality is strict. Then $R^{L\Gamma,IC}_\alpha(P\|Q)<\infty$ for all $L$ and we can use part (2) of Theorem 3.4 to conclude there exists $\eta_{*,L}\in\mathcal{P}(X)$ such that
$$R^{L\Gamma,IC}_\alpha(P\|Q) = R_\alpha(P\|\eta_{*,L}) + LW^\Gamma(Q,\eta_{*,L}). \tag{78}$$
Take $L_n\to\infty$. Compactness of $\mathcal{P}(X)$ implies the existence of a weakly convergent subsequence $\eta_{*,j} := \eta_{*,L_{n_j}}\to\eta_*\in\mathcal{P}(X)$. Lower semicontinuity of $R_\alpha(P\|\cdot)$ and $W^\Gamma(Q,\cdot)$ imply $\liminf_j R_\alpha(P\|\eta_{*,j}) \ge R_\alpha(P\|\eta_*)$ and
$$W^\Gamma(Q,\eta_*) \le \liminf_j W^\Gamma(Q,\eta_{*,j}) = \liminf_j L_{n_j}^{-1}W^{L_{n_j}\Gamma}(Q,\eta_{*,j}) \tag{79}$$
$$\le \liminf_j L_{n_j}^{-1}R^{L_{n_j}\Gamma,IC}_\alpha(P\|Q) = 0,$$
where the last equality follows from the assumed strictness of the inequality (77). Therefore $W^\Gamma(Q,\eta_*) = 0$. $\Gamma$ is strictly admissible, hence $Q = \eta_*$ (see the proof of part (7) of Theorem 3.4). Combining these results we see that
$$\lim_{L\to\infty}R^{L\Gamma,IC}_\alpha(P\|Q) = \lim_j R^{L_{n_j}\Gamma,IC}_\alpha(P\|Q) = \lim_j\left(R_\alpha(P\|\eta_{*,L_{n_j}}) + L_{n_j}W^\Gamma(Q,\eta_{*,L_{n_j}})\right) \tag{80}$$
$$\ge \liminf_j R_\alpha(P\|\eta_{*,j}) \ge R_\alpha(P\|\eta_*) = R_\alpha(P\|Q).$$
This contradicts the assumed strictness of the inequality (77) and hence (77) is an equality. This completes the proof.
Next we prove Theorem 4.2, regarding the $\alpha\to 1$ limit of the IC-$\Gamma$-Rényi divergences.

Theorem B.6. Let $\Gamma\subset C(X)$ be admissible and $P,Q\in\mathcal{P}(X)$. Then
$$\lim_{\alpha\to 1^+}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\ R_\beta(P\|\eta)<\infty}}\left\{R(P\|\eta) + W^\Gamma(Q,\eta)\right\}, \tag{81}$$
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\eta\in\mathcal{P}(X)}\left\{R(P\|\eta) + W^\Gamma(Q,\eta)\right\} \tag{82}$$
$$= \sup_{g\in\Gamma:\,g<0}\left\{\int g\,dQ + \int\log|g|\,dP\right\} + 1. \tag{83}$$

Proof. Lemma B.1 implies $\alpha\mapsto\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is non-decreasing on $(1,\infty)$, therefore
$$\lim_{\alpha\to 1^+}\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\alpha>1}\alpha R^{\Gamma,IC}_\alpha(P\|Q) \tag{84}$$
$$= \inf_{\alpha>1}\inf_{\eta\in\mathcal{P}(X)}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}$$
$$= \inf_{\eta\in\mathcal{P}(X)}\inf_{\alpha>1}\{\alpha R_\alpha(P\|\eta) + \alpha W^\Gamma(Q,\eta)\}$$
$$= \inf_{\eta\in\mathcal{P}(X)}\left\{\lim_{\alpha\to 1^+}\alpha R_\alpha(P\|\eta) + W^\Gamma(Q,\eta)\right\}.$$
From Van Erven & Harremos (2014) we have
$$\lim_{\alpha\to 1^+}R_\alpha(P\|\eta) = \begin{cases}R(P\|\eta) & \text{if } \exists\beta>1,\ R_\beta(P\|\eta)<\infty,\\ \infty & \text{otherwise,}\end{cases} \tag{85}$$
and so we can conclude
$$\lim_{\alpha\to 1^+}R^{\Gamma,IC}_\alpha(P\|Q) = \lim_{\alpha\to 1^+}\alpha R^{\Gamma,IC}_\alpha(P\|Q) = \inf_{\substack{\eta\in\mathcal{P}(X):\\ \exists\beta>1,\ R_\beta(P\|\eta)<\infty}}\left\{R(P\|\eta) + W^\Gamma(Q,\eta)\right\}. \tag{86}$$
This proves (81).

Now we compute the limit as $\alpha\to 1^-$. Note that the limit exists due to the fact that $\alpha\mapsto\alpha R^{\Gamma,IC}_\alpha(P\|Q)$ is non-decreasing. From the definition (7), for all $\eta\in\mathcal{P}(X)$ we have
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) \le \lim_{\alpha\to 1^-}R_\alpha(P\|\eta) + W^\Gamma(Q,\eta) = R(P\|\eta) + W^\Gamma(Q,\eta). \tag{87}$$
Here we used the fact that $\lim_{\alpha\to 1^-}R_\alpha(P\|\eta) = R(P\|\eta)$ (see Van Erven & Harremos (2014)). Minimizing over $\eta$ then gives
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) \le \inf_{\eta\in\mathcal{P}(X)}\{R(P\|\eta) + W^\Gamma(Q,\eta)\}. \tag{88}$$
To prove the reverse inequality, use part 1 of Theorem 3.4 to compute
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) = \lim_{\alpha\to 1^-}\alpha R^{\Gamma,IC}_\alpha(P\|Q) \tag{89}$$
$$= \lim_{\alpha\to 1^-}\sup_{g\in\Gamma:g<0}\left\{\alpha\int g\,dQ + \frac{\alpha}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + \log\alpha + 1\right\}$$
$$\ge \int g\,dQ + \lim_{\alpha\to 1^-}\frac{\alpha}{\alpha-1}\log\int|g|^{(\alpha-1)/\alpha}\,dP + 1$$
$$= \int g\,dQ + \frac{d}{dy}\Big|_{y=0}\log\int e^{y\log|g|}\,dP + 1$$
$$= \int g\,dQ + \int\log|g|\,dP + 1$$
for all $g\in\Gamma$, $g<0$. Therefore, maximizing over $g$ gives
$$\lim_{\alpha\to 1^-}R^{\Gamma,IC}_\alpha(P\|Q) \ge \sup_{g\in\Gamma:g<0}\left\{\int g\,dQ + \int\log|g|\,dP\right\} + 1. \tag{90}$$
We now use Fenchel-Rockafellar duality (Theorem 4.4.3 in Borwein & Zhu (2006)) to compute the dual variational representation of the right hand side of (90). Define $F,G : C(X)\to(-\infty,\infty]$ by $F[g] = \infty\cdot 1_{g\not<0} - \int\log|g|\,dP$ and $G[g] = \infty\cdot 1_{g\notin\Gamma} - E_Q[g]$. It is straightforward to show that $F$ and $G$ are convex, $F[-1]<\infty$, $G[-1]<\infty$, and $F$ is continuous at $-1$. Therefore (recalling $C(X)^* = M(X)$)
$$\inf_{g\in C(X)}\{F[g] + G[g]\} = \sup_{\eta\in M(X)}\{-F^*(-\eta) - G^*(\eta)\}, \tag{91}$$
i.e.,
$$\sup_{g\in\Gamma:g<0}\left\{E_Q[g] + \int\log|g|\,dP\right\} = \inf_{\eta\in M(X)}\{F^*(\eta) + W^\Gamma(Q,\eta)\}, \tag{92}$$
where $F^*(\eta) = \sup_{g\in C(X):g<0}\{\int g\,d\eta + \int\log|g|\,dP\}$. Now we show the infimum can be restricted to $\eta\in\mathcal{P}(X)$: If $\eta(X)\ne 1$ then by taking $g = \pm n$ we find
$$W^\Gamma(Q,\eta) \ge n|Q(X) - \eta(X)| \to\infty \tag{93}$$
as $n\to\infty$. Therefore $W^\Gamma(Q,\eta) = \infty$ if $\eta(X)\ne 1$. Now suppose $\eta\in M(X)$ is not positive. Take a measurable set $A$ with $\eta(A)<0$. By Lusin's theorem, for all $\epsilon>0$ there exists a closed set $E_\epsilon\subset X$ and a continuous function $g_\epsilon\in C(X)$ such that $|\eta|(E_\epsilon^c)<\epsilon$, $0\le g_\epsilon\le 1$, and $g_\epsilon|_{E_\epsilon} = 1_A$. Define $g_{n,\epsilon} = -ng_\epsilon - 1$, $n\in\mathbb{Z}^+$. Then $g_{n,\epsilon}\in\{g\in C(X) : g<0\}$ |
1. What is the focus and contribution of the paper regarding divergence?
2. What are the strengths of the proposed approach, particularly its properties and computational benefits?
3. What are the weaknesses of the paper, especially regarding its limitations and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any concerns or questions regarding the applicability and effectiveness of the proposed divergence in real-world scenarios? |
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a new divergence which is an infimal convolution between the Renyi divergence and an IPM.
Strengths And Weaknesses
Strength:
The paper proposes a new divergence which includes the Renyi divergence and IPMs as limiting cases. It is new and has some interesting properties.
The authors derive some properties of the new divergence and show its dual form for computation.
They derive limit formulas for the divergence and present some special cases of it.
Empirical results show that the new divergence is more stable than the Renyi divergence when α > 1.
Weaknesses:
The derivations of mathematical results of the paper are quite straightforward from known results.
It is not clear to me how popular the new divergence would be in practice when α > 1. For α < 1, I do not see any advantage of the new divergence.
In the application to biological datasets, the best results are obtained when α = ∞. This means that we do not need to care about other values of α ∈ (1, ∞), so why do we need to define the new divergence?
In the GAN experiment, the proposed divergence is used to improve the performance of the generative model, and the results are reported via FID scores. We need to choose a particular α that obtains the lowest FID. In this case, is there any explanation for using the proposed divergence, other than that searching in a larger space will obtain a lower minimum?
Clarity, Quality, Novelty And Reproducibility
The paper is clear, well-written and easy to follow. |
ICLR
Title
Function-space regularized Rényi divergences
Abstract
We propose a new family of regularized Rényi divergences parametrized not only by the order α but also by a variational function space. These new objects are defined by taking the infimal convolution of the standard Rényi divergence with the integral probability metric (IPM) associated with the chosen function space. We derive a novel dual variational representation that can be used to construct numerically tractable divergence estimators. This representation avoids risk-sensitive terms and therefore exhibits lower variance, making it well-behaved when α > 1; this addresses a notable weakness of prior approaches. We prove several properties of these new divergences, showing that they interpolate between the classical Rényi divergences and IPMs. We also study the α → ∞ limit, which leads to a regularized worst-case regret and a new variational representation in the classical case. Moreover, we show that the proposed regularized Rényi divergences inherit features from IPMs such as the ability to compare distributions that are not absolutely continuous, e.g., empirical measures and distributions with low-dimensional support. We present numerical results on both synthetic and real datasets, showing the utility of these new divergences in both estimation and GAN training applications; in particular, we demonstrate significantly reduced variance and improved training performance.
1 INTRODUCTION
Rényi divergence, Rényi (1961), is a significant extension of Kullback-Leibler (KL) divergence for numerous applications; see, e.g., Van Erven & Harremos (2014). The recent neural-based estimators for divergences Belghazi et al. (2018) along with generative adversarial networks (GANs) Goodfellow et al. (2014) accelerated the use of divergences in the field of deep learning. The neural-based divergence estimators are feasible through the utilization of variational representation formulas. These formulas are essentially lower bounds (and, occasionally, upper bounds) which are approximated by tractable statistical averages. The estimation of a divergence based on variational formulas is a notoriously difficult problem. Challenges include potentially high bias that may require an exponential number of samples McAllester & Stratos (2020) or the exponential statistical variance for certain variational estimators Song & Ermon (2019), rendering divergence estimation both data inefficient and computationally expensive. This is especially prominent for Rényi divergences with order larger than 1. Indeed, numerical simulations have shown that, unless the distributions P and Q are very close to one another, the Rényi divergence Rα(P∥Q) is almost intractable to estimate when α > 1 due to the high variance of the statistically-approximated risk-sensitive observables Birrell et al. (2021), see also the recent analysis in Lee & Shin (2022). A similar issue has also been observed for the KL divergence, Song & Ermon (2019). Overall, the lack of estimators with low variance for Rényi divergences has prevented wide-spread and accessible experimentation with this class of information-theoretic tools, except in very special cases. We hope our results here will provide a suitable set of tools to address this gap in the methodology.
One approach to variance reduction is the development of new variational formulas. This direction is especially fruitful for the estimation of mutual information van den Oord et al. (2018); Cheng et al. (2020). Another approach is to regularize the divergence by restricting the function space of the variational formula. Indeed, instead of directly attacking the variance issue, the function space of the variational formula can be restricted, for instance, by bounding the test functions or more appropriately by bounding the derivative of the test functions. The latter regularization leads to
Lipschitz continuous function spaces which are also foundational to integral probability metrics (IPMs) and more specifically to the duality property of the Wasserstein metric. In this paper we combine the above two approaches, first deriving a new variational representation of the classical Rényi divergences and then regularizing via an infimal convolution as follows:
R^{Γ,IC}_α(P∥Q) := inf_η { R_α(P∥η) + W^Γ(Q, η) } ,   (1)
where P and Q are the probability distributions being compared, the infimum is over the space of probability measures, Rα is the classical Rényi divergence, and WΓ is the IPM corresponding to the chosen regularizing function space, Γ.
The new family of regularized Rényi divergences that are developed here address the risk-sensitivity issue inherent in prior approaches. More specifically, our contributions are as follows.
• We define a new family of function-space regularized Rényi divergences via the infimal convolution operator between the classical Rényi divergence and an arbitrary IPM (1). The new regularized Rényi divergences inherit their function space from the IPM. For instance, they inherit mass transport properties when one regularizes using the 1-Wasserstein metric.
• We derive a dual variational representation (11) of the regularized Rényi divergences which avoids risk-sensitive terms and can therefore be used to construct lower-variance statistical estimators.
• We prove a series of properties for the new object: (a) the divergence property, (b) being bounded by the minimum of the Rényi divergence and IPM, thus allowing for the comparison of non-absolutely continuous distributions, (c) limits as α→ 1 from both left and right, (d) regimes in which the limiting cases Rα(P∥Q) and WΓ(Q,P ) are recovered.
• We propose a rescaled version of the regularized Rényi divergences (16) which leads to a new variational formula for the worst-case regret (i.e., α → ∞). This new variational formula does not involve the essential supremum of the density ratio, as in the classical definition of worst-case regret, thereby avoiding risk-sensitive terms.
• We present a series of illustrative examples and counterexamples that further motivate the proposed definition for the function-space regularized Rényi divergences.
• We present numerical experiments that show (a) that we can estimate the new divergence for large values of the order α without variance issues and (b) train GANs using regularized function spaces.
Related work. The order of the Rényi divergence controls the weight put on the tails, with the limiting cases being mode-covering and mode-selection Minka (2005). Rényi divergence estimation is used in a number of applications, including Sajid et al. (2022) (behavioural sciences), Mironov (2017) (differential privacy), and Li & Turner (2016) (variational inference); in the latter the variational formula is an adaptation of the evidence lower bound. Rényi divergences have also been applied in the training of GANs Bhatia et al. (2021) (loss function for binary classification - discrete case) and in Pantazis et al. (2022) (continuous case, based on the Rényi-Donsker-Varadhan variational formula in Birrell et al. (2021)). Rényi divergences with α > 1 are also used in contrastive representation learning, Lee & Shin (2022), as well as in PAC-Bayesian bounds, Bégin et al. (2016). In the context of uncertainty quantification and sensitivity analysis, Rényi divergences provide confidence bounds for rare events, Atar et al. (2015); Dupuis et al. (2020), with higher rarity corresponding to larger α.
Reducing the variance of divergence estimators through control of the function space has recently been proposed. In Song & Ermon (2019) an explicit bound on the output restricts the divergence values. A systematic theoretical framework on how to regularize through the function space has been developed in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a) for the KL and f-divergences. Despite not covering the Rényi divergence, the theory in Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a), and particularly the infimal-convolution formulation, clearly inspired the current work. However, adapting the infimal-convolution method to the Rényi divergence setting requires two new technical innovations: (a) We develop a new low-variance convex-conjugate variational formula for the classical Rényi divergence in Theorem 2.1 (see also Fig. 1), allowing us to apply infimal-convolution tools to develop the new Γ-Rényi divergences in Theorem 3.4. (b) We study the α → ∞ limit of (a) to obtain a new low-variance variational representation of worst-case regret in Theorem 2.2 and study its Γ-regularization in Theorem 4.5.
2 NEW VARIATIONAL REPRESENTATIONS OF CLASSICAL RÉNYI DIVERGENCES
The Rényi divergence of order α ∈ (0, 1) ∪ (1,∞) between P and Q, denoted R_α(P∥Q), can be defined as follows: let ν be a sigma-finite positive measure with dP = p dν and dQ = q dν. Then
R_α(P∥Q) := (1/(α(α−1))) log [ ∫_{q>0} p^α q^{1−α} dν ]  if 0 < α < 1, or if α > 1 and P ≪ Q ;  R_α(P∥Q) := +∞  if α > 1 and P ̸≪ Q ,   (2)
where P ≪ Q denotes absolute continuity of P with respect to Q. There always exists such a ν (e.g., ν = P +Q) and one can show that the definition (2) does not depend on the choice of ν. The Rα provide a notion of ‘distance’ between P and Q in that they satisfy the divergence property, i.e., they are non-negative and equal zero iff Q = P . The limit of Rα as α approaches 1 or 0 equals the KL or reverse KL divergence respectively Van Erven & Harremos (2014).
An alternative representation of R_α, the so-called Rényi-Donsker-Varadhan variational formula, was derived from (2) in Birrell et al. (2021):
R_α(P∥Q) = sup_{ϕ∈M_b(Ω)} { (1/(α−1)) log ∫ e^{(α−1)ϕ} dP − (1/α) log ∫ e^{αϕ} dQ } ,  P, Q ∈ P(Ω) .   (3)
Here (Ω,M) denotes a measurable space, M_b(Ω) the space of bounded measurable real-valued functions on Ω, and P(Ω) is the space of probability measures on Ω. By a change of variables argument this can be transformed into the following new variational representation; see Theorem A.2 in Appendix A for a proof. We call it the convex-conjugate Rényi variational formula (CC-Rényi).
Theorem 2.1 (Convex-Conjugate Rényi Variational Formula). Let P, Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
Here (Ω,M) denotes a measurable space, Mb(Ω) the space of bounded measurable real-valued functions on Ω, and P(Ω) is the space of probability measures on Ω. By a change of variables argument this can be transformed into the following new variational representation; see Theorem A.2 in Appendix A for a proof. We call it the convex-conjugate Rényi variational formula (CC-Rényi). Theorem 2.1 (Convex-Conjugate Rényi Variational Formula). Let P,Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
Rα(P∥Q) = sup g∈Mb(Ω):g<0
{∫ gdQ+ 1
α− 1 log
∫ |g|(α−1)/αdP } + α−1(logα+ 1) . (4)
If (Ω,M) is a metric space with the Borel σ-algebra then (4) holds with Mb(Ω) replaced by Cb(Ω), the space of bounded continuous real-valued functions on Ω.
The representation (4) is of convex-conjugate type, which will be key in our development of functionspace regularized Rényi divergences. It is also of independent interest as it avoids risk-sensitive terms, unlike (3) which contains cumulant-generating-functions. This makes (4) better behaved in estimation problems, especially when α > 1; see the example in Section 6.1 below.
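To make the estimation use of (4) concrete, the following is a minimal Monte Carlo sketch (ours, not the authors' code) of the CC-Rényi objective for a fixed strictly negative test function g; for any such g it gives a lower bound on R_α(P∥Q), and the supremum over g recovers the divergence. The function names and the toy choice of g are our own.

import numpy as np

def cc_renyi_objective(g, x_q, x_p, alpha):
    # Monte Carlo estimate of the objective in (4) for a fixed test function g,
    # using samples x_q ~ Q and x_p ~ P; requires g < 0 everywhere.
    gq = g(x_q)
    assert np.all(gq < 0), "the test function must be strictly negative"
    term_q = gq.mean()  # ~ int g dQ
    term_p = np.log(np.mean(np.abs(g(x_p)) ** ((alpha - 1.0) / alpha))) / (alpha - 1.0)
    return term_q + term_p + (np.log(alpha) + 1.0) / alpha

# toy usage with a fixed (non-optimized) negative test function
rng = np.random.default_rng(0)
g = lambda x: -(1.0 + x ** 2)
lower_bound = cc_renyi_objective(g, rng.normal(0.0, 1.0, 10000), rng.normal(1.0, 1.0, 10000), alpha=2.0)

In practice g is parametrized by a neural network and the objective is maximized over its parameters, as in (23) below.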
We also obtain a new variational formula for worst-case regret, as defined by Van Erven & Harremos (2014):
D_∞(P∥Q) := lim_{α→∞} α R_α(P∥Q) = log( ess sup_P dP/dQ ) if P ≪ Q, and D_∞(P∥Q) = ∞ if P ̸≪ Q .   (5)
In contrast to (5), which requires estimation of the likelihood ratio, the new variational formula (6) below avoids risk-sensitive terms.
Theorem 2.2 (Worst-case Regret Variational Formula). Let P, Q ∈ P(Ω). Then
D_∞(P∥Q) = sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 .   (6)
If Ω is a metric space with the Borel σ-algebra then (6) holds with M_b(Ω) replaced by C_b(Ω).
See Theorem A.5 in Appendix A for a proof. Equation (6) is a new result of independent interest and will also be useful in our study of the α → ∞ limit of the function-space regularized Rényi divergences that we define in the next section. Remark 2.3. Alternative variational formulas for D∞ on a finite alphabet were derived in Kurri et al. (2022).
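Analogously to the CC-Rényi sketch above, a hedged Monte Carlo sketch of the objective in (6): for any fixed strictly negative g it lower-bounds D_∞(P∥Q), and it is the α → ∞ counterpart of the previous objective (names ours).

import numpy as np

def wcr_objective(g, x_q, x_p):
    # Monte Carlo estimate of the worst-case-regret objective in (6)
    # for a fixed strictly negative test function g.
    return g(x_q).mean() + np.log(np.abs(g(x_p)).mean()) + 1.0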
3 PRIMAL AND DUAL FORMULATIONS OF THE INFIMAL-CONVOLUTION Γ-RÉNYI DIVERGENCES
We are now ready to define the function-space regularized Rényi divergences and derive their key properties. In this section, X will denote a compact metric space, P(X) will denote the set of Borel probability measures on X, and C(X) will denote the space of continuous real-valued functions on X. We equip C(X) with the supremum norm and recall that the dual space of C(X) is C(X)* = M(X), the space of finite signed Borel measures on X (see the Riesz representation theorem, e.g., Theorem 7.17 in Folland (2013)).
Definition 3.1. Given a test-function space Γ ⊂ C(X), we define the infimal-convolution Γ-Rényi divergence (i.e., IC-Γ-Rényi divergence) between P, Q ∈ P(X) by
R^{Γ,IC}_α(P∥Q) := inf_{η∈P(X)} { R_α(P∥η) + W^Γ(Q, η) } ,  α ∈ (0, 1) ∪ (1,∞) ,   (7)
where W^Γ denotes the Γ-IPM
W^Γ(µ, ν) := sup_{g∈Γ} { ∫ g dµ − ∫ g dν } ,  µ, ν ∈ M(X) .   (8)
Remark 3.2. The classical Rényi divergence is convex in its second argument but not in its first when α > 1 Van Erven & Harremos (2014). This is the motivation for defining the IC-Γ-Rényi divergences via an infimal convolution in the second argument of Rα; convex analysis tools will be critical in deriving properties of RΓ,ICα below. For α ∈ (0, 1) one can use the identity Rα(P∥Q) = R1−α(Q∥P ) to rewrite (7) as an infimal convolution in the first argument.
The definition (7) can be thought of as a regularization of the classical Rényi divergence using the Γ-IPM. For computational purposes it is significantly more efficient to have a dual formulation, i.e., a representation of R^{Γ,IC}_α in terms of a supremum over a function space. To derive such a representation we begin with the variational formula for R_α from Theorem 2.1. If we define the convex mapping Λ^P_α : C(X) → (−∞,∞],
Λ^P_α[g] := ∞·1_{g̸<0} − ( (1/(α−1)) log ∫ |g|^{(α−1)/α} dP + α^{−1}(log α + 1) ) 1_{g<0} ,   (9)
then (4) from Theorem 2.1 can be written as a convex conjugate
R_α(P∥Q) = (Λ^P_α)*[Q] := sup_{g∈C(X)} { ∫ g dQ − Λ^P_α[g] } .   (10)
One can then use Fenchel-Rockafellar duality to derive a dual formulation of the IC-Γ-Rényi divergences. To apply this theory we will need to work with spaces of test functions that satisfy the following admissibility properties. These properties are similar to those used in the construction of regularized KL and f -divergences in Dupuis, Paul & Mao, Yixiang (2022) and Birrell et al. (2022a). Definition 3.3. We will call Γ ⊂ C(X) admissible if it is convex and contains the constant functions. We will call an admissible Γ strictly admissible if there exists a P(X)-determining set Ψ ⊂ C(X) such that for all ψ ∈ Ψ there exists c ∈ R, ϵ > 0 such that c ± ϵψ ∈ Γ. Recall that Ψ being P(X)-determining means that for all Q,P ∈ P(X), if ∫ ψdQ = ∫ ψdP for all ψ ∈ Ψ then Q = P .
Putting the above pieces together one obtains the following variational representation.
Theorem 3.4. Let Γ ⊂ C(X) be admissible, P, Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then:
1. R^{Γ,IC}_α(P∥Q) = sup_{g∈Γ: g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .   (11)
2. If (11) is finite then there exists η* ∈ P(X) such that
R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { R_α(P∥η) + W^Γ(Q, η) } = R_α(P∥η*) + W^Γ(Q, η*) .   (12)
3. R^{Γ,IC}_α(P∥Q) ≤ min{ R_α(P∥Q), W^Γ(Q, P) }.
4. If Γ is strictly admissible then R^{Γ,IC}_α has the divergence property.
See Theorem B.3 in Appendix B for detailed proofs of these results as well as several additional properties. We note that there are alternative strategies for proving the variational formula (11) which make different assumptions; further comments on this can be found in Remark B.4. Important examples of strictly admissible Γ include the following:
1. Γ = C(X), which leads to the classical Rényi-divergences.
2. Γ = Lip1(X), i.e. all 1-Lipschitz functions. This regularizes the Rényi divergences via the Wasserstein metric.
3. Γ = {c + g : c ∈ R, g ∈ C(X), |g| ≤ 1}. This regularizes the Rényi divergences via the total-variation metric.
4. Γ = {c+ g : c ∈ R, g ∈ Lip1(X), |g| ≤ 1}. This regularizes the Rényi divergences via the Dudley metric.
5. Γ = {c + g : c ∈ R, g ∈ V : ∥g∥_V ≤ 1}, the unit ball in an RKHS V ⊂ C(X). This regularizes the Rényi divergences via MMD.
In practice, uniform bounds can be implemented using an appropriately chosen final NN layer. Lipschitz bounds can be implemented using spectral normalization of neural networks Miyato et al. (2018), or using a soft gradient penalty Gulrajani et al. (2017). The function space Γ for structure-preserving GANs discussed in the Appendix is implemented using equivariant neural networks, Birrell et al. (2022b). If Γ is a ball in an RKHS the implementation is carried out using the same tools used in, e.g., MMD distances and divergences, Gretton et al. (2012); Glaser et al. (2021).
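As an illustration of the Lipschitz option, below is a minimal PyTorch sketch of a soft one-sided gradient penalty of the kind referenced above (Gulrajani et al. (2017)-style, penalizing only gradient norms exceeding 1, evaluated on random interpolates). This is our own sketch for vector-valued inputs of shape (batch, d), not the authors' implementation.

import torch

def one_sided_gradient_penalty(g_net, x_p, x_q, weight=10.0):
    # Soft penalty encouraging g_net to be 1-Lipschitz; it is added to the
    # (negated) discriminator objective during training.
    eps = torch.rand(x_p.size(0), 1, device=x_p.device)
    x = (eps * x_p + (1.0 - eps) * x_q).requires_grad_(True)
    grad = torch.autograd.grad(g_net(x).sum(), x, create_graph=True)[0]
    grad_norm = grad.flatten(1).norm(dim=1)
    return weight * torch.clamp(grad_norm - 1.0, min=0.0).pow(2).mean()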
The IC-Γ-Rényi divergences also satisfy a data processing inequality. See Theorem B.8 in Appendix B for a proof as well as details regarding the notation.
Theorem 3.5 (Data Processing Inequality). Let α ∈ (0, 1) ∪ (1,∞), Q, P ∈ P(X), and K be a probability kernel from X to Y such that K[g] ∈ C(X) for all g ∈ C(X, Y). If Γ ⊂ C(Y) is admissible then R^{Γ,IC}_α(K[P]∥K[Q]) ≤ R^{K[Γ],IC}_α(P∥Q). If Γ ⊂ C(X × Y) is admissible then R^{Γ,IC}_α(P ⊗ K∥Q ⊗ K) ≤ R^{K[Γ],IC}_α(P∥Q).
If K[Γ] is strictly contained in Γ then the bounds in Theorem 3.5 can be strictly tighter than the classical data processing inequality Van Erven & Harremos (2014). Data-processing inequalities are important for constructing symmetry-preserving GANs; see Birrell et al. (2022b) and Section D.1.
4 LIMITS, INTERPOLATIONS, AND REGULARIZED WORST-CASE REGRET
Next we use Theorem 3.4 to compute various limits of the IC-Γ-Rényi divergences. First we show that they interpolate between R_α and W^Γ in the following sense (see Theorem B.5 for a proof).
Theorem 4.1. Let Γ ⊂ C(X) be admissible, P, Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞).
1. lim_{δ→0+} (1/δ) R^{δΓ,IC}_α(P∥Q) = W^Γ(Q, P) ,
2. If Γ is strictly admissible then lim_{L→∞} R^{LΓ,IC}_α(P∥Q) = R_α(P∥Q) .
Now we discuss the limiting behavior in α. These results generalize several properties of the classical Rényi divergences Van Erven & Harremos (2014). First we consider the α → 1 limit; see Theorem B.6 for a proof.
Theorem 4.2. Let Γ ⊂ C(X) be admissible and P, Q ∈ P(X). Then
lim_{α→1+} R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X): ∃β>1, R_β(P∥η)<∞} { R(P∥η) + W^Γ(Q, η) } ,   (13)
lim_{α→1−} R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { R(P∥η) + W^Γ(Q, η) }   (14)
= sup_{g∈Γ: g<0} { ∫ g dQ + ∫ log|g| dP } + 1 .   (15)
Remark 4.3. When Γ = C(X), changing variables to g = − exp(ϕ− 1) transforms (15) into the Legendre-transform variational formula for R(P∥Q); see equation (1) in Birrell et al. (2022c) with f(x) = x log(x). Eq. (14) is an infimal convolution of the reverse KL-divergence, as opposed to the results in Dupuis, Paul & Mao, Yixiang (2022) which apply to the (forward) KL-divergence.
Function-space regularized worst-case regret. Next we investigate the α → ∞ limit of the IC-Γ-Rényi divergences, which will lead to the function-space regularized worst-case regret. First recall that some authors use an alternative definition of the classical Rényi divergences, related to the one used in this paper by D_α(·∥·) := α R_α(·∥·). This alternative definition has the useful property of being non-decreasing in α; see Van Erven & Harremos (2014). Appropriately rescaled, the IC-Γ-Rényi divergence also satisfies this property, leading to the following definition.
Definition 4.4. For Γ ⊂ C(X), α ∈ (0, 1) ∪ (1,∞) and P, Q ∈ P(X) we define
D^{Γ,IC}_α(P∥Q) := α R^{Γ/α,IC}_α(P∥Q) .   (16)
Note that α R^{Γ/α,IC}_α(P∥Q) is non-decreasing in α; see Lemma B.1 for a proof. We now show that the divergences D^{Γ,IC}_α are well behaved in the α → ∞ limit, generalizing (5). Taking this limit provides a definition of function-space regularized worst-case regret, along with the following dual variational representation.
Theorem 4.5. Let Γ ⊂ C(X) be admissible and P, Q ∈ P(X). Then
D^{Γ,IC}_∞(P∥Q) := lim_{α→∞} D^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { D_∞(P∥η) + W^Γ(Q, η) }   (17)
= sup_{g∈Γ: g<0} { ∫ g dQ + log ∫ |g| dP } + 1 .   (18)
We call D^{Γ,IC}_∞ the infimal-convolution Γ-worst-case regret (i.e., IC-Γ-WCR). The method of proof of Theorem 4.5 is similar to that of part (1) of Theorem 3.4; see Theorem B.7 in Appendix B for details. Theorem 4.5 suggests that D^{Γ,IC}_α is the appropriate α-scaling to use when α is large and we find this to be the case in practice; see the example in Section 6.3.1.
5 ANALYTICAL EXAMPLES AND COUNTEREXAMPLES
In this section we present several analytical examples and counterexamples that illustrate important properties of the IC-Γ-Rényi divergences and demonstrate weaknesses of other attempts to define regularized Rényi divergences. In particular, we show that other attempts at regularizing Rényi divergences fail to inherit important properties from the Γ-IPM. More details on the computations can be found in Appendix C.
Infimal convolution and scaling limits: First we present a simple example that illustrates the infimal convolution formula and limiting properties from Sections 3 and 4. Let P = δ_0, Q_{x,c} = cδ_0 + (1−c)δ_x for c ∈ (0, 1), x > 0, and let Γ = Lip_1. Then for L > 0 one can compute
R^{LΓ,IC}_α(P∥Q_{x,c}) = (1−c)Lx  if 0 < αLx < 1 ;  α^{−1} − cLx + α^{−1} log(αLx)  if 1 ≤ αLx ≤ 1/c ;  α^{−1} log(1/c)  if αLx > 1/c .   (19)
In particular, it is straightforward to show that R^{LΓ,IC}_α(P∥Q_{x,c}) ≤ (1 − c)Lx = W^{LΓ}(Q_{x,c}, P), lim_{x→0+} R^{LΓ,IC}_α(P∥Q_{x,c}) = lim_{x→0+} (1 − c)Lx = 0, and lim_{L→∞} R^{LΓ,IC}_α(P∥Q_{x,c}) = α^{−1} log(1/c) = R_α(P∥Q_{x,c}). We can also rewrite this in terms of the solution to the infimal convolution problem and take the worst-case-regret scaling limit as follows:
R^{LΓ,IC}_α(P∥Q_{x,c}) = W^{LΓ}(Q_{x,c}, P)  if 0 < αLx < 1 ;
= R_α(P∥Q_{x,1/(αLx)}) + W^{LΓ}(Q_{x,c}, Q_{x,1/(αLx)})  if 1 ≤ αLx ≤ 1/c ;
= R_α(P∥Q_{x,c})  if αLx > 1/c ,
lim_{α→∞} α R^{Γ/α,IC}_α(P∥Q_{x,c}) = W^Γ(Q_{x,c}, P)  if 0 < x < 1 ;
= D_∞(P∥Q_{x,1/x}) + W^Γ(Q_{x,c}, Q_{x,1/x})  if 1 ≤ x ≤ 1/c ;
= D_∞(P∥Q_{x,c})  if x > 1/c .   (20)
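The piecewise formula (19) can also be checked numerically; the following sketch (ours) evaluates it and verifies the two stated limits for one parameter setting.

import numpy as np

def r_ic(alpha, L, x, c):
    # Closed form (19) for P = delta_0, Q_{x,c} = c*delta_0 + (1-c)*delta_x,
    # with Gamma the L-Lipschitz functions.
    t = alpha * L * x
    if t < 1.0:
        return (1.0 - c) * L * x
    if t <= 1.0 / c:
        return 1.0 / alpha - c * L * x + np.log(t) / alpha
    return np.log(1.0 / c) / alpha

alpha, x, c = 2.0, 0.5, 0.25
assert r_ic(alpha, 1e-3, x, c) <= (1.0 - c) * 1e-3 * x + 1e-12       # bounded by W^{L Gamma}
assert abs(r_ic(alpha, 1e6, x, c) - np.log(1.0 / c) / alpha) < 1e-9  # L -> infinity limit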
Γ-Rényi-Donsker-Varadhan counterexample: As an alternative to Definition 3.1, one can attempt to regularize the Rényi divergences by restricting the test-function space in the variational representation (3), leading to the Γ-Rényi-Donsker-Varadhan (Γ-Rényi-DV) divergences
R^{Γ,DV}_α(P∥Q) := sup_{ϕ∈Γ} { (1/(α−1)) log ∫ e^{(α−1)ϕ} dP − (1/α) log ∫ e^{αϕ} dQ } .   (21)
The bound log ∫ e^{cϕ} dP ≥ c ∫ ϕ dP for all ϕ ∈ Γ, c ∈ R implies that R^{Γ,DV}_α ≤ W^Γ for α ∈ (0, 1), making (21) a useful regularization of the Rényi divergences in this case; this utility was demonstrated in Pantazis et al. (2022), where it was used to construct GANs. However, estimators built from the representation (21) (i.e., replacing P and Q by empirical measures) are known to be numerically unstable when α > 1. Below we provide a counterexample showing that, unlike for the IC-Γ-Rényi divergences, R^{Γ,DV}_α ≤ W^Γ fails in general when α > 1. We conjecture that this is a key reason for the instability of Γ-Rényi-Donsker-Varadhan estimators when α > 1.
Let P_{x,c} = cδ_0 + (1 − c)δ_x, Q = δ_0 for x > 0, c ∈ (0, 1) and Γ_L = Lip_L. Then for α > 1 we have R^{Γ_L,DV}_α(P_{x,c}∥Q) = (1/(α−1)) log( c + (1 − c) exp((α − 1)Lx) ) and W^{Γ_L}(P_{x,c}, Q) = (1 − c)Lx. Using strict concavity of the logarithm one can then obtain the bound
R^{Γ_L,DV}_α(P_{x,c}∥Q) > W^{Γ_L}(P_{x,c}, Q) .   (22)
This shows that, when α > 1, Γ-Rényi-DV violates the key property that allows the IC-Γ-Rényi divergences to inherit properties from the corresponding Γ-IPM. Another alternative is to begin with (3) and then reduce the test-function space to (1/α) log(Γ), where the logarithm is introduced to eliminate the exponential functions in (21). However, this definition also fails to provide an appropriate regularized Rényi divergence; in particular, it is incapable of meaningfully comparing Dirac distributions. See Appendix C.3 for details. These counterexamples lend further credence to our infimal-convolution based regularization approach (7).
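The closed forms in this counterexample make the violation (22) easy to verify numerically; a small sketch (ours):

import numpy as np

def dv_renyi(alpha, L, x, c):
    # Closed form of R^{Gamma_L,DV}_alpha(P_{x,c} || Q) from the text.
    return np.log(c + (1.0 - c) * np.exp((alpha - 1.0) * L * x)) / (alpha - 1.0)

alpha, L, x, c = 2.0, 1.0, 1.0, 0.5
ipm = (1.0 - c) * L * x                 # W^{Gamma_L}(P_{x,c}, Q)
assert dv_renyi(alpha, L, x, c) > ipm   # the violation in (22)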
6 NUMERICAL EXPERIMENTS
In this section we present numerical examples that demonstrate the use of the IC-Γ-Rényi divergences for both estimation and training of GANs (additional examples can be found in Appendix D). All of the divergences considered in this paper have a variational representation of the form D(P∥Q) = sup_{g∈Γ} H[g; P, Q] for some objective functional H; we use the corresponding estimator
D̂_n(P∥Q) := sup_{θ∈Θ} H[g_θ; P_n, Q_n]   (23)
where P_n, Q_n are n-sample empirical measures and g_θ is a family of neural networks (NN) with parameters θ ∈ Θ. For Lipschitz function spaces we weaken the Lipschitz constraint to a soft 1-sided gradient penalty (see Section 4.1 of Birrell et al. (2022a)). Optimization is performed using the Adam optimizer Kingma & Ba (2014). For the infimal convolution divergences we enforce negativity of the test functions (i.e., discriminators) using a final layer having one of the following forms: 1) −abs(x), or 2) −( (1/(1 − x)) 1_{x<0} + (1 + x) 1_{x≥0} ). The latter, which we term poly-softplus, is C^1 and decays like O(x^{−1}) as x → −∞.
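A minimal PyTorch rendering of these two negativity-enforcing final layers (our sketch of the formulas above):

import torch

def neg_abs(x):
    # Final layer option 1): -|x| (non-positive; zero only at x = 0).
    return -torch.abs(x)

def poly_softplus(x):
    # Final layer option 2): -1/(1-x) for x < 0 and -(1+x) for x >= 0;
    # C^1, strictly negative, and O(1/|x|) as x -> -infinity.
    return torch.where(x < 0, -1.0 / (1.0 - x), -(1.0 + x))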
6.1 VARIANCE OF RÉNYI ESTIMATORS
As a first example, we compare estimators of the classical Rényi divergences (i.e., without regularization) constructed from DV-Rényi (3) and CC-Rényi (4) in a simple case where the exact Rényi divergence is known. We let Q and P be 1000-dimensional Gaussians with equal variance and study R_α(P∥Q) as a function of the separation between their means. The results are shown in Figure 1. We see that the estimator based on the convex-conjugate Rényi variational formula (4) has smaller variance and mean-squared error (MSE) than the one based on the Rényi-Donsker-Varadhan variational formula (3), with the difference becoming very large when α ≫ 1 or when P and Q are far apart (i.e., when µ_q is large). The Rényi-Donsker-Varadhan estimator only works well when µ_q and α are both not too large, but even in such cases the convex-conjugate Rényi estimator generally performs better. We conjecture that this difference is due to the presence of risk-sensitive terms in (3) which were eliminated in the new representation (4). We note that the NN for the convex-conjugate Rényi estimator used the poly-softplus final layer, as we found the −abs final layer to result in a significant percentage of failed runs (i.e., NaN outputs); this issue did not arise when using poly-softplus. We do not show results for either DV-WCR or CC-WCR here as the exact divergence is infinite in this example.
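For reference, the exact value used as ground truth in this setting can be computed in closed form; the sketch below assumes the standard formula for the Rényi divergence between equal-covariance Gaussians (α∥µ_p − µ_q∥²/(2σ²) in the usual normalization), which under this paper's normalization R_α = D_α/α becomes independent of α.

import numpy as np

def renyi_gauss_equal_var(mu_p, mu_q, sigma, alpha):
    # R_alpha between N(mu_p, sigma^2 I) and N(mu_q, sigma^2 I) in this
    # paper's 1/(alpha(alpha-1)) normalization; alpha cancels out.
    d_alpha = alpha * np.sum((np.asarray(mu_p) - np.asarray(mu_q)) ** 2) / (2.0 * sigma ** 2)
    return d_alpha / alpha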
6.2 DETECTION OF RARE SUB-POPULATIONS IN SINGLE-CELL BIOLOGICAL DATASETS
A critical task in cancer assessment is the detection of rare sub-populations subsumed in the overall population of cells. The advent of affordable flow and mass cytometry technologies that perform single-cell measurements opens a new direction for the analysis and comparison of high-dimensional cell distributions Shahi et al. (2017) via divergence estimation. We consider single-cell mass cytometry measurements on 16 bone marrow protein markers (d = 16) coming from healthy individuals and from individuals with acute myeloid leukemia Levine et al. (2015). Following Weber et al. (2019), we create two datasets: one with only healthy samples and another one with a decreasing percentage of sick cells, and compute several divergences. Considering the estimated divergence value as the score of a binary classifier, we compute the ROC curve and the respective area under the ROC curve (AUC) for any pair of sample distributions. More specifically, true negatives correspond to the divergence values between two healthy datasets while true positives correspond to the divergence between a healthy and a diseased dataset. Thus, the AUC is 1.0 when the divergence estimates are completely separable while the AUC is 0.5 when they completely overlap. Table 1 reports the AUC values for the scaled IC-Γ-Rényi divergences (16), various levels of rarity, and two sample sizes for the datasets. The best performance in the Rényi family is obtained for α = ∞ using the IC-Γ-WCR variational formula (18). IC-Γ-WCR also outperforms the first-order Wasserstein distance in both sample size regimes.
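Schematically, the AUC computation treats the estimated divergences as classifier scores; a sketch with placeholder values (ours):

import numpy as np
from sklearn.metrics import roc_auc_score

neg = np.array([0.11, 0.09, 0.13])   # healthy-vs-healthy divergences (placeholder values)
pos = np.array([0.35, 0.28, 0.41])   # healthy-vs-diseased divergences (placeholder values)
labels = np.concatenate([np.zeros(len(neg)), np.ones(len(pos))])
auc = roc_auc_score(labels, np.concatenate([neg, pos]))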
6.3 IC-Γ-RÉNYI GANS
Finally, we study a pair of GAN examples (the second example is presented in Appendix D). Here the goal is to learn a distribution P using a family of generator distributions Q_ψ ∼ h_ψ(X), where X is a noise source and h_ψ is a family of neural networks parametrized by ψ ∈ Ψ; i.e., the goal is to solve
inf_{ψ∈Ψ} D̂_n(P∥Q_ψ) ,   (24)
where D̂_n is a divergence estimator of the form (23). In particular, we will study the GANs constructed from the newly introduced IC-Γ-Rényi and IC-Γ-WCR divergences and compare them with Wasserstein GAN Gulrajani et al. (2017); Arjovsky et al. (2017).
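For concreteness, one alternating optimization step for (24) with the IC objective (11) might look as follows. This is a schematic sketch of ours, not the authors' code; the Γ constraint (e.g., the gradient penalty sketched earlier) would be added to the discriminator loss.

import math
import torch

def ic_renyi_gan_step(g_net, h_net, x_real, noise, opt_d, opt_g, alpha):
    # Discriminator ascent on the objective in (11): Q is the generator
    # distribution and P the data; g_net must end in a negative final layer.
    const = (math.log(alpha) + 1.0) / alpha
    x_fake = h_net(noise).detach()
    obj = (g_net(x_fake).mean()
           + torch.log(g_net(x_real).abs().pow((alpha - 1.0) / alpha).mean()) / (alpha - 1.0)
           + const)
    opt_d.zero_grad(); (-obj).backward(); opt_d.step()
    # Generator descent: only the int g dQ_psi term depends on psi.
    gen_loss = g_net(h_net(noise)).mean()
    opt_g.zero_grad(); gen_loss.backward(); opt_g.step()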
6.3.1 CIFAR-10
In Figure 2 we demonstrate improved performance of the IC-Γ-Rényi and IC-Γ-WCR GANs, as compared to Wasserstein GAN with gradient penalty (WGAN-GP), on the CIFAR-10 dataset Krizhevsky et al. (2009). The IC GANs also outperform the Rényi-DV GAN (21), as the latter is highly unstable when α > 1, so its training generally encounters NaNs after a small number of training epochs (hence we omit those results from the figure). We use the same ResNet neural network architecture as in (Gulrajani et al., 2017, Appendix F) and focus on evaluating the effect of different divergences. Here we let Γ be the set of 1-Lipschitz functions, implemented via a gradient penalty. Note that D^{Γ,IC}_∞ performs significantly better than R^{Γ,IC}_α with large α, and the rescaled D^{Γ,IC}_α-GAN performs better than the R^{Γ,IC}_α-GAN when α is large.
ACKNOWLEDGMENTS
The research of J.B., M.K. and L.R.-B. was partially supported by the Air Force Office of Scientific Research (AFOSR) under the grant FA9550-21-1-0354. The research of M. K. and L.R.-B. was partially supported by the National Science Foundation (NSF) under the grants DMS-2008970 and TRIPODS CISE-1934846. The research of P.D. was partially supported by the NSF under the grant DMS-1904992 and by the AFOSR under the grant FA-9550-21-1-0354. The work of Y.P. was partially supported by the Hellenic Foundation for Research and Innovation (HFRI) through the “Second Call for HFRI Research Projects to support Faculty Members and Researchers” under Project 4753. This work was performed in part using high performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative.
A DERIVATION OF VARIATIONAL FORMULAS FOR THE CLASSICAL RÉNYI DIVERGENCES
In this appendix we provide several variational formulas for the classical Rényi divergences, some of which are new. In the following we let (Ω,M) denote a measurable space, M(Ω) be the space of measurable real-valued functions on Ω, Mb(Ω) the subspace of bounded functions, and P(Ω) will be the space of probability measures on Ω.
First we recall the Rényi-Donsker-Varadhan variational formula derived in Birrell et al. (2021). This is a generalization of the Donsker-Varadhan variational representation of the KL divergence.
Theorem A.1 (Rényi-Donsker-Varadhan Variational Formula). Let P and Q be probability measures on (Ω,M) and α ∈ R, α ≠ 0, 1. Then for any set of functions, Φ, with M_b(Ω) ⊂ Φ ⊂ M(Ω) we have
R_α(P∥Q) = sup_{ϕ∈Φ} { (1/(α−1)) log ∫ e^{(α−1)ϕ} dP − (1/α) log ∫ e^{αϕ} dQ } ,   (25)
where we interpret ∞ − ∞ ≡ −∞ and −∞ + ∞ ≡ −∞.
If in addition (Ω,M) is a metric space with the Borel σ-algebra then (25) holds for all Φ that satisfy Lip_b(Ω) ⊂ Φ ⊂ M(Ω), where Lip_b(Ω) is the space of bounded Lipschitz functions on Ω (i.e., Lipschitz for any Lipschitz constant L ∈ (0,∞)).
Using Theorem A.1 we can derive a new variational representation that takes the form of a convex conjugate.
Theorem A.2 (Convex-Conjugate Rényi Variational Formula). Let P, Q ∈ P(Ω) and α ∈ (0, 1) ∪ (1,∞). Then
R_α(P∥Q) = sup_{g∈M_b(Ω): g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .   (26)
If (Ω,M) is a metric space with the Borel σ-algebra then (26) holds with M_b(Ω) replaced by C_b(Ω), the space of bounded continuous real-valued functions on Ω.
Proof. Let Φ = {α^{−1} log(−h) : h ∈ M_b(Ω), h < 0}. We have M_b(Ω) ⊂ Φ ⊂ M(Ω), hence Theorem A.1 implies
R_α(P∥Q) = sup_{ϕ∈Φ} { (1/(α−1)) log ∫ e^{(α−1)ϕ} dP − (1/α) log ∫ e^{αϕ} dQ }   (27)
= sup_{h∈M_b(Ω): h<0} { (1/(α−1)) log ∫ e^{(α−1)(α^{−1} log(−h))} dP − (1/α) log ∫ e^{α(α^{−1} log(−h))} dQ }
= sup_{h∈M_b(Ω): h<0} { (1/(α−1)) log ∫ |h|^{(α−1)/α} dP − (1/α) log ∫ (−h) dQ } .
Note that the second term is finite but the first term is possibly infinite when α ∈ (0, 1). Next use the identity
log(c) = inf_{z∈R} { z − 1 + c e^{−z} } ,  c ∈ (0,∞) ,   (28)
in the second term to write
R_α(P∥Q) = sup_{h∈M_b(Ω): h<0} { (1/(α−1)) log ∫ |h|^{(α−1)/α} dP − (1/α) inf_{z∈R} { z − 1 + e^{−z} ∫ (−h) dQ } }   (29)
= sup_{z∈R} sup_{h∈M_b(Ω): h<0} { (1/(α−1)) log ∫ |h|^{(α−1)/α} dP − (z − 1)/α + α^{−1} e^{−z} ∫ h dQ } .
For each z ∈ R make the change of variables h = α e^z g, g ∈ M_b(Ω), g < 0 in the inner supremum to derive
R_α(P∥Q) = sup_{z∈R} sup_{g∈M_b(Ω): g<0} { (1/(α−1)) log ∫ |α e^z g|^{(α−1)/α} dP − (z − 1)/α + α^{−1} e^{−z} ∫ α e^z g dQ }   (30)
= sup_{z∈R} sup_{g∈M_b(Ω): g<0} { (1/(α−1)) log ∫ |g|^{(α−1)/α} dP + α^{−1}(log α + 1) + ∫ g dQ }
= sup_{g∈M_b(Ω): g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .
This completes the proof of (26). The proof of the metric-space version is nearly identical.
Remark A.3. To reverse the above derivation and obtain (25) (with Φ = {ϕ ∈ M(Ω) : ϕ is bounded above}) from (26), change variables g ↦ −c exp(αϕ), ϕ ∈ Φ, c > 0 in (26) and then maximize over c.
Corollary A.4. If X is a compact metric space with the Borel σ-algebra, P, Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞) then C_b(X) = C(X) and so
R_α(P∥Q) = sup_{g∈C(X): g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .   (31)
Next we derive a variational formula for the worst-case regret, defined by
D_∞(P∥Q) := lim_{α→∞} α R_α(P∥Q) .   (32)
Theorem A.5. Let P, Q ∈ P(Ω). Then
D_∞(P∥Q) = sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 .   (33)
If Ω is a metric space with the Borel σ-algebra then (33) holds with M_b(Ω) replaced by C_b(Ω).
Remark A.6. Note that on a compact metric space, the space of bounded continuous functions is the same as the space of all continuous functions.
Proof. Recall from Van Erven & Harremos (2014) that
D_∞(P∥Q) = log( ess sup_P dP/dQ ) if P ≪ Q, and D_∞(P∥Q) = ∞ if P ̸≪ Q .   (34)
First suppose P ̸≪ Q. Then there exists a measurable set A with Q(A) = 0 and P(A) > 0. Let g_n = −n 1_A − 1_{A^c}. Then
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 ≥ ∫ g_n dQ + log ∫ |g_n| dP + 1   (35)
= −n Q(A) − Q(A^c) + log( n P(A) + P(A^c) ) + 1 = log( n P(A) + P(A^c) ) → ∞   (36)
as n → ∞. Therefore
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 = ∞ = D_∞(P∥Q) .   (37)
Now suppose P ≪ Q. Using the definition (32) along with Theorem A.2 and changing variables g = g̃/α we have
D_∞(P∥Q) = lim_{α→∞} α R_α(P∥Q)   (38)
= lim_{α→∞} [ sup_{g∈M_b(Ω): g<0} { ∫ αg dQ + (α/(α−1)) log ∫ |g|^{(α−1)/α} dP } + (log α + 1) ]
≥ lim_{α→∞} [ ∫ g̃ dQ + (α/(α−1)) log ∫ |g̃/α|^{(α−1)/α} dP + (log α + 1) ]
= lim_{α→∞} [ ∫ g̃ dQ + (α/(α−1)) log ∫ |g̃|^{(α−1)/α} dP + 1 ]
= ∫ g̃ dQ + log ∫ |g̃| dP + 1
for all g̃ ∈ M_b(Ω), g̃ < 0. Here we used the dominated convergence theorem to evaluate the limit. Hence, by maximizing over g̃ we obtain
D_∞(P∥Q) ≥ sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 .   (39)
To prove the reverse inequality, take any r ∈ (0, ess sup_P dP/dQ). By definition of the essential supremum we have P(dP/dQ > r) > 0. We also have the bound
P(dP/dQ > r) = ∫ 1_{dP/dQ>r} (dP/dQ) dQ ≥ ∫ 1_{dP/dQ>r} r dQ = r Q(dP/dQ > r) .   (40)
For c, ϵ > 0 define g_{c,ϵ} = −c 1_{dP/dQ>r} − ϵ. These satisfy g_{c,ϵ} ∈ M_b(Ω), g_{c,ϵ} < 0 and so
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 ≥ ∫ g_{c,ϵ} dQ + log ∫ |g_{c,ϵ}| dP + 1   (41)
= −c Q(dP/dQ > r) − ϵ + log( c P(dP/dQ > r) + ϵ ) + 1
≥ −c P(dP/dQ > r)/r − ϵ + log( c P(dP/dQ > r) + ϵ ) + 1 .
Letting ϵ → 0+ we find
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 ≥ −c P(dP/dQ > r)/r + log( c P(dP/dQ > r) ) + 1   (42)
for all c > 0. We have P(dP/dQ > r) > 0, hence by maximizing over c > 0 and changing variables to z = c P(dP/dQ > r) we obtain
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 ≥ sup_{z>0} { −z/r + log(z) + 1 } = log(r) .   (43)
This holds for all r < ess sup_P dP/dQ, therefore we can take r ↗ ess sup_P dP/dQ and use (34) to conclude
sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 ≥ log( ess sup_P dP/dQ ) = D_∞(P∥Q) .   (44)
Combining this with (39) completes the proof of (33).
Now suppose Ω is a metric space. We clearly have
D_∞(P∥Q) = sup_{g∈M_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1   (45)
≥ sup_{g∈C_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } + 1 .
To prove the reverse inequality, take any g ∈ M_b(Ω) with g < 0. By Lusin's theorem, for all ϵ > 0 there exists a closed set E_ϵ and h_ϵ ∈ C_b(Ω) such that P(E_ϵ^c) ≤ ϵ, Q(E_ϵ^c) ≤ ϵ, h_ϵ|_{E_ϵ} = g, and inf g ≤ h_ϵ ≤ 0. Define g_ϵ = h_ϵ − ϵ. Then g_ϵ < 0, g_ϵ ∈ C_b(Ω) and we have
sup_{g∈C_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } ≥ ∫ g_ϵ dQ + log ∫ |g_ϵ| dP   (46)
= ∫ g dQ + ∫ (h_ϵ − g) 1_{E_ϵ^c} dQ − ϵ + log( ∫ |g| dP + ∫ (|h_ϵ| − |g|) 1_{E_ϵ^c} dP + ϵ )
≥ ∫ g dQ − (sup g − inf g) Q(E_ϵ^c) − ϵ + log( ∫ |g| dP + inf g · P(E_ϵ^c) + ϵ )
≥ ∫ g dQ − (sup g − inf g) ϵ − ϵ + log( ∫ |g| dP + inf g · ϵ + ϵ ) .
Taking the limit ϵ → 0+ we therefore obtain
sup_{g∈C_b(Ω): g<0} { ∫ g dQ + log ∫ |g| dP } ≥ ∫ g dQ + log ∫ |g| dP .   (47)
This holds for all g ∈ M_b(Ω) with g < 0, hence by taking the supremum over g we obtain the reverse inequality to (45). This completes the proof.
B PROOFS
In this appendix we provide a number of proofs that were omitted from the main text. Recall that X denotes a compact metric space.
Lemma B.1. Let Γ ⊂ C(X) and P, Q ∈ P(X). Then α R^{Γ/α,IC}_α(P∥Q) is non-decreasing in α ∈ (0, 1) ∪ (1,∞). If 0 ∈ Γ then α R^{Γ,IC}_α(P∥Q) is also non-decreasing.
Proof. If 0 ∈ Γ then W^Γ ≥ 0, hence
α R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { α R_α(P∥η) + α W^Γ(Q, η) }   (48)
where both α ↦ α R_α(P∥η) and α ↦ α W^Γ(Q, η) are non-decreasing. Therefore the infimum is as well. The proof for α ↦ α R^{Γ/α,IC}_α(P∥Q) is similar, though it doesn't require the assumption 0 ∈ Γ due to the identity α W^{Γ/α} = W^Γ.
Next we prove a key lemma that is used in our main result. First recall the definition
Λ^P_α[g] := ∞·1_{g̸<0} − ( (1/(α−1)) log ∫ |g|^{(α−1)/α} dP + α^{−1}(log α + 1) ) 1_{g<0} ,  g ∈ C(X) .   (49)
Lemma B.2. Λ^P_α is convex and is continuous on {g ∈ C(X) : g < 0}, an open subset of C(X).
Proof. First we prove convexity. Let g_0, g_1 ∈ {g ∈ C(X) : g < 0} and λ ∈ (0, 1). For α ∈ (0, 1) we can use the inequality λa + (1−λ)b ≥ a^λ b^{1−λ} for all a, b > 0 to compute
−(1/(α−1)) log ∫ |λg_1 + (1−λ)g_0|^{(α−1)/α} dP ≤ −(1/(α−1)) log ∫ (|g_1|^λ |g_0|^{1−λ})^{(α−1)/α} dP .   (50)
Using Hölder's inequality with exponents p = 1/λ, q = 1/(1−λ) we then obtain
−(1/(α−1)) log ∫ (|g_1|^λ |g_0|^{1−λ})^{(α−1)/α} dP   (51)
≤ −(1/(α−1)) log ( (∫ |g_1|^{(α−1)/α} dP)^λ (∫ |g_0|^{(α−1)/α} dP)^{1−λ} )
= λ ( −(1/(α−1)) log ∫ |g_1|^{(α−1)/α} dP ) + (1−λ) ( −(1/(α−1)) log ∫ |g_0|^{(α−1)/α} dP ) .
Therefore g ↦ −(1/(α−1)) log ∫ |g|^{(α−1)/α} dP is convex on {g < 0}. This proves Λ^P_α is convex when α ∈ (0, 1).
Now suppose α > 1. The map t > 0, t ↦ t^{(α−1)/α} is concave and −log is decreasing and convex, hence
−(1/(α−1)) log ∫ |λg_1 + (1−λ)g_0|^{(α−1)/α} dP   (52)
≤ −(1/(α−1)) log ( λ ∫ |g_1|^{(α−1)/α} dP + (1−λ) ∫ |g_0|^{(α−1)/α} dP )
≤ λ ( −(1/(α−1)) log ∫ |g_1|^{(α−1)/α} dP ) + (1−λ) ( −(1/(α−1)) log ∫ |g_0|^{(α−1)/α} dP ) .
This proves that Λ^P_α is also convex when α > 1. Openness of {g < 0} follows from the assumption that X is compact, so any strictly negative continuous function is strictly bounded away from zero. Continuity on {g < 0} then follows from the dominated convergence theorem.
Now we prove our main theorem, deriving the dual variational formula and other important properties of the IC-Γ-Rényi divergences.
Theorem B.3. Let Γ ⊂ C(X) be admissible, P, Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then:
1. R^{Γ,IC}_α(P∥Q) = sup_{g∈Γ: g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .   (53)
2. If (53) is finite then there exists η* ∈ P(X) such that
R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { R_α(P∥η) + W^Γ(Q, η) } = R_α(P∥η*) + W^Γ(Q, η*) .   (54)
3. R^{Γ,IC}_α(P∥Q) is convex in Q. If α ∈ (0, 1) then R^{Γ,IC}_α(P∥Q) is jointly convex in (P, Q).
4. (P, Q) ↦ R^{Γ,IC}_α(P∥Q) is lower semicontinuous.
5. R^{Γ,IC}_α(P∥Q) ≥ 0 with equality if P = Q.
6. R^{Γ,IC}_α(P∥Q) ≤ min{ R_α(P∥Q), W^Γ(Q, P) }.
7. If Γ is strictly admissible then R^{Γ,IC}_α has the divergence property.
Proof. 1. Define F, G : C(X) → (−∞,∞] by F = Λ^P_α and G[g] = ∞·1_{g∉Γ} − E_Q[g]. Using the assumptions on Γ along with Lemma B.2 we see that F and G are convex, F[−1] < ∞, G[−1] < ∞, and F is continuous at −1. Therefore Fenchel-Rockafellar duality (see, e.g., Theorem 4.4.3 in Borwein & Zhu (2006)) along with the identity C(X)* = M(X) gives
sup_{g∈C(X)} { −F[g] − G[g] } = inf_{η∈M(X)} { F*[η] + G*[−η] } ,   (55)
and if either side is finite then the infimum on the right-hand side is achieved at some η* ∈ M(X). Using the definitions, we can rewrite the left-hand side as follows:
sup_{g∈C(X)} { −F[g] − G[g] }   (56)
= sup_{g∈Γ: g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .
We can also compute
G*[−η] = sup_{g∈C(X)} { −∫ g dη − (∞·1_{g∉Γ} − E_Q[g]) } = W^Γ(Q, η) .   (57)
Therefore
inf_{η∈M(X)} { (Λ^P_α)*[η] + W^Γ(Q, η) }   (58)
= sup_{g∈Γ: g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1) .
Next we show that the infimum over M(X) can be restricted to P(X). First suppose η ∈ M(X) with η(X) ≠ 1. Then, using the assumption that Γ contains the constant functions, we have
W^Γ(Q, η) ≥ E_Q[±n] − ∫ ±n dη = ±n(1 − η(X)) → ∞   (59)
as n → ∞ (for an appropriate choice of sign). Therefore W^Γ(Q, η) = ∞ if η(X) ≠ 1. This implies that the infimum can be restricted to {η ∈ M(X) : η(X) = 1}.
Now suppose η ∈ M(X) is not positive. Take a measurable set A with η(A) < 0. By Lusin's theorem, for all ϵ > 0 there exists a closed set E_ϵ ⊂ X and a continuous function g_ϵ ∈ C(X) such that |η|(E_ϵ^c) < ϵ, 0 ≤ g_ϵ ≤ 1, and g_ϵ|_{E_ϵ} = 1_A. Define g_{n,ϵ} = −n g_ϵ − 1, n ∈ Z_+. Then g_{n,ϵ} ∈ {g ∈ C(X) : g < 0}, hence
(Λ^P_α)*[η] ≥ ∫ g_{n,ϵ} dη + (1/(α−1)) log ∫ |g_{n,ϵ}|^{(α−1)/α} dP + α^{−1}(log α + 1)   (60)
= −n η(A) + n η(A ∩ E_ϵ^c) − n ∫ g_ϵ 1_{E_ϵ^c} dη − η(X) + (1/(α−1)) log ∫ |n g_ϵ + 1|^{(α−1)/α} dP + α^{−1}(log α + 1)
≥ n(|η(A)| − 2ϵ) − η(X) + (1/(α−1)) log ∫ |n g_ϵ + 1|^{(α−1)/α} dP + α^{−1}(log α + 1) .
If α > 1 then log ∫ |n g_ϵ + 1|^{(α−1)/α} dP ≥ 0 and if α ∈ (0, 1) then log ∫ |n g_ϵ + 1|^{(α−1)/α} dP ≤ 0. In either case we have (1/(α−1)) log ∫ |n g_ϵ + 1|^{(α−1)/α} dP ≥ 0 and so
(Λ^P_α)*[η] ≥ n(|η(A)| − 2ϵ) − η(X) + α^{−1}(log α + 1) .   (61)
By choosing ϵ < |η(A)|/2 and taking n → ∞ we see that (Λ^P_α)*[η] = ∞ whenever η ∈ M(X) is not positive. Therefore the infimum can further be restricted to positive measures. Combining these results we find
sup_{g∈Γ: g<0} { ∫ g dQ + (1/(α−1)) log ∫ |g|^{(α−1)/α} dP } + α^{−1}(log α + 1)   (62)
= inf_{η∈P(X)} { (Λ^P_α)*[η] + W^Γ(Q, η) } .
For η ∈ P(X), equation (10) implies (Λ^P_α)*[η] = R_α(P∥η). This completes the proof.
2. The existence of a minimizer follows from Fenchel-Rockafellar duality; again, see Theorem 4.4.3 in Borwein & Zhu (2006).
3. This follows from (53) together with the fact that the supremum of convex functions is convex and y ↦ (1/(α−1)) log(y) is convex when α ∈ (0, 1).
4. Compactness of X implies that g and |g|^{(α−1)/α} are bounded and continuous whenever g ∈ Γ satisfies g < 0. Therefore Q ↦ ∫ g dQ and P ↦ ∫ |g|^{(α−1)/α} dP are continuous in the weak topology on P(X). Hence the objective functional in (53) is continuous in (P, Q), and the supremum is therefore lower semicontinuous.
5. R_α is a divergence, hence is non-negative. Γ contains the constant functions, hence W^Γ ≥ 0. Therefore R^{Γ,IC}_α ≥ 0. If Q = P then 0 ≤ R^{Γ,IC}_α(P∥Q) ≤ R_α(P∥P) + W^Γ(P, P) = 0, hence R^{Γ,IC}_α(P∥Q) = 0.
6. This easily follows from the definition (7), by choosing η = Q or η = P.
7. Suppose Γ is strictly admissible. Due to part 5 of this theorem, we only need to show that if R^{Γ,IC}_α(P∥Q) = 0 then P = Q. If R^{Γ,IC}_α(P∥Q) = 0 then part 2 implies there exists η* ∈ P(X) such that
0 = R_α(P∥η*) + W^Γ(Q, η*) .   (63)
Both terms are non-negative, hence R_α(P∥η*) = 0 = W^Γ(Q, η*). R_α has the divergence property, hence η* = P. So W^Γ(Q, P) = 0. Therefore 0 ≥ ∫ g dQ − ∫ g dP for all g ∈ Γ. Let Ψ be as in the definition of strict admissibility and let ψ ∈ Ψ. There exists c ∈ R, ϵ > 0 such that c ± ϵψ ∈ Γ and so 0 ≥ ±ϵ( ∫ ψ dQ − ∫ ψ dP ). Therefore ∫ ψ dQ = ∫ ψ dP for all ψ ∈ Ψ. Ψ is P(X)-determining, hence Q = P.
Remark B.4. The Fenchel-Rockafellar Theorem applies under two different sets of assumptions: the first assumes both mappings are lower semicontinuous (LSC) while the second applies when one mapping is continuous at a point where both are finite. The mapping Λ^P_α, as defined by (49) and appearing in (10), is not LSC but it is continuous on its domain, hence we used the second version of Fenchel-Rockafellar in our proof of Theorem B.3. For α > 1 one could alternatively redefine Λ^P_α along the boundary of {g < 0} to make it LSC while still maintaining the relation (10) and thereby utilize the first version of Fenchel-Rockafellar. This alternative approach is also amenable to extending the theorem to non-compact spaces, using the methods from Dupuis, Paul & Mao, Yixiang (2022); Birrell et al. (2022a). However, these methods do not apply to α ∈ (0, 1). With this in mind, in order to provide a simple unified treatment of all α ∈ (0, 1) ∪ (1,∞) we structured our proof around the second version of Fenchel-Rockafellar.
Despite the fact that Λ^P_α is not LSC, the Fenchel-Rockafellar Theorem does imply that convex duality holds at all points of continuity in the domain, i.e., one has
Λ^P_α[g] = sup_{η∈M(X)} { ∫ g dη − R_α(P∥η) }  for all g < 0 ,   (64)
but this duality formula doesn't necessarily hold if g ̸< 0. Here, R_α(P∥η) for general η ∈ M(X) is defined via the variational formula
R_α(P∥η) := (Λ^P_α)*[η] = sup_{g∈C(X)} { ∫ g dη − Λ^P_α[g] }   (65)
and one can rewrite this in terms of the classical Rényi divergence as follows:
R_α(P∥η) = ∞ if η ̸≥ 0 or η = 0, and R_α(P∥η) = R_α(P∥ η/∥η∥ ) − (1/α) log ∥η∥ if η is a nonzero positive measure.   (66)
Next we prove the limiting results from Theorem 4.1.
Theorem B.5. Let Γ ⊂ C(X) be admissible, P, Q ∈ P(X), and α ∈ (0, 1) ∪ (1,∞). Then
lim_{δ→0+} (1/δ) R^{δΓ,IC}_α(P∥Q) = W^Γ(Q, P)   (67)
and if Γ is strictly admissible we have
lim_{L→∞} R^{LΓ,IC}_α(P∥Q) = R_α(P∥Q) .   (68)
Proof. It is straightforward to show that the scaled function spaces are admissible and W^{cΓ} = c W^Γ for all c > 0. First we prove (67). From the definition (7) we have
δ^{−1} R^{δΓ,IC}_α(P∥Q) = inf_{η∈P(X)} { δ^{−1} R_α(P∥η) + W^Γ(Q, η) } ≤ W^Γ(Q, P)   (69)
and so δ^{−1} R^{δΓ,IC}_α(P∥Q) is non-increasing in δ. Therefore
lim_{δ→0+} δ^{−1} R^{δΓ,IC}_α(P∥Q) = sup_{δ>0} δ^{−1} R^{δΓ,IC}_α(P∥Q)   (70)
and
lim_{δ→0+} δ^{−1} R^{δΓ,IC}_α(P∥Q) ≤ W^Γ(Q, P) .   (71)
We will assume this inequality is strict and derive a contradiction. This assumption, together with (70), implies R^{δΓ,IC}_α(P∥Q) < ∞ for all δ > 0. Part (2) of Theorem 3.4 then implies the existence of η_{*,δ} ∈ P(X) such that
δ^{−1} R^{δΓ,IC}_α(P∥Q) = δ^{−1} R_α(P∥η_{*,δ}) + W^Γ(Q, η_{*,δ}) ≥ W^Γ(Q, η_{*,δ}) .   (72)
Take a sequence δ_n → 0+. We have assumed X is compact, hence P(X) is also compact and so there exists a weakly convergent subsequence η_{*,δ_{n_j}} → η_*. From the variational formulas (25) and (8) we see that R_α(P∥·) and W^Γ(Q, ·) are lower semicontinuous, hence lim inf_j W^Γ(Q, η_{*,δ_{n_j}}) ≥ W^Γ(Q, η_*) and
R_α(P∥η_*) ≤ lim inf_j R_α(P∥η_{*,δ_{n_j}}) ≤ lim inf_j δ_{n_j} ( δ_{n_j}^{−1} R_α(P∥η_{*,δ_{n_j}}) + W^Γ(Q, η_{*,δ_{n_j}}) )   (73)
= lim inf_j δ_{n_j} ( δ_{n_j}^{−1} R^{δ_{n_j}Γ,IC}_α(P∥Q) ) = 0 ,   (74)
where the last equality follows from the assumed strictness of the inequality (71). Therefore the divergence property for the classical Rényi divergences implies R_α(P∥η_*) = 0 and P = η_*. Combining the above results we obtain
lim_{δ→0+} δ^{−1} R^{δΓ,IC}_α(P∥Q) = lim_{j→∞} δ_{n_j}^{−1} R^{δ_{n_j}Γ,IC}_α(P∥Q) ≥ lim inf_j W^Γ(Q, η_{*,δ_{n_j}})   (75)
≥ W^Γ(Q, η_*) = W^Γ(Q, P) .
This contradicts (71) and therefore we have proven the equality (67).
Now we assume Γ is strictly admissible and will prove (68) via similar reasoning. From the definition (7) we see that
R^{LΓ,IC}_α(P∥Q) = inf_{η∈P(X)} { R_α(P∥η) + L W^Γ(Q, η) } ≤ R_α(P∥Q)   (76)
and R^{LΓ,IC}_α is non-decreasing in L. Hence lim_{L→∞} R^{LΓ,IC}_α(P∥Q) = sup_{L>0} R^{LΓ,IC}_α(P∥Q) and
lim_{L→∞} R^{LΓ,IC}_α(P∥Q) ≤ R_α(P∥Q) .   (77)
Suppose this inequality is strict. Then R^{LΓ,IC}_α(P∥Q) < ∞ for all L and we can use part (2) of Theorem 3.4 to conclude there exists η_{*,L} ∈ P(X) such that
R^{LΓ,IC}_α(P∥Q) = R_α(P∥η_{*,L}) + L W^Γ(Q, η_{*,L}) .   (78)
Take L_n → ∞. Compactness of P(X) implies the existence of a weakly convergent subsequence η_{*,j} := η_{*,L_{n_j}} → η_* ∈ P(X). Lower semicontinuity of R_α(P∥·) and W^Γ(Q, ·) imply lim inf_j R_α(P∥η_{*,j}) ≥ R_α(P∥η_*) and
W^Γ(Q, η_*) ≤ lim inf_j W^Γ(Q, η_{*,j}) = lim inf_j L_{n_j}^{−1} W^{L_{n_j}Γ}(Q, η_{*,j})   (79)
≤ lim inf_j L_{n_j}^{−1} R^{L_{n_j}Γ,IC}_α(P∥Q) = 0 ,
where the last equality follows from the assumed strictness of the inequality (77). Therefore W^Γ(Q, η_*) = 0. Γ is strictly admissible, hence Q = η_* (see the proof of part (7) of Theorem 3.4). Combining these results we see that
lim_{L→∞} R^{LΓ,IC}_α(P∥Q) = lim_j R^{L_{n_j}Γ,IC}_α(P∥Q) = lim_j ( R_α(P∥η_{*,L_{n_j}}) + L_{n_j} W^Γ(Q, η_{*,L_{n_j}}) )   (80)
≥ lim inf_j R_α(P∥η_{*,j}) ≥ R_α(P∥η_*) = R_α(P∥Q) .
This contradicts the assumed strictness of the inequality (77) and hence (77) is an equality. This completes the proof.
Next we prove Theorem 4.2, regarding the α → 1 limit of the IC-Γ-Rényi divergences.
Theorem B.6. Let Γ ⊂ C(X) be admissible and P, Q ∈ P(X). Then
lim_{α→1+} R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X): ∃β>1, R_β(P∥η)<∞} { R(P∥η) + W^Γ(Q, η) } ,   (81)
lim_{α→1−} R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X)} { R(P∥η) + W^Γ(Q, η) }   (82)
= sup_{g∈Γ: g<0} { ∫ g dQ + ∫ log|g| dP } + 1 .   (83)
Proof. Lemma B.1 implies α ↦ α R^{Γ,IC}_α(P∥Q) is non-decreasing on (1,∞), therefore
lim_{α→1+} α R^{Γ,IC}_α(P∥Q) = inf_{α>1} α R^{Γ,IC}_α(P∥Q)   (84)
= inf_{α>1} inf_{η∈P(X)} { α R_α(P∥η) + α W^Γ(Q, η) }
= inf_{η∈P(X)} inf_{α>1} { α R_α(P∥η) + α W^Γ(Q, η) }
= inf_{η∈P(X)} { lim_{α→1+} α R_α(P∥η) + W^Γ(Q, η) } .
From Van Erven & Harremos (2014) we have
lim_{α→1+} R_α(P∥η) = R(P∥η) if there exists β > 1 with R_β(P∥η) < ∞, and lim_{α→1+} R_α(P∥η) = ∞ otherwise,   (85)
and so we can conclude
lim_{α→1+} R^{Γ,IC}_α(P∥Q) = lim_{α→1+} α R^{Γ,IC}_α(P∥Q) = inf_{η∈P(X): ∃β>1, R_β(P∥η)<∞} { R(P∥η) + W^Γ(Q, η) } .   (86)
This proves (81).
Now we compute the limit as α → 1−. Note that the limit exists because α ↦ α R^{Γ,IC}_α(P∥Q) is non-decreasing. From the definition (7), for all η ∈ P(X) we have
lim_{α→1−} R^{Γ,IC}_α(P∥Q) ≤ lim_{α→1−} R_α(P∥η) + W^Γ(Q, η) = R(P∥η) + W^Γ(Q, η) .   (87)
Here we used the fact that lim_{α→1−} R_α(P∥η) = R(P∥η) (see Van Erven & Harremos (2014)). Minimizing over η then gives
lim_{α→1−} R^{Γ,IC}_α(P∥Q) ≤ inf_{η∈P(X)} { R(P∥η) + W^Γ(Q, η) } .   (88)
To prove the reverse inequality, use part 1 of Theorem 3.4 to compute
lim_{α→1−} R^{Γ,IC}_α(P∥Q) = lim_{α→1−} α R^{Γ,IC}_α(P∥Q)   (89)
= lim_{α→1−} sup_{g∈Γ: g<0} { α ∫ g dQ + (α/(α−1)) log ∫ |g|^{(α−1)/α} dP + log α + 1 }
≥ ∫ g dQ + lim_{α→1−} (α/(α−1)) log ∫ |g|^{(α−1)/α} dP + 1
= ∫ g dQ + (d/dy)|_{y=0} log ∫ e^{y log|g|} dP + 1
= ∫ g dQ + ∫ log|g| dP + 1
for all g ∈ Γ, g < 0. Therefore, maximizing over g gives
lim_{α→1−} R^{Γ,IC}_α(P∥Q) ≥ sup_{g∈Γ: g<0} { ∫ g dQ + ∫ log|g| dP } + 1 .   (90)
We now use Fenchel-Rockafellar duality (Theorem 4.4.3 in Borwein & Zhu (2006)) to compute the dual variational representation of the right-hand side of (90). Define F, G : C(X) → (−∞,∞] by F[g] = ∞·1_{g̸<0} − ∫ log|g| dP and G[g] = ∞·1_{g∉Γ} − E_Q[g]. It is straightforward to show that F and G are convex, F[−1] < ∞, G[−1] < ∞, and F is continuous at −1. Therefore
inf_{g∈C(X)} { F[g] + G[g] } = sup_{η∈M(X)} { −F*(−η) − G*(η) } ,   (91)
i.e.,
sup_{g∈Γ: g<0} { E_Q[g] + ∫ log|g| dP } = inf_{η∈M(X)} { F*(η) + W^Γ(Q, η) } ,   (92)
where F*(η) = sup_{g∈C(X): g<0} { ∫ g dη + ∫ log|g| dP }. Now we show the infimum can be restricted to η ∈ P(X): if η(X) ≠ 1 then by taking g = ±n we find
W^Γ(Q, η) ≥ n|Q(X) − η(X)| → ∞   (93)
as n → ∞. Therefore W^Γ(Q, η) = ∞ if η(X) ≠ 1. Now suppose η ∈ M(X) is not positive. Take a measurable set A with η(A) < 0. By Lusin's theorem, for all ϵ > 0 there exists a closed set E_ϵ ⊂ X and a continuous function g_ϵ ∈ C(X) such that |η|(E_ϵ^c) < ϵ, 0 ≤ g_ϵ ≤ 1, and g_ϵ|_{E_ϵ} = 1_A. Define g_{n,ϵ} = −n g_ϵ − 1, n ∈ Z_+. Then g_{n,ϵ} ∈ {g ∈ C(X) : g < 0}
1. What is the focus of the paper regarding Renyi divergences?
2. What are the strengths of the proposed regularized family of Renyi divergences?
3. Do you have any concerns or suggestions for improvements regarding the paper's experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Note: These questions aim to identify the primary concerns and inquiries that the review addresses without being too specific. They should not be considered a comprehensive list but rather an attempt to initiate further discussion and analysis.
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper discusses a regularized family of Rényi divergences using a variational function form. For this family of divergences the paper discusses various properties relating to limits, interpolations, and the data processing inequality. Examples of the importance of using the new variational Rényi divergences are demonstrated through examples (and counterexamples). Finally, the paper ends with a discussion of numerical applications: a) the variance of the convex-conjugate Rényi divergence estimators is much lower than that of estimators designed using the Rényi-Donsker-Varadhan variational formula when risk-sensitive terms are detrimental to the DV-Rényi estimators; b) the proposed infimal-convolution divergences are more suitable for the detection of rare sub-populations in a particular dataset; c) the proposed infimal-convolution divergences can perform better than Wasserstein GAN on the CIFAR-10 dataset; and d) the proposed divergences provide improvement on the ROTMNIST dataset.
Strengths And Weaknesses
Overall the paper is nicely written, describes the Rényi divergence and its background well, and provides a good description of the paper's main results. In particular, the paper provides an interesting development in the study of Rényi divergences by introducing a new family of regularized Rényi divergences.
Some minor comments on the strengths of the paper are:
The paper includes clear description of definitions and main results. The highlights of contributions on Page 2 were very helpful to understanding the paper.
The paper addresses Renyi divergences, an interesting and important topic, and provides interesting development to available results. The variational formulations of the Renyi divergence and the corresponding regularizations using an IPM (i.e., (6)) are very interesting.
The paper considers a variety of numerical experiments, demonstrating flexibility in applications.
Some minor comments on the weaknesses of the paper are:
In the first experiment (Section 6.1), it would be interesting to see if CC-Rényi also performs better than DV-Rényi when the distributions are close. In the experiment shown, the two Gaussians are chosen to be N(2, 0.5) and N(0, 1) respectively, and the relative variance and MSE numbers are quite extreme for DV-Rényi; it would be interesting to see whether CC-Rényi also improves upon DV-Rényi in scenarios where, say, the two Gaussians are N(0, 1) and N(0.1, 1).
The first section in Section 5 may benefit from some more explanation.
It may be helpful to adjust some of the citation formatting to make the paper more readable. For example, in the very first sentence of the Introduction, the two citations are a bit confusing to the overall readability of the sentence.
The details of the last experiment seem to be slightly lacking; it would be helpful to add some more description to the main text.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper was clearly written (bar a few of the comments listed in the above section). The quality and clarity are good. In terms of originality, the main results in this paper build on work within the literature and provide interesting new results.
ICLR
Title
Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments
Abstract
Exploration under sparse reward is a long-standing challenge of model-free reinforcement learning. The state-of-the-art methods address this challenge by introducing intrinsic rewards to encourage exploration in novel states or uncertain environment dynamics. Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once. Motivated by how humans distinguish good exploration behaviors by looking into the entire episode, we introduce RAPID, a simple yet effective episode-level exploration method for procedurally-generated environments. RAPID regards each episode as a whole and gives an episodic exploration score from both per-episode and long-term views. Those highly scored episodes are treated as good exploration behaviors and are stored in a small ranking buffer. The agent then imitates the episodes in the buffer to reproduce the past good exploration behaviors. We demonstrate our method on several procedurally-generated MiniGrid environments, a first-person-view 3D Maze navigation task from MiniWorld, and several sparse MuJoCo tasks. The results show that RAPID significantly outperforms the state-of-the-art intrinsic reward strategies in terms of sample efficiency and final performance. The code is available at https://github.com/daochenzha/rapid.
1 INTRODUCTION
Deep reinforcement learning (RL) is widely applied in various domains (Mnih et al., 2015; Silver et al., 2016; Mnih et al., 2016; Lillicrap et al., 2015; Andrychowicz et al., 2017; Zha et al., 2019a; Liu et al., 2020). However, RL algorithms often require a tremendous number of samples to achieve reasonable performance (Hessel et al., 2018). This sample efficiency issue becomes more pronounced in sparse reward environments, where the agent may take an extremely long time before bumping into a reward signal (Riedmiller et al., 2018). Thus, how to efficiently explore the environment under sparse reward remains an open challenge (Osband et al., 2019).
To address the above challenge, many exploration methods have been investigated and demonstrated to be effective on hard-exploration environments. One of the most popular techniques is to use intrinsic rewards to encourage exploration (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Guo et al., 2016; Zheng et al., 2018; Du et al., 2019). The key idea is to give an intrinsic bonus based on uncertainty, e.g., assigning higher rewards to novel states (Ostrovski et al., 2017), or using the prediction error of a dynamics model as the intrinsic reward (Pathak et al., 2017). While many intrinsic reward methods have demonstrated superior performance on hard-exploration environments, such as Montezuma’s Revenge (Burda et al., 2018b) and Pitfall! (Badia et al., 2019), most of the previous studies use the same singleton environment for training and testing, i.e., the agent aims to solve the same environment in each episode. However, recent studies show that the agent trained in this way is susceptible to overfitting and may fail to generalize to even a slightly different environment (Rajeswaran et al., 2017; Zhang et al., 2018a). To deal with this issue, a few procedurally-generated environments
are designed to test the generalization of RL, such as (Beattie et al., 2016; Chevalier-Boisvert et al., 2018; Nichol et al., 2018; Côté et al., 2018; Cobbe et al., 2019; Küttler et al., 2020), in which the agent aims to solve the same task, but a different environment is generated in each episode.
Unfortunately, encouraging exploration in procedurally-generated environments is a very challenging task. Many intrinsic reward methods, such as count-based exploration and curiosity-driven exploration, often fall short in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020). Figure 1a shows a motivating example of why count-based exploration is less effective in procedurally-generated environments. We consider a 1D grid-world, where the agent (red) needs to explore the environment to reach the goal within 7 timesteps, that is, the agent needs to move right in all the steps. While count-based exploration assigns reasonable intrinsic rewards in the singleton setting, it may generate misleading intrinsic rewards in the procedurally-generated setting because visiting a novel state does not necessarily mean a good exploration behavior. This issue becomes more pronounced in more challenging procedurally-generated environments, where the agent is not likely to visit the same state more than once. To tackle the above challenge, this work studies how to reward good exploration behaviors that can generalize to different environments.
When a human judges how well an agent explores the environment, she often views the agent from the episode-level instead of the state-level. For instance, one can easily tell whether an agent has explored a maze well by looking into the coverage rate of the current episode, even if the current environment is different from the previous ones. In Figure 1b, we can confidently tell that the episode on the right-hand side is a good exploration behavior because the agent achieves a 0.875 coverage rate. Similarly, we can safely conclude that the episode on the left-hand side does not explore the environment well due to its low coverage rate. Motivated by this, we hypothesize that an episode-level exploration score could be a more general criterion to distinguish good exploration behaviors than state-level intrinsic rewards in the procedurally-generated setting.
To verify our hypothesis, we propose exploration via Ranking the Episodes (RAPID). Specifically, we identify good exploration behaviors by assigning episodic exploration scores to past episodes. To efficiently learn the past good exploration behaviors, we use a ranking buffer to store the highly-scored episodes. The agent then learns to reproduce the past good exploration behaviors with imitation learning. We demonstrate that RAPID significantly outperforms the state-of-the-art intrinsic reward methods on several procedurally-generated benchmarks. Moreover, we present extensive ablations and analysis for RAPID, showing that RAPID can explore the environment well even without extrinsic rewards and can be generally applied in tasks with continuous state/action spaces.
2 EXPLORATION VIA RANKING THE EPISODES
An overview of RAPID is shown in Figure 2. The key idea is to assign episodic exploration scores to past episodes and store those highly-scored episodes into a small buffer. The agent then treats the episodes in the buffer as demonstrations and learns to reproduce these episodes with imitation learning. RAPID encourages the agent to explore the environment by reproducing the past good
exploration behaviors. This section introduces how we define the episodic exploration score for procedurally-generated environments and how the proposed method can be combined with state-of-the-art reinforcement learning agents.
2.1 EPISODIC EXPLORATION SCORE
Each episode is scored from both local and global perspectives. The local score is a per-episode view of the exploration behavior, which measures how well the current episode itself explores the environment. The global score provides a long-term and historical view of exploration, i.e., whether the current episode has explored the regions that are not well explored in the past. Additionally, we consider extrinsic reward, which is the episodic reward received from the environment. It is an external criterion of exploration since a high extrinsic reward often suggests good exploration in sparse environments. The overall episodic exploration score is obtained by the weighted sum of the above three scores. We expect that these scores can model different aspects of exploration behaviors and they are complementary in procedurally-generated environments.
Local Score. The intuition of the local exploration bonus is that the agent is expected to visit as many distinct states as possible in one episode. In our preliminary experiments on MiniGrid, we observe that many poor exploration episodes repeatedly visit the same state. As a result, the agent may get stuck in a well-known region but never reach out to unexplored regions. From a local view, we can quantify this exploration behavior based on the diversity of the visited states. Specifically, we define the local exploration score as
S_{local} = \frac{N_{distinct}}{N_{total}}, \quad (1)
where N_{total} is the total number of states in the episode, and N_{distinct} is the number of distinct states in the episode. Intuitively, optimizing the local score will encourage the agent to reach out to the unexplored regions in the current episode. In an ideal case, the episodes that never visit the same state twice will receive a local bonus of 1. There could be various ways to extend the above definition to continuous state space. In this work, we empirically define the local score in continuous state space as the mean standard deviation of all the states in the episode:
S_{local} = \frac{\sum_{i=1}^{l} \mathrm{std}(s_i)}{l}, \quad (2)
where l is the dimension of the state, and std(s_i) is the standard deviation along the i-th dimension of the states in the episode. Intuitively, this score encourages episodes that visit diverse states.
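As a concrete illustration, the two local scores could be computed as in the minimal sketch below, assuming discrete states are hashable (e.g., flattened grid observations) and continuous states are NumPy arrays; the function names are ours, not the authors' code.

```python
import numpy as np

def local_score_discrete(states):
    # Eq. (1): fraction of distinct states visited in the episode.
    return len(set(states)) / len(states)

def local_score_continuous(states):
    # Eq. (2): mean over the l state dimensions of the standard
    # deviation of the states visited in the episode.
    states = np.asarray(states)      # shape [T, l]
    return states.std(axis=0).mean()
```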
Global Score. The global score is designed to model long-term exploration behavior. While a different environment will be generated in each episode in the procedurally-generated setting, the environments in different episodes usually share some common characteristics. For example, in the
Algorithm 1 Exploration via Ranking the Episodes (RAPID)
 1: Input: training steps S, buffer size D, RL rollout steps T
 2: Initialize the policy πθ and the replay buffer 𝒟
 3: for iteration = 1, 2, ... until convergence do
 4:   Execute πθ for T timesteps
 5:   Update πθ with the RL objective (also update value functions, if any)
 6:   for each generated episode τ do
 7:     Compute the episodic exploration score Sτ based on Eq. (4)
 8:     Give score Sτ to all the state-action pairs in τ and store them in 𝒟
 9:     Rank the state-action pairs in 𝒟 based on their exploration scores
10:     if 𝒟.length > D then
11:       Discard the lowest-scored state-action pairs so that 𝒟.length = D
12:     end if
13:     for step = 1, 2, ..., S do
14:       Sample a batch from 𝒟 and train πθ on it with behavior cloning
15:     end for
16:   end for
17: end for
MiniGrid MultiRoom environment (Figure 3a), although a different maze will be generated in a new episode, the basic building blocks (e.g., walls, doors, etc.), the number of rooms, and the size of each room will be similar. From a global view, the agent should explore the regions that are not well explored in the past. Motivated by count-based exploration, we define the global score as
S_{global} = \frac{1}{N_{total}} \sum_{s} \frac{1}{\sqrt{N(s)}}, \quad (3)
where N(s) is the state count of s throughout the training process, that is, the global score is the mean count-based intrinsic reward of all the states in the episode.
Episodic Exploration Score. Suppose Sext is the total extrinsic reward of an episode received from the environment. The episodic exploration score is obtained by the weighted sum of the extrinsic reward Sext, the local score Slocal, and the global score Sglobal:

S = w_0 S_{ext} + w_1 S_{local} + w_2 S_{global}, \quad (4)

where w0, w1 and w2 are hyperparameters. We speculate that the three scores are complementary and that the optimal w0, w1 and w2 could vary in different environments. For example, a higher w1 gives more weight to the per-episode view of exploration behavior, while a higher w2 focuses more on whether the current episode has explored previously unexplored regions. Nevertheless, we find that a single set of w0, w1 and w2 works well across many tasks.
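Continuing the sketch, the global score and the combined score of Eq. (4) might look as follows; the running count table is a plain dictionary here, and the default weights (w0 = 1, w1 = 0.1, w2 = 0.001) are those discussed for the paper's experiments, so treat them as indicative rather than definitive.

```python
from collections import defaultdict

state_counts = defaultdict(int)  # N(s), maintained across the whole training run

def global_score(states):
    # Eq. (3): mean count-based bonus over the episode's states.
    # (Whether counts are updated before or after scoring is an
    # implementation detail not fixed by the paper.)
    for s in states:
        state_counts[s] += 1
    return sum(state_counts[s] ** -0.5 for s in states) / len(states)

def episodic_score(s_ext, s_local, s_global, w0=1.0, w1=0.1, w2=0.001):
    # Eq. (4): weighted sum of the three components.
    return w0 * s_ext + w1 * s_local + w2 * s_global
```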
Comparison with Prior Work. Several papers have studied episode-level feedback to improve the sample efficiency of RL. To the best of our knowledge, the existing work mainly focuses on learning a meta-policy based on episodic rewards (Xu et al., 2018; Zha et al., 2019b), or imitating the episodes with high episodic rewards (Guo et al., 2018; Srivastava et al., 2019). However, in very sparse environments, the agent is unlikely to receive a non-zero reward without exploration, so episodic rewards could always be zero and these methods are not suitable. In this work, we propose local and global episodic scores to quantify exploration behaviors and encourage the agent to visit distinct states from per-episode and long-term views.
2.2 REPRODUCING PAST GOOD EXPLORATION BEHAVIORS WITH IMITATION LEARNING
A straightforward idea is to treat the episodic exploration score as a reward signal and assign it to the last timestep of the episode. In this way, the agent can be updated with any reinforcement learning objective. However, this strategy is not very effective and has three potential limitations. First, the reward remains sparse, so the reinforcement learning agents may not be able to effectively exploit the reward signals. Second, the intrinsic exploration bonus may introduce bias into the objective. While the local score and global score may encourage exploration at the beginning stage, they are
not true objectives and may degrade the final performance. Third, it only uses a good episode once. In fact, many past good episodes can be exploited many times to improve sample efficiency.
To overcome the above limitations, we propose to use a small ranking buffer to store past good state-action pairs and then use an imitation objective to enhance exploration. Specifically, we assign the same episodic exploration score to all the state-action pairs in an episode and rank all the state-action pairs based on their scores. In addition to the original RL objective, we employ behavior cloning, the simplest form of imitation learning, to encourage the agent to reproduce the state-action pairs in the buffer. This simple ranking buffer can effectively exploit the past good experiences since a good state-action pair can be sampled more than once. The procedure is summarized in Algorithm 1. A possible variant is to keep entire episodes in the buffer. We have tried this idea and do not observe a clear difference between this variant and Algorithm 1 (see Appendix H for more discussions).
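A compact sketch of the ranking buffer behind Algorithm 1 is given below. The buffer stores flat state-action pairs tagged with their episode's score; the behavior-cloning update itself (maximizing log π(a|s) on sampled batches) is left to the surrounding training loop, and all names here are illustrative rather than the authors' code.

```python
import random

class RankingBuffer:
    def __init__(self, capacity=10000):
        self.capacity = capacity
        self.data = []  # list of (score, state, action) tuples

    def add_episode(self, score, states, actions):
        # Algorithm 1, line 8: every state-action pair in the episode
        # inherits the episode's exploration score.
        self.data.extend((score, s, a) for s, a in zip(states, actions))
        # Lines 9-11: rank by score and drop the lowest-scored pairs.
        self.data.sort(key=lambda x: x[0], reverse=True)
        del self.data[self.capacity:]

    def sample(self, batch_size):
        # Assumes the buffer is non-empty; used for behavior cloning.
        batch = random.sample(self.data, min(batch_size, len(self.data)))
        states = [s for _, s, _ in batch]
        actions = [a for _, _, a in batch]
        return states, actions
```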
3 EXPERIMENTS
The experiments are designed to answer the following research questions: RQ1: how does RAPID compare with the state-of-the-art intrinsic reward methods on benchmark procedurally-generated environments (Section 3.2)? RQ2: how will RAPID perform if we remove one of the scores (i.e., the local score, the global score, and the extrinsic reward) or the buffer (Section 3.3)? RQ3: how will the hyperparameters impact the performance of RAPID (Section 3.3)? RQ4: can RAPID explore the environment without extrinsic reward (Section 3.3)? RQ5: how will RAPID perform on larger and more challenging grid-worlds (Section 3.3)? RQ6: can RAPID generalize to a 3D navigation task and continuous state/action spaces, such as MuJoCo (Section 3.4)?
3.1 ENVIRONMENTS AND BASELINES
Figure 3 shows the rendering of the procedurally-generated environments used in this work. Following previous studies of exploration in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020), we mainly focus on MiniGrid environments (Chevalier-Boisvert et al., 2018), which provide procedurally-generated grid-worlds. We consider two types of hard-exploration tasks: MultiRoom-NX-SY, where X and Y indicate the number of rooms and the room size, respectively, and KeyCorridor-SX-RY, where X and Y indicate the room size and the number of rows, respectively. A larger X or Y means a larger environment, which is more difficult to explore. The agent has a partial view of the environment and needs to navigate to the green block or the red ball under sparse rewards. These tasks are challenging because the agent needs to explore many rooms in order to receive a reward signal, so standard RL algorithms will typically fail. To test whether RAPID can generalize to continuous state space, we consider MiniWorld Maze (Chevalier-Boisvert, 2018), whose observations are 60 × 80 images. Similarly, the agent in MiniWorld Maze has a 3D partial view of the environment and needs to navigate to the red box. The maze is procedurally-generated, i.e., a different maze is generated in each episode. We further study RAPID in continuous state and action spaces on sparse MuJoCo tasks (Todorov et al., 2012; Brockman et al., 2016). More details of the above environments are described in Appendix B.
While exploration has been extensively studied in the literature, very few methods are designed for procedurally-generated environments. To better understand the performance of RAPID, we consider the baselines in the following categories. Traditional Methods: we consider several popular methods designed for singleton environments, including count-based exploration (COUNT) (Bellemare et al., 2016), Random Network Distillation Exploration (RANDOM) (Burda et al., 2018b),
and curiosity-driven exploration (CURIOSITY) (Pathak et al., 2017). Methods for Procedurally-Generated Environments: we consider two state-of-the-art exploration methods, including Impact-Driven Exploration (RIDE) (Raileanu & Rocktäschel, 2020), and exploration with Adversarially Motivated Intrinsic Goals (AMIGO) (Campero et al., 2020). Self-Imitation Methods: self-imitation learning (SIL) (Oh et al., 2018) also exploits past good experiences to encourage exploration. We further include PPO (Schulman et al., 2017) as a reinforcement learning baseline. All the algorithms are run for the same number of timesteps for a fair comparison. More details are provided in Appendix A.
3.2 MINIGRID RESULTS
To study RQ1, we plot the learning curves of RAPID and the baselines on MiniGrid benchmarks in Figure 4. We can make three observations. First, RAPID performs significantly better than the state-of-the-art methods designed for singleton or procedurally-generated environments in terms of sample efficiency and final performance. For example, on MultiRoom-N7-S8, RAPID is at least 10 times more sample efficient than RIDE. On KeyCorridor-S4-R3, RAPID is the only algorithm that successfully reaches the goal within 20 million timesteps. Second, the traditional intrinsic reward methods perform well on easy procedurally-generated environments but perform poorly on hard-exploration tasks. For example, COUNT performs as well as RAPID on KeyCorridor-S3-R2 but fails on more challenging tasks, such as MultiRoom-N7-S8. This suggests that traditional methods may fall short in challenging procedurally-generated environments, which is consistent with the results in previous work (Raileanu & Rocktäschel, 2020). Third, RAPID outperforms SIL across all the tasks. Recall that both RAPID and SIL learn to reproduce past good experiences. While SIL reproduces experiences with high rewards, RAPID additionally considers local and global exploration scores. This verifies the effectiveness of the episodic exploration scores.
3.3 ABLATIONS AND ANALYSIS
To study RQ2, we perform ablation studies on RAPID. Specifically, we remove one of the local score, the global score, and the extrinsic reward, and compare each ablation with RAPID. Besides, we consider a variant that directly uses the episodic exploration score as the reward signal without the buffer. To demonstrate the effectiveness of the ranking, we further include a baseline that uses the replay buffer without ranking. Table 1 shows the results. We observe that the local score contributes the most to the performance. While most existing exploration methods reward exploration globally, the global score may not account for generalization, which may explain the unsatisfactory performance of COUNT and CURIOSITY. While the global score seems to play a relatively small role compared to the local score in most MiniGrid environments, it plays an important role in KeyCorridor-S4-R3. A possible reason is that the local score is too weak a signal in KeyCorridor-S4-R3, and the global score may help alleviate this issue since it provides some historical information. We also observe that the agent fails in almost all the tasks without the buffer or the ranking. This suggests that the ranking buffer is an effective way to exploit past good exploration behaviors.
To study RQ3, we plot the learning curves with different training steps S and buffer sizes D in Figures 5a and 5b. We observe that a larger number of training steps usually leads to faster learning at the beginning but may result in sub-optimal final performance. For the buffer size, we observe that a large buffer leads to slower learning but more stable performance. We speculate that there is a trade-off between learning speed and final performance. In our experiments, we set the training steps S to 5 and the buffer size D to 10000 across all the tasks.
To investigate RQ4, we remove the extrinsic rewards and show the learning curves of RAPID and the baselines in Figure 5c, with the corresponding local exploration scores in Figure 5d. We make two interesting observations. First, without extrinsic rewards, RAPID can deliver reasonable performance. For example, on MultiRoom-N12-S10, RAPID achieves nearly 0.5 average return with pure exploration, compared with around 0.6 when using extrinsic rewards. We look into the episodes generated by RAPID with pure exploration and observe that, in most cases, the agent is able to reach the goal but struggles to find the optimal path to it. This is expected because the extrinsic reward is determined by whether the agent reaches the goal, minus a penalty for the timesteps spent; the extrinsic reward thus encourages the agent to choose the shortest path to the goal. For the other baselines, we observe that only RIDE and AMIGO can occasionally reach the goal. Second, we observe that some baselines, such as RIDE, are indirectly maximizing the local exploration scores. For example, the local exploration score increases for RIDE on MultiRoom-N12-S10. However, the local score of RAPID drops after about 10 million timesteps. This suggests that the episode-level score may better align with good exploration behaviors. A possible explanation for the superiority of RAPID is that it is designed to directly optimize the episodic exploration score.
To answer RQ5, we focus on MultiRoom-NX-SY and ambitiously vary X and Y to make the environments more challenging. Specifically, we fix Y to 4 and vary X in the left-hand side of Figure 6, and fix X to 4 and vary Y in the middle of Figure 6. Compared with RIDE, RAPID delivers stronger performance on the more challenging grid-worlds.
3.4 MINIWORLD MAZE AND SPARSE MUJOCO RESULTS
For RQ6, the learning curves on the 3D Maze are shown in the right-hand side of Figure 6. RAPID performs significantly better than the baselines. We also conduct experiments on a subset of sparse MuJoCo tasks (Figure 7). The results show that RAPID improves over PPO and SIL. Interestingly, RAPID achieves more than 200 average return on Swimmer, which surpasses some previous results reported on the dense-reward counterpart (Duan et al., 2016; Henderson et al., 2018; Lai et al., 2020). Note that the MuJoCo environments are singleton. We use these environments solely to test whether RAPID can deal with continuous state/action spaces. We have also tried adapting Swimmer to be procedurally-generated and observe similar results (Appendix I).
4 RELATED WORK
Count-Based and Curiosity-Driven Exploration. These two classes of intrinsic motivations are the most popular and have proven effective in the literature. Count-based exploration encourages the agent to visit novel states; it was first studied in the tabular setting (Strehl & Littman, 2008) and has recently been extended to more complex state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Martin et al., 2017; Machado et al., 2018; Osband et al., 2019). Curiosity-driven exploration learns the environment dynamics to encourage exploration (Stadie et al., 2015; Pathak et al., 2017; Sukhbaatar et al., 2018; Burda et al., 2018a). However, these methods are designed for singleton environments and often struggle to generalize to procedurally-generated environments (Raileanu & Rocktäschel, 2020).
Exploration for Procedurally-Generated Environments. Several recent studies have discussed the generalization of reinforcement learning (Rajeswaran et al., 2017; Zhang et al., 2018a;b; Choi et al., 2018) and designed procedurally-generated environments to test the generalization of reinforcement learning (Beattie et al., 2016; Nichol et al., 2018; Küttler et al., 2020). More recent papers show that traditional exploration methods fall short in procedurally-generated environments and address this issue with new exploration methods (Raileanu & Rocktäschel, 2020; Campero et al., 2020). This work studies a new perspective of exploration bonus in episode-level and achieves significantly better performance than the previous methods on procedurally-generated benchmarks.
Experience Replay Buffer and Self-Imitation Learning. Experience replay mechanism is widely used and is initially designed to stabilize deep reinforcement learning (Lin, 1992; 1993; Mnih et al., 2015; Zhang & Sutton, 2017; Zha et al., 2019b; Fedus et al., 2020). Self-Imitation Learning extends the experience replay buffer and uses it to store past good experiences (Oh et al., 2018; Guo et al., 2018; 2019; Gangwani et al., 2018). However, Self-Imitation only exploits highly rewarded experiences and thus is not suitable for very sparse environments. Our work incorporates local and global scores to encourage past good exploration behaviors in sparse procedurally-generated environments.
Episodic Memory. Inspired by the episodic memory of animals, some studies propose to use episodic memory to enhance sample efficiency (Blundell et al., 2016; Pritzel et al., 2017). The key idea is to memorize and replay the good episodic experiences. More recently, episodic memory has been used to generate intrinsic rewards to guide exploration (Savinov et al., 2018). Never Give Up (NGU) (Badia et al., 2019) further considers local and global novelties in the intrinsic rewards. Our work differs from NGU in that we approach sample efficiency from a different angle by treating each episode as a whole and using a ranking mechanism to select good experiences. Thus, good experiences can be exploited multiple times to improve sample efficiency.
5 LIMITATIONS AND DISCUSSIONS
RAPID is designed to study our hypothesis that imitating episode-level good exploration behaviors can encourage exploration in procedurally-generated environments. In the presented algorithm, we have made some simple choices. In order to make RAPID more competitive with the state-of-the-art methods in more complicated environments, in this section we highlight some limitations of RAPID and potential research directions.
First, in our work, the local and global scores are mainly calculated on low-dimensional state space. We have not yet tested the scores on more complex procedurally-generated environments, such as ProcGen benchmark (Cobbe et al., 2019), TextWorld (Côté et al., 2018), and NetHack Learning Environment (Küttler et al., 2020), some of which require high-dimensional image inputs. In this context, count-based scores may not be well-suited. Learning the state embedding (Raileanu & Rocktäschel, 2020) and pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017) could be the potential ways to address this issue. Moreover, high extrinsic rewards may not necessarily mean good exploration behaviors in these environments. The applicability of RAPID in these environments needs future research efforts.
Second, the ranking buffer greedily selects the highly-reward state-action pairs, which is the key to boosting the performance. However, with this greedy strategy, some highly-ranked state-action pairs may never be forgotten even if they are out-of-date, which may introduce bias. As shown in Figure 5b, a larger buffer size will lead to more stable performance but will sacrifice the sample efficiency. In more complicated environments, selecting the proper buffer size could be non-trivial. Some forgetting mechanism (Novati & Koumoutsakos, 2019) and meta-learning strategies (Zha et al., 2019b) could be the potential directions to better manage the buffer.
Third, in our implementation, we employ behavior cloning, the simplest form of imitation learning, to train the policy. In recent years, many more advanced imitation learning methods have been proposed (Ho & Ermon, 2016; Hester et al., 2017). In particular, the idea of inverse reinforcement learning (Abbeel & Ng, 2004) is to learn reward functions based on the demonstrations. An interesting future direction is to study whether we can derive intrinsic rewards based on the experiences in the buffer via inverse reinforcement learning. Specifically, is there a connection between RAPID and intrinsic rewards from the view of inverse reinforcement learning? Can RAPID be combined with intrinsic rewards to improve the performance? We hope that RAPID can facilitate future research to understand these questions.
6 CONCLUSIONS AND FUTURE WORK
This work studies how we can reward good exploration behaviors in procedurally-generated environments. We hypothesize that an episode-level exploration score could be a more general criterion to distinguish good exploration behaviors. To verify our hypothesis, we design RAPID, a practical algorithm that learns to reproduce past good exploration behaviors with a ranking buffer. Experiments on benchmark hard-exploration procedurally-generated environments show that RAPID significantly outperforms state-of-the-art intrinsic reward methods in terms of sample efficiency and final performance. Further analysis shows that RAPID has robust performance across different configurations, even without extrinsic rewards. Moreover, experiments on a 3D navigation task and a subset of sparse MuJoCo tasks show that RAPID can generalize to continuous state/action spaces, showing the promise of rewarding good episode-level exploration behaviors to encourage exploration. In the future, we will extend our idea and test it in more challenging procedurally-generated environments. We will also explore the possibility of combining the episode-level exploration bonus with intrinsic rewards. Last, we will try other imitation learning techniques beyond behavior cloning.
A HYPERPARAMETERS AND NEURAL NETWORK ARCHITECTURE
For the proposed RAPID, the common hyperparameters are summarized in Table 2. RAPID is based on the PPO implementation from OpenAI baselines1. The nsteps is 128 for all MiniGrid and MuJoCo environments, and 512 for MiniWorld-Maze-S5. For the MuJoCo environments, we report the results with only the imitation loss since we find that the policy loss of PPO harms the performance in the episodic reward setting. The learning rate is 10^-4 for all MiniGrid environments and MiniWorld-Maze-S5, and 5 × 10^-4 for the MuJoCo environments. Since MiniWorld-Maze-S5 and the MuJoCo environments have continuous state spaces, we set w2 = 0. In practice, in MiniWorld-Maze-S5, we observe that counting the states usually leads to memory errors because the agent is not likely to encounter the same state again (COUNT is excluded from the comparison due to these memory issues). Note that it is possible to use pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017), which we will study in future work. For MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0 and MiniWorld-Maze-S5, the update frequency of imitation learning is linearly annealed to 0 throughout the training process. For the other environments, the imitation learning is performed after each episode. We keep all other hyperparameters of PPO at their defaults and use the default CNN architecture provided in OpenAI baselines for MiniWorld-Maze-S5 and an MLP with 64-64 hidden units for the other environments. The PPO baseline also uses the same hyperparameters. We summarize the state space of each environment and how the local and global scores are computed in Table 3.
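For convenience, the per-environment settings described above can be summarized as a configuration sketch (values transcribed from this appendix; the dictionary keys are ours):

```python
RAPID_HPARAMS = {
    "nsteps":        {"minigrid": 128, "mujoco": 128, "miniworld_maze_s5": 512},
    "learning_rate": {"minigrid": 1e-4, "miniworld_maze_s5": 1e-4, "mujoco": 5e-4},
    "train_steps_S": 5,          # imitation update steps (Section 3.3)
    "buffer_size_D": 10000,
    "w2_continuous_state": 0.0,  # global score disabled for continuous state spaces
}
```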
For RANDOM, we implement the two networks with 64-64 MLPs. For CURIOSITY and RIDE, we use 64-64 MLPs for the state embedding model, the forward dynamics model, and the inverse dynamics model, respectively. However, with an extensive hyperparameter search, we were not able to reproduce the RIDE results in MiniGrid environments even with exactly the same neural architecture and hyperparameters as suggested in (Raileanu & Rocktäschel, 2020). Thus, we use the authors’ implementation2 (Raileanu & Rocktäschel, 2020), which is well-tuned on MiniGrid tasks. Our reported result
1 https://github.com/openai/baselines
2 https://github.com/facebookresearch/impact-driven-exploration
is on par with that reported in (Raileanu & Rocktäschel, 2020). For SIL and AMIGO, we use the authors’ implementations3,4 with the recommended hyperparameters.
For the methods based on intrinsic rewards, i.e., RIDE, AMIGO, CURIOSITY, RANDOM, and COUNT, we search the intrinsic reward coefficient over {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. For RIDE, the intrinsic reward coefficient is 0.1 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.5 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0. For AMIGO and CURIOSITY, the intrinsic reward coefficient is 0.1 for all the environments. For COUNT, the intrinsic reward coefficient is 0.005 for all the environments. For RANDOM, the intrinsic reward coefficient is 0.001 for all the environments. We also tune the entropy coefficient of RIDE over {0.001, 0.0005}, as suggested by the original paper. The entropy coefficient is 0.0005 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.001 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0.
The weights w0, w1, and w2 are selected as follows. We first run RAPID on MiniGrid-MultiRoom-N7S8-v0. We choose this environment because it is neither too hard nor too easy, so we expect it to be representative of all the considered MiniGrid environments. We fix w0 to 1 and select w1 and w2 from {10^-4, 10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4}, that is, there are 9 × 9 = 81 combinations. Note that we can fix w0 because only the relative ranking matters in the ranking buffer. The selected w1 and w2 are then directly used in the other environments without tuning. We further conduct a sensitivity analysis on MiniGrid-MultiRoom-N7S8-v0 in Figure 8. We observe that RAPID performs well over a very wide range of hyperparameter choices.
B ENVIRONMENTS
B.1 MINIGRID ENVIRONMENTS
MiniGrid5 is a suite of light-weight and fast grid-world environments with OpenAI gym interfaces. The environments are partially observable with a grid size of N × N. Each tile in the grid can contain at most one object. The possible objects are wall, floor, lava, door, key, ball, box and goal. Each object has an associated color. The agent can pick up at most one object (ball or key). To open a locked door, the agent must use a key. Most of the environments in
3 https://github.com/junhyukoh/self-imitation-learning
4 https://github.com/facebookresearch/adversarially-motivated-intrinsic-goals
5 https://github.com/maximecb/gym-minigrid
MiniGrid are procedurally-generated, i.e., a different grid will be sampled in each episode. Figure 9 shows the grids in four different episodes. The procedurally-generated nature makes RL training challenging because the RL agents need to learn skills that generalize to different grids. The environments can be easily modified so that we can create different levels of difficulty. For example, we can modify the number of rooms and the size of each room to create hard-exploration or easy-exploration environments. The flexibility of the MiniGrid environments enables us to conduct a systematic comparison of algorithms under different difficulties.
The original observations of MiniGrid are dictionaries, which consist of an image field providing a partially observable view of the environment, and a mission field describing the goal with text. In our experiments, we use ImgObsWrapper, which only keeps the image field. The environment uses a compact encoding with 3 input values per visible grid cell. Note that the cells behind walls or unopened doors are invisible.
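For instance, the image-only interface can be obtained with the wrapper the library provides; the minimal sketch below uses one of the built-in MultiRoom environments (the harder N7-S8 / N12-S10 variants used in the paper are registered separately in the authors' code):

```python
import gym
import gym_minigrid  # noqa: F401  (importing registers the MiniGrid-* environment IDs)
from gym_minigrid.wrappers import ImgObsWrapper

env = ImgObsWrapper(gym.make("MiniGrid-MultiRoom-N6-v0"))
obs = env.reset()   # the dictionary observation is reduced to its image field
print(obs.shape)    # a partial egocentric view, 3 values per visible cell
```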
The possible actions in MiniGrid are (1) turn left, (2) turn right, (3) move forward, (4) pick up an object, (5) drop the carried object, (6) open doors/interact with objects, and (7) done. The agent will remain in the same state if the action is not legal. For example, if there is no object in front of the agent, the agent will remain in the same state when performing action (4) pick up an object.
We focus on MultiRoom-NX-SY and KeyCorridor-SX-RY, where X and Y are hyperparameters specifying the number of rooms or the room sizes in the environments. With larger X and Y, the environments become more difficult to solve due to the sparse rewards. Figure 10 shows the environments (with different difficulties) used in this work. In MultiRoom-NX-SY, the agent needs to navigate to the green tile in the last room. If the agent successfully reaches the goal, it receives a reward of 1 minus a penalty for the timesteps spent. In KeyCorridor-SX-RY, the agent needs to pick up the key, use the key to open the locked door, and pick up the ball. Similarly, the agent receives a reward of 1 minus a penalty for the timesteps spent if it picks up the ball. Both environments are extremely difficult to solve with RL alone due to the sparse rewards.
B.2 MINIWORLD MAZE ENVIRONMENT
MiniWorld6 is a minimalistic 3D interior environment simulator. Similar to MiniGrid, we can easily create environments of different levels. We focus on MiniWorld-Maze, where the agent is asked to navigate to the goal through a procedurally-generated maze. The agent has a first-person, partially observable view of the environment. This environment is extremely difficult to solve due to the sparse reward, the long horizon, and the procedurally-generated nature of the environments.
6https://github.com/maximecb/gym-miniworld
In this work, we focus on MiniWorld-Maze-S5, a variant of the MiniWorld Maze environment with 5×5 tiles. Figure 11 shows the top view of the mazes in four different episodes. In each episode, the agent and the goal are initialized in the top-left and bottom-right corners of the map, respectively. In this way, we ensure that the agent and the goal are far enough apart that the agent cannot easily see the goal without exploration. A random maze is generated in each episode. There are three possible actions: (1) move forward, (2) turn left, and (3) turn right. The agent moves 0.4× tile length when moving forward, where tile length is the length of one side of a tile. The agent rotates 22.5 degrees when turning left/right. The time budget of each episode is 600 steps. The agent will not receive any positive reward if it cannot reach the goal within the time budget.
B.3 SPARSE MUJOCO ENVIRONMENTS
MuJoCo is a physics engine for continuous control (Todorov et al., 2012). Figure 12 shows the rendering of the four MuJoCo tasks used in our work. The original MuJoCo environments use dense rewards, i.e., a well-defined reward is given at each timestep. However, in many scenarios, well-defined dense rewards may not be available. We consider a variant of the MuJoCo tasks that uses only an episodic reward. Specifically, the original rewards are accumulated and only an episodic reward is given at the final timestep; the agent receives a 0 reward at all other timesteps. This sparse setting makes the tasks much more challenging to solve, and contemporary RL algorithms usually suffer from the sparse rewards and deliver unsatisfactory performance. We use these sparse variants to study whether RAPID can effectively capture the sparse rewards.
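One plausible way to obtain these sparse variants is a gym wrapper that withholds all rewards until the final timestep; this is our sketch of the idea described above, not the authors' exact code.

```python
import gym

class EpisodicRewardWrapper(gym.Wrapper):
    """Accumulate dense rewards and release the sum only when done."""

    def reset(self, **kwargs):
        self.total = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.total += reward
        # Zero reward at intermediate steps; the episodic sum at the end.
        return obs, (self.total if done else 0.0), done, info

env = EpisodicRewardWrapper(gym.make("Swimmer-v2"))
```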
C ABLATIONS
Figure 13 shows the learning curves of RAPID against the different ablations. Without the local score, we observe a significant drop in sample efficiency and final performance on the challenging environments, i.e., MultiRoom-N7-S8, MultiRoom-N10-S10, MultiRoom-N12-S10, KeyCorridor-S3-R3, and KeyCorridor-S4-R3. This suggests that the proposed local score plays an important role in encouraging exploration. We also observe that, without the buffer, the agent fails to learn a good policy; naively using the proposed scores as intrinsic rewards harms the performance. Using the extrinsic reward and the global score to rank the episodes also contributes in the challenging environments, such as KeyCorridor-S4-R3. Therefore, the three proposed scoring methods may be complementary and play different roles in different environments.
D FULL ANALYSIS RESULTS
D.1 IMPACT OF THE NUMBER OF UPDATE STEPS
D.2 IMPACT OF BUFFER SIZE
D.3 PURE EXPLORATION ON MINIGRID
D.4 LOCAL EXPLORATION SCORE OF PURE EXPLORATION ON MINIGRID
E LEARNING CURVES WITH MORE ROOMS
F LEARNING CURVES WITH LARGE ROOMS
G ABLATION AND ANALYSIS OF MINIWORLD-MAZE
H COMPARISON WITH STORING ENTIRE EPISODES IN THE BUFFER
A variant of RAPID is to keep entire episodes in the buffer instead of individual state-action pairs. The intuition for keeping the whole episode is that a particular state-action pair may only be good, in terms of exploration, in the context of the rest of the agent’s trajectory in that episode. To test this variant, we force the buffer to keep entire episodes by allowing the episode at the end of the buffer to exceed the buffer size. For example, given a buffer size of 5000, if the length of the last episode is 160 and the number of state-action pairs in the buffer is 5120, we do not cut off the last episode and exceptionally allow the buffer to store 5120 state-action pairs. In this way, we ensure that entire episodes are kept in the buffer. We run this experiment on MiniGrid-MultiRoom-N7-S8-v0 (Figure 25). We do not observe a clear difference in the learning curves.
I RESULTS ON PROCEDURALLY-GENERATED SWIMMER
The standard MuJoCo environments are singleton. Here, we modify the Swimmer-v2 environment to make it procedurally-generated. Specifically, we make the environment different in each episode by modifying the XML configuration file of the MuJoCo engine at every new episode. We consider two environmental properties, i.e., density and viscosity. Density is used to simulate lift and drag forces, which scale quadratically with velocity. Viscosity is used to simulate viscous forces, which scale linearly with velocity. In the standard Swimmer-v2 environment, the density and the viscosity are fixed to 4000 and 0.1, respectively. In the first experiment, we uniformly sample the density in [2000, 4000] in each new episode to make the environment procedurally-generated, denoted as Density-Swimmer. Similarly, in the second environment, we uniformly sample the viscosity in [0.1, 0.5], denoted as Viscosity-Swimmer. These two variants make RL training more challenging since the agent needs to learn a policy that generalizes to different densities and viscosities. As before, we accumulate the rewards to the final timestep of an episode to make the environments sparse. Both environments are provided in our code for reproducibility. The results are reported in Figure 26. We observe that all methods learn more slowly on these two variants. For example, RAPID only achieves around 190 return on Density-Swimmer, while it achieves more than 200 return on Swimmer. Similarly, SIL reaches around 100 return after around 5 × 10^6 timesteps on Density-Swimmer, while it achieves around 100 return after around 3 × 10^6 timesteps on Swimmer. Nevertheless, RAPID can discover very good policies even in these challenging procedurally-generated settings.
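A sketch of this per-episode randomization: before each reset, rewrite the density (or viscosity) attribute of the <option> element in the Swimmer model file and rebuild the environment from it. The file paths and the helper name are illustrative, not the authors' code.

```python
import random
import xml.etree.ElementTree as ET

def randomize_swimmer_xml(src="swimmer.xml", dst="swimmer_rand.xml",
                          density_range=(2000.0, 4000.0)):
    # MuJoCo stores density/viscosity on the <option> element, e.g.
    # <option density="4000" viscosity="0.1" ...>.
    tree = ET.parse(src)
    option = tree.getroot().find("option")
    option.set("density", str(random.uniform(*density_range)))
    tree.write(dst)
    return dst  # pass this path when constructing the MuJoCo env
```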
J DISCUSSIONS OF ANNEALING
In our experiments, we find that annealing the update steps to 0 sometimes helps improve sample efficiency. However, in some environments, annealing has a negative effect on the final performance. To show the effect of annealing, we plot learning curves with and without annealing on MiniGrid-KeyCorridorS3R2-v0 and MiniGrid-MultiRoom-N12-S10-v0 in Figure 27. We observe that annealing helps in KeyCorridor-S3-R2. The possible explanation is that, after the agent reaches the goal, it has access to the reward signal, and the PPO objective is sufficient to train a good policy with the extrinsic rewards. The imitation loss may then harm the performance since it could be better to exploit rather than explore at this stage. However, in MultiRoom-N12-S10, annealing harms the performance. In particular, the performance drops significantly in the final stage, when the update frequency of the imitation loss approaches zero. The possible reason is that MultiRoom-N12-S10 needs very deep exploration to find the goal. Even though the PPO agent can reach the goal, it may easily suffer from insufficient exploration when it encounters a new environment. As a result, the agent needs to keep exploring the environment in order to generalize, and introducing the imitation loss ensures that the agent keeps exploring.
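A minimal sketch of the linear annealing schedule described above; the function name and the base value of 5 imitation updates (the paper's S) are our assumptions:

```python
def imitation_updates(step, total_steps, base_updates=5):
    # Linearly anneal the number of behavior-cloning updates per
    # episode from base_updates down to 0 over the course of training.
    frac = max(0.0, 1.0 - step / total_steps)
    return int(round(base_updates * frac))
```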
2. What are the strengths of the proposed approach, particularly in its simplicity and empirical evaluation?
3. Are there any concerns or questions regarding the comparison with other approaches, such as Never Give Up?
4. Do you have any questions about the storage of state-action pairs in the replay buffer?
5. How do you assess the limitation of the continuous control experiments for MuJoCo locomotion tasks?
6. Can the local and global scores be applied to observations directly?
7. How were the weights chosen for the total score, and what happens when the weight for the global score is higher?
8. Why is it necessary to anneal the imitation learning to zero for some environments, but not others?
9. What is the state space for each task used to compute the local and global scores? | Review | Review
Summary: This paper tackles the problem of improving exploration in deep RL for procedurally-generated environments, where state-of-the-art exploration techniques typically fail. In the proposed approach, called RAPID, each agent-generated episode is evaluated with respect to its local exploration score (for the given episode), global exploration score (across all previous episodes), and extrinsic reward obtained. Episodes with high scores are stored in a replay buffer, and a policy is trained via behavioral cloning on batches of state-action pairs from this buffer. This policy is also used to produce the agent-generated episodes.
Recommendation: The approach is simple and intuitive, and empirically improves exploration in procedurally-generated environments. Training on procedurally-generated environments is becoming more common and useful, for instance in domain randomization for sim-to-real transfer in robotics, so this approach would be relevant for the ICLR audience. I have some concerns and questions (detailed below), but overall I recommend acceptance.
Pros:
The approach itself is simple and easy to implement.
The paper is clearly written and well-motivated.
The empirical evaluation is thorough: RAPID is compared against a suite of state-of-the-art baselines for exploration bonuses, exploration in procedurally-generated environments, and a self-imitation approach; there are also ablation studies and hyperparameter sensitivity studies. The ablation study comparing behavioral cloning versus exploration bonuses (where the per-episode exploration scores are given as part of the reward to the agent) is particularly interesting, given that existing approaches typically rely on the latter.
The empirical evaluation is on a variety of domains, including both discrete-action and continuous control tasks.
Cons:
I would like to see an empirical comparison against Never Give Up (NGU; Badia et al. 2020), which also uses episodic novelty and global novelty to guide exploration. Although NGU uses the same environment for training and testing, because it takes into account how controllable a state is, it wouldn't suffer from the limitation highlighted in Figure 1 (where a state is randomly generated, regardless of the agent's action), and may do well in procedurally-generated environments.
It seems strange to me that single state-action pairs are stored in the replay buffer, rather than keeping all state-action pairs from an entire episode together. It's possible that a particular state-action pair may only be good in terms of exploration, in the context of the rest of the agent's trajectory in that episode.
The continuous control experiments for MuJoCo locomotion tasks are contrived, since the extrinsic reward for forward progress is summed and given at the end of the episode, which just doesn't make sense for locomotion tasks, and unnecessarily hampers the baseline approaches. Using a goal-reaching continuous control task would be more relevant: e.g., Swimmer in a procedurally-generated maze.
I would like to see a discussion of limitations and failure cases for RAPID.
There are minor grammatical errors and typos throughout the paper.
Questions:
Can the local score and global score be applied to observations (e.g., image observations) directly, instead of states?
Are the scores associated with state-action pairs in the replay buffer taken into account during sampling of batches, or ignored?
How were the weights chosen for the total score, that is a weighted sum of the extrinsic reward, local score, and global score? The weight for the global score is very small compared to the others (i.e., 0.001 versus 0.1 and 1). Why is this the case? What happens when this weight is higher (e.g., is there some type of suboptimal behavior that emerges)?
Why is it necessary to anneal the imitation learning to zero for some environments, but not others?
What is the state space for each of the tasks, that's used to compute the local and global scores? It would be useful to have a table in the Appendix that summarizes this. |
ICLR | Title
Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments
Abstract
Exploration under sparse reward is a long-standing challenge of model-free reinforcement learning. The state-of-the-art methods address this challenge by introducing intrinsic rewards to encourage exploration in novel states or uncertain environment dynamics. Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once. Motivated by how humans distinguish good exploration behaviors by looking into the entire episode, we introduce RAPID, a simple yet effective episode-level exploration method for procedurally-generated environments. RAPID regards each episode as a whole and gives an episodic exploration score from both per-episode and long-term views. Those highly scored episodes are treated as good exploration behaviors and are stored in a small ranking buffer. The agent then imitates the episodes in the buffer to reproduce the past good exploration behaviors. We demonstrate our method on several procedurally-generated MiniGrid environments, a first-person-view 3D Maze navigation task from MiniWorld, and several sparse MuJoCo tasks. The results show that RAPID significantly outperforms the state-of-the-art intrinsic reward strategies in terms of sample efficiency and final performance. The code is available at https://github.com/daochenzha/rapid.
1 INTRODUCTION
Deep reinforcement learning (RL) is widely applied in various domains (Mnih et al., 2015; Silver et al., 2016; Mnih et al., 2016; Lillicrap et al., 2015; Andrychowicz et al., 2017; Zha et al., 2019a; Liu et al., 2020). However, RL algorithms often require tremendous number of samples to achieve reasonable performance (Hessel et al., 2018). This sample efficiency issue becomes more pronounced in sparse reward environments, where the agent may take an extremely long time before bumping into a reward signal (Riedmiller et al., 2018). Thus, how to efficiently explore the environment under sparse reward remains an open challenge (Osband et al., 2019).
To address the above challenge, many exploration methods have been investigated and demonstrated to be effective on hard-exploration environments. One of the most popular techniques is to use intrinsic rewards to encourage exploration (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Guo et al., 2016; Zheng et al., 2018; Du et al., 2019). The key idea is to give intrinsic bonus based on uncertainty, e.g., assigning higher rewards to novel states (Ostrovski et al., 2017), or using the prediction error of a dynamic model as the intrinsic reward (Pathak et al., 2017). While many intrinsic reward methods have demonstrated superior performance on hard-exploration environments, such as Montezuma’s Revenge (Burda et al., 2018b) and Pitfall! (Badia et al., 2019), most of the previous studies use the same singleton environment for training and testing, i.e., the agent aims to solve the same environment in each episode. However, recent studies show that the agent trained in this way is susceptible to overfitting and may fail to generalize to even a slightly different environment (Rajeswaran et al., 2017; Zhang et al., 2018a). To deal with this issue, a few procedually-generated environments
are designed to test the generalization of RL, such as (Beattie et al., 2016; Chevalier-Boisvert et al., 2018; Nichol et al., 2018; Côté et al., 2018; Cobbe et al., 2019; Küttler et al., 2020), in which the agent aims to solve the same task, but a different environment is generated in each episode.
Unfortunately, encouraging exploration in procedually-generated environments is a very challenging task. Many intrinsic reward methods, such as count-based exploration and curiosity-driven exploration, often fall short in procedually-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020). Figure 1a shows a motivating example of why count-based exploration is less effective in procedually-generated environments. We consider a 1D grid-world, where the agent (red) needs to explore the environment to reach the goal within 7 timesteps, that is, the agent needs to move right in all the steps. While count-based exploration assigns reasonable intrinsic rewards in singleton setting, it may generate misleading intrinsic rewards in procedually-generated setting because visiting a novel state does not necessarily mean a good exploration behavior. This issue becomes more pronounced in more challenging procedually-generated environments, where the agent is not likely to visit the same state more than once. To tackle the above challenge, this work studies how to reward good exploration behaviors that can generalize to different environments.
When a human judges how well an agent explores the environment, she often views the agent from the episode-level instead of the state-level. For instance, one can easily tell whether an agent has explored a maze well by looking into the coverage rate of the current episode, even if the current environment is different from the previous ones. In Figure 1b, we can confidently tell that the episode in the right-hand-side is a good exploration behavior because the agent achieves 0.875 coverage rate. Similarly, we can safely conclude that the episode in the left-hand-side does not explore the environment well due to its low coverage rate. Motivated by this, we hypothesize that episode-level exploration score could be a more general criterion to distinguish good exploration behaviors than state-level intrinsic rewards in procedually-generated setting.
To verify our hypothesis, we propose exploration via Ranking the Episodes (RAPID). Specifically, we identify good exploration behaviors by assigning episodic exploration scores to past episodes. To efficiently learn the past good exploration behaviors, we use a ranking buffer to store the highlyscored episodes. The agent then learns to reproduce the past good exploration behaviors with imitation learning. We demonstrate that RAPID significantly outperforms the state-of-the-art intrinsic reward methods on several procedually-generated benchmarks. Moreover, we present extensive ablations and analysis for RAPID, showing that RAPID can well explore the environment even without extrinsic rewards and could be generally applied in tasks with continuous state/action spaces.
2 EXPLORATION VIA RANKING THE EPISODES
An overview of RAPID is shown in Figure 2. The key idea is to assign episodic exploration scores to past episodes and store those highly-scored episodes into a small buffer. The agent then treats the episodes in the buffer as demonstrations and learns to reproduce these episodes with imitation learning. RAPID encourages the agent to explore the environment by reproducing the past good
exploration behaviors. This section introduces how we define the episodic exploration score for procedurally-generated environments and how the proposed method can be combined with state-ofthe-art reinforcement learning agents.
2.1 EPISODIC EXPLORATION SCORE
Each episode is scored from both local and global perspectives. The local score is a per-episode view of the exploration behavior, which measures how well the current episode itself explores the environment. The global score provides a long-term and historical view of exploration, i.e., whether the current episode has explored the regions that are not well explored in the past. Additionally, we consider extrinsic reward, which is the episodic reward received from the environment. It is an external criterion of exploration since a high extrinsic reward often suggests good exploration in sparse environments. The overall episodic exploration score is obtained by the weighted sum of the above three scores. We expect that these scores can model different aspects of exploration behaviors and they are complementary in procedurally-generated environments.
Local Score. The intuition of the local exploration bonus is that the agent is expected to visit as many distinct states as possible in one episode. In our preliminary experiments on MiniGrid, we observe that many poor exploration episodes repeatedly visit the same state. As a result, the agent may get stuck in a well-known region but never reach out to unexplored regions. From a local view, we can quantify this exploration behavior based on the diversity of the visited states. Specifically, we define the local exploration score as
$$S_{local} = \frac{N_{distinct}}{N_{total}} \qquad (1)$$
where Ntotal is the total number of states in the episode, and Ndistinct is the number of distinct states in the episode. Intuitively, optimizing the local score will encourage the agent to reach out to the unexplored regions in the current episode. In an ideal case, the episodes that never visit the same state twice will receive a local bonus of 1. There could be various ways to extend the above definition to continuous state space. In this work, we empirically define the local score in continuous state space as the mean standard deviation of all the states in the episode:
$$S_{local} = \frac{\sum_{i=1}^{l} \mathrm{std}(s_i)}{l} \qquad (2)$$
where l is the dimension of the state, and std(si) is the standard deviation along the i-th dimension of the states in the episode. Intuitively, this score encourages the episodes that visit diverse states.
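A minimal sketch of both local-score variants (Eqs. 1 and 2) is given below; the function names and the NumPy-based handling of states are illustrative assumptions rather than the authors' exact implementation. For the discrete case, states must be hashable (e.g., a flattened observation converted with `tobytes()`).

```python
import numpy as np

def local_score_discrete(states):
    # Eq. (1): fraction of distinct states among all states in the episode.
    # `states` is a list of hashable state representations.
    return len(set(states)) / len(states)

def local_score_continuous(states):
    # Eq. (2): mean per-dimension standard deviation over the episode.
    # `states` has shape (episode_length, l), where l is the state dimension.
    states = np.asarray(states)
    return float(np.mean(np.std(states, axis=0)))
```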
Global Score. The global score is designed to model long-term exploration behavior. While a different environment will be generated in each episode in the procedurally-generated setting, the environments in different episodes usually share some common characteristics. For example, in the
Algorithm 1 Exploration via Ranking the Episodes (RAPID)
1: Input: training steps S, buffer size D, RL rollout steps T
2: Initialize the policy πθ and the ranking buffer B
3: for iteration = 1, 2, ... until convergence do
4:     Execute πθ for T timesteps
5:     Update πθ with the RL objective (also update value functions, if any)
6:     for each generated episode τ do
7:         Compute the episodic exploration score Sτ based on Eq. (4)
8:         Assign the score Sτ to all the state-action pairs in τ and store them in B
9:         Rank the state-action pairs in B based on their exploration scores
10:        if B.length > D then
11:            Discard the state-action pairs with the lowest scores so that B.length = D
12:        end if
13:        for step = 1, 2, ..., S do
14:            Sample a batch from B and train πθ on it with behavior cloning
15:        end for
16:    end for
17: end for
MiniGrid MultiRoom environment (Figure 3a), although a different maze will be generated in a new episode, the basic building blocks (e.g., walls, doors, etc.), the number of rooms, and the size of each room will be similar. From a global view, the agent should explore the regions that are not well explored in the past. Motivated by count-based exploration, we define the global score as
$$S_{global} = \frac{1}{N_{total}} \sum_{s} \frac{1}{\sqrt{N(s)}} \qquad (3)$$
where N(s) is the state count of s throughout the training process, that is, the global score is the mean count-based intrinsic reward of all the states in the episode.
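The global score can be sketched with a persistent count table; the `state_counts` dictionary and the function names are assumptions for illustration, and counts are assumed to be updated with `update_counts` before scoring so that N(s) ≥ 1 for every visited state.

```python
from collections import defaultdict
import numpy as np

state_counts = defaultdict(int)  # N(s), maintained over the whole training run

def update_counts(states):
    # Increment the visit count of every state in the finished episode.
    for s in states:
        state_counts[s] += 1

def global_score(states):
    # Eq. (3): mean count-based bonus 1/sqrt(N(s)) over the episode.
    return float(np.mean([1.0 / np.sqrt(state_counts[s]) for s in states]))
```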
Episodic Exploration Score. Suppose Sext is the total extrinsic reward of an episode received from the environment. The episodic exploration score is obtained by the weighted sum of the extrinsic reward Sext, the local score Slocal, and the global score Sglobal:
$$S = w_0 S_{ext} + w_1 S_{local} + w_2 S_{global} \qquad (4)$$
where w0, w1, and w2 are hyperparameters. We speculate that the three scores are complementary and the optimal w0, w1, and w2 could vary in different environments. For example, a higher w1 will give a higher weight to the per-episode view of exploration behavior. A higher w2 will focus more on whether the current episode has explored the past unexplored regions. Nevertheless, we find that a single set of w0, w1, and w2 works well across many tasks.
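Putting the pieces together, Eq. (4) reduces to a weighted sum; the sketch below reuses the score functions above, and the default weights shown are placeholders rather than the tuned values (only w0 = 1 is fixed in the paper, see Appendix A).

```python
def episodic_exploration_score(ext_return, states, w0=1.0, w1=0.1, w2=0.001):
    # Eq. (4): weighted sum of the extrinsic return, local, and global scores.
    # w1 and w2 here are placeholder values, not the tuned ones.
    return (w0 * ext_return
            + w1 * local_score_discrete(states)
            + w2 * global_score(states))
```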
Comparison with Prior Work. Several papers have studied episode-level feedback to improve the sample efficiency of RL. To the best of our knowledge, the existing work mainly focuses on learning a meta-policy based on episodic rewards (Xu et al., 2018; Zha et al., 2019b), or imitating the episodes with high episodic rewards (Guo et al., 2018; Srivastava et al., 2019). However, in very sparse environments, the agent is unlikely to receive a non-zero reward without exploration. These methods are not suitable for very sparse environments since episodic rewards could always be zero. In this work, we propose local and global episodic scores to quantify the exploration behaviors and encourage the agent to visit distinct states from per-episode and long-term views.
2.2 REPRODUCING PAST GOOD EXPLORATION BEHAVIORS WITH IMITATION LEARNING
A straightforward idea is to treat the episodic exploration score as a reward signal and assign it to the last timestep of the episode. In this way, the agent can be updated with any reinforcement learning objective. However, this strategy is not very effective, owing to three potential limitations. First, the reward remains sparse so that the reinforcement learning agents may not be able to effectively exploit the reward signals. Second, the intrinsic exploration bonus may introduce bias in the objective. While the local score and global score may encourage exploration at the beginning stage, they are
not true objectives and may degrade the final performance. Third, it only uses a good episode once. In fact, many past good episodes can be exploited many times to improve sample efficiency.
To overcome the above limitations, we propose to use a small ranking buffer to store past good state-action pairs and then use an imitation objective to enhance exploration. Specifically, we assign the same episodic exploration score to all the state-action pairs in an episode and rank all the state-action pairs based on their scores. In addition to the original RL objective, we employ behavior cloning, the simplest form of imitation learning, to encourage the agent to reproduce the state-action pairs in the buffer. This simple ranking buffer can effectively exploit the past good experiences since a good state-action pair could be sampled more than once. The procedure is summarized in Algorithm 1, and a minimal sketch of the buffer follows below. A possible variant is to keep the entire episodes in the buffer. We have tried this idea and do not observe a clear difference between this variant and Algorithm 1 (see Appendix H for more discussions).
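The ranking buffer of Algorithm 1 can be sketched as follows; the class layout is an assumption, and in practice the sampled batch would be fed to a standard cross-entropy behavior-cloning loss alongside the PPO objective.

```python
import numpy as np

class RankingBuffer:
    """Keeps the top-`capacity` state-action pairs ranked by episode score."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []  # list of (score, state, action) tuples

    def add_episode(self, states, actions, score):
        # Every state-action pair inherits the episodic exploration score.
        self.data.extend((score, s, a) for s, a in zip(states, actions))
        # Rank by score and discard the lowest-scored pairs beyond capacity.
        self.data.sort(key=lambda x: x[0], reverse=True)
        del self.data[self.capacity:]

    def sample(self, batch_size):
        idx = np.random.randint(len(self.data), size=batch_size)
        states = np.array([self.data[i][1] for i in idx])
        actions = np.array([self.data[i][2] for i in idx])
        return states, actions
```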
3 EXPERIMENTS
The experiments are designed to answer the following research questions: RQ1: how does RAPID compare with the state-of-the-art intrinsic reward methods on benchmark procedurally-generated environments (Section 3.2)? RQ2: how will RAPID perform if removing one of the scores (i.e., the local score, the global score, and the extrinsic reward) or the buffer (Section 3.3)? RQ3: how will the hyperparameters impact the performance of RAPID (Section 3.3)? RQ4: can RAPID explore the environment without extrinsic reward (Section 3.3)? RQ5: how will RAPID perform with larger and more challenging grid-worlds (Section 3.3)? RQ6: can RAPID generalize to a 3D navigation task and continuous state/action spaces, such as MuJoCo (Section 3.4)?
3.1 ENVIRONMENTS AND BASELINES
Figure 3 shows the rendering of the procedurally-generated environments used in this work. Following the previous studies of exploration in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020), we mainly focus on MiniGrid environments (Chevalier-Boisvert et al., 2018), which provide procedurally-generated grid-worlds. We consider two types of hard-exploration tasks: MultiRoom-NX-SY, where X and Y indicate the number of rooms and the room size, respectively, and KeyCorridor-SX-RY, where X and Y indicate the room size and the number of rows, respectively. A large X or Y suggests a larger environment, which will be more difficult to explore. The agent has a partial view of the environment and needs to navigate to the green goal block or the red ball under sparse rewards. These tasks are challenging because the agent needs to explore many rooms in order to receive a reward signal, so that standard RL algorithms will typically fail. To test whether RAPID can generalize to continuous state space, we consider MiniWorld Maze (Chevalier-Boisvert, 2018), whose observation space is 60 × 80 images. Similarly, the agent in MiniWorld Maze has a 3D partial view of the environment and needs to navigate to the red box. The maze is procedurally-generated, i.e., a different maze will be generated in each episode. We further study RAPID in continuous state and action spaces on sparse MuJoCo tasks (Todorov et al., 2012; Brockman et al., 2016). More details of the above environments are described in Appendix B.
While exploration has been extensively studied in the literature, very few methods are designed for procedurally-generated environments. To better understand the performance of RAPID, we consider the baselines in the following categories. Traditional Methods: we consider several popular methods designed for singleton environments, including count-based exploration (COUNT) (Bellemare et al., 2016), Random Network Distillation Exploration (RANDOM) (Burda et al., 2018b),
and curiosity-driven exploration (CURIOSITY) (Pathak et al., 2017). Methods for Procedurally-Generated Environments: we consider two state-of-the-art exploration methods, including Impact-Driven Exploration (RIDE) (Raileanu & Rocktäschel, 2020), and exploration with Adversarially Motivated Intrinsic Goals (AMIGO) (Campero et al., 2020). Self-Imitation Methods: self-imitation learning (SIL) (Oh et al., 2018) also exploits past good experiences to encourage exploration. We further include PPO (Schulman et al., 2017) as a reinforcement learning baseline. All the algorithms are run for the same number of timesteps for a fair comparison. More details are provided in Appendix A.
3.2 MINIGRID RESULTS
To study RQ1, we plot the learning curves of RAPID and the baselines on MiniGrid benchmarks in Figure 4. We can make three observations. First, RAPID performs significantly better than the state-of-the-art methods designed for singleton or procedurally-generated environments in terms of sample efficiency and final performance. For example, on MultiRoom-N7-S8, RAPID is at least 10 times more sample efficient than RIDE. On KeyCorridor-S4-R3, RAPID is the only algorithm that successfully navigates to the goal within 20 million timesteps. Second, the traditional intrinsic reward methods perform well on easy procedurally-generated environments but perform poorly on hard-exploration tasks. For example, COUNT performs as well as RAPID on KeyCorridor-S3-R2 but fails on more challenging tasks, such as MultiRoom-N7-S8. This suggests that traditional methods may fall short in challenging procedurally-generated environments, which is consistent with the results in previous work (Raileanu & Rocktäschel, 2020). Third, RAPID outperforms SIL across all the tasks. Recall that both RAPID and SIL learn to reproduce past good experiences. While SIL reproduces experiences with high rewards, RAPID additionally considers local and global exploration scores. This verifies the effectiveness of the episodic exploration scores.
3.3 ABLATIONS AND ANALYSIS
To study RQ2, we perform ablation studies on RAPID. Specifically, we remove one of the local score, the global score, and the extrinsic reward, and compare each ablation with RAPID. Besides, we consider a variant that directly uses the episodic exploration score as the reward signal without the buffer. To demonstrate the effectiveness of the ranking, we further include a baseline that uses the replay buffer without ranking. Table 1 shows the results. We observe that the local score contributes the most to the performance. While most existing exploration methods reward exploration globally, the global score alone may not account for generalization, which may explain the unsatisfactory performance of COUNT and CURIOSITY. While the global score seems to play a relatively small role compared to the local score in most MiniGrid environments, it plays an important role in KeyCorridor-S4-R3. A possible reason is that the local score is too weak a signal in KeyCorridor-S4-R3, and the global score may help alleviate this issue since it provides some historical information. We also observe that the agent fails in almost all the tasks without the buffer or the ranking. This suggests that the ranking buffer is an effective way to exploit past good exploration behaviors.
To study RQ3, we plot the learning curves with different training steps S and buffer sizes D in Figures 5a and 5b. We observe that a larger number of training steps usually leads to faster learning at the beginning stage but may result in sub-optimal performance. For the buffer size, we observe that a large buffer will lead to slower learning with more stable performance. We speculate that there is a trade-off between learning speed and final performance. In our experiments, we set the training steps S to 5 and the buffer size D to 10000 across all the tasks.
To investigate RQ4, we remove extrinsic rewards and show the learning curves of RAPID and the baselines in Figure 5c, with the corresponding local exploration scores in Figure 5d. We make two interesting observations. First, without extrinsic rewards, RAPID can deliver reasonable performance. For example, on MultiRoom-N12-S10, RAPID achieves nearly 0.5 average return with pure exploration, compared with around 0.6 if using extrinsic rewards. We look into the episodes generated by RAPID with pure exploration and observe that, in most cases, the agent is able to reach the goal but struggles to find the optimal path to the goal. This is expected because the extrinsic reward is 1 if the agent reaches the goal, minus a penalty for the timesteps spent. The extrinsic reward can encourage the agent to choose the shortest path to reach the goal. For other baselines, we observe that only RIDE and AMIGO can occasionally reach the goal. Second, we observe that some baselines, such as RIDE, are indirectly maximizing the local exploration scores. For example, we observe that the local exploration score increases for RIDE on MultiRoom-N12-S10. However, the local score of RAPID drops after about 10 million timesteps. This suggests that the episode-level score may better align with good exploration behaviors. A possible explanation for the superiority of RAPID is that it is designed to directly optimize the episodic exploration score.
To answer RQ5, we focus on MultiRoom-NX-SY and progressively vary X and Y to make the environments more challenging. Specifically, we fix Y to be 4 and vary X in the left panel of Figure 6, and fix X to be 4 and vary Y in the middle panel of Figure 6. Compared with RIDE, our RAPID delivers stronger performance on the more challenging grid-worlds.
3.4 MINIWORLD MAZE AND SPARSE MUJOCO RESULTS
For RQ6, the learning curves on the 3D Maze are shown in the right panel of Figure 6. RAPID performs significantly better than the baselines. We also conduct experiments on a subset of sparse MuJoCo tasks (Figure 7). The results show that RAPID improves over PPO and SIL. Interestingly, RAPID achieves an average return of more than 200 on Swimmer, which surpasses some previous results reported on the dense-reward counterpart (Duan et al., 2016; Henderson et al., 2018; Lai et al., 2020). Note that the MuJoCo environments are singleton. We use these environments solely to test whether RAPID can deal with continuous state/action spaces. We have also tried adapting Swimmer to be procedurally-generated and observe similar results (Appendix I).
4 RELATED WORK
Count-Based and Curiosity-Driven Exploration. These two classes of intrinsic motivations are the most popular and have proven to be effective in the literature. Count-based exploration encourages the agent to visit novel states, which was first studied in the tabular setting (Strehl & Littman, 2008) and has recently been extended to more complex state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Martin et al., 2017; Machado et al., 2018; Osband et al., 2019). Curiosity-driven exploration learns the environment dynamics to encourage exploration (Stadie et al., 2015; Pathak et al., 2017; Sukhbaatar et al., 2018; Burda et al., 2018a). However, these methods are designed for singleton environments and often struggle to generalize to procedurally-generated environments (Raileanu & Rocktäschel, 2020).
Exploration for Procedurally-Generated Environments. Several recent studies have discussed the generalization of reinforcement learning (Rajeswaran et al., 2017; Zhang et al., 2018a;b; Choi et al., 2018) and designed procedurally-generated environments to test the generalization of reinforcement learning (Beattie et al., 2016; Nichol et al., 2018; Küttler et al., 2020). More recent papers show that traditional exploration methods fall short in procedurally-generated environments and address this issue with new exploration methods (Raileanu & Rocktäschel, 2020; Campero et al., 2020). This work studies exploration bonuses from a new, episode-level perspective and achieves significantly better performance than the previous methods on procedurally-generated benchmarks.
Experience Replay Buffer and Self-Imitation Learning. The experience replay mechanism is widely used and was initially designed to stabilize deep reinforcement learning (Lin, 1992; 1993; Mnih et al., 2015; Zhang & Sutton, 2017; Zha et al., 2019b; Fedus et al., 2020). Self-Imitation Learning extends the experience replay buffer and uses it to store past good experiences (Oh et al., 2018; Guo et al., 2018; 2019; Gangwani et al., 2018). However, Self-Imitation only exploits highly-rewarded experiences and thus is not suitable for very sparse environments. Our work incorporates local and global scores to reproduce past good exploration behaviors in sparse procedurally-generated environments.
Episodic Memory. Inspired by the episodic memory of animals, some studies propose to use episodic memory to enhance sample efficiency (Blundell et al., 2016; Pritzel et al., 2017). The key idea is to memorize and replay the good episodic experiences. More recently, episodic memory
is used to generate intrinsic rewards to guide exploration (Savinov et al., 2018). Never Give Up (NGU) (Badia et al., 2019) further considers local and global novelties in the intrinsic rewards. Our work differs from NGU in that we approach sample efficiency from a different angle by treating each episode as a whole and using a ranking mechanism to select good experiences. Thus, good experiences can be exploited multiple times to improve sample efficiency.
5 LIMITATIONS AND DISCUSSIONS
RAPID is designed to study our hypothesis that imitating episode-level good exploration behaviors can encourage exploration in procedurally-generated environments. In the presented algorithm, we have made some simple choices. In order to make RAPID more competitive with the state-of-the-art methods in more complicated environments, we highlight in this section some limitations of RAPID and potential research directions.
First, in our work, the local and global scores are mainly calculated on low-dimensional state space. We have not yet tested the scores on more complex procedurally-generated environments, such as ProcGen benchmark (Cobbe et al., 2019), TextWorld (Côté et al., 2018), and NetHack Learning Environment (Küttler et al., 2020), some of which require high-dimensional image inputs. In this context, count-based scores may not be well-suited. Learning the state embedding (Raileanu & Rocktäschel, 2020) and pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017) could be the potential ways to address this issue. Moreover, high extrinsic rewards may not necessarily mean good exploration behaviors in these environments. The applicability of RAPID in these environments needs future research efforts.
Second, the ranking buffer greedily selects the highly-scored state-action pairs, which is the key to boosting the performance. However, with this greedy strategy, some highly-ranked state-action pairs may never be forgotten even if they are out-of-date, which may introduce bias. As shown in Figure 5b, a larger buffer size will lead to more stable performance but will sacrifice sample efficiency. In more complicated environments, selecting the proper buffer size could be non-trivial. Forgetting mechanisms (Novati & Koumoutsakos, 2019) and meta-learning strategies (Zha et al., 2019b) could be potential directions to better manage the buffer.
Third, in our implementation, we employ behavior cloning, the simplest form of imitation learning, to train the policy. In recent years, many more advanced imitation learning methods have been proposed (Ho & Ermon, 2016; Hester et al., 2017). In particular, the idea of inverse reinforcement learning (Abbeel & Ng, 2004) is to learn reward functions based on the demonstrations. An interesting future direction is to study whether we can derive intrinsic rewards based on the experiences in the buffer via inverse reinforcement learning. Specifically, is there a connection between RAPID and intrinsic rewards from the view of inverse reinforcement learning? Can RAPID be combined with intrinsic rewards to improve the performance? We hope that RAPID can facilitate future research to understand these questions.
6 CONCLUSIONS AND FUTURE WORK
This work studies how we can reward good exploration behaviors in procedurally-generated environments. We hypothesize that episode-level exploration score could be a more general criterion to distinguish good exploration behaviors. To verify our hypothesis, we design RAPID, a practical algorithm that learns to reproduce past good exploration behaviors with a ranking buffer. Experiments on benchmark hard-exploration procedurally-generated environments show that RAPID significantly outperforms state-of-the-art intrinsic reward methods in terms of sample efficiency and final performance. Further analysis shows that RAPID has robust performance across different configurations, even without extrinsic rewards. Moreover, experiments on a 3D navigation task and a subset of sparse MuJoCo tasks show that RAPID could generalize to continuous state/action spaces, showing the promise in rewarding good episode-level exploration behaviors to encourage exploration. In the future, we will extend our idea and test it in more challenging procedurally-generated environments. We will also explore the possibility of combining the episode-level exploration bonus with intrinsic rewards. Last, we will try other imitation learning techniques beyond behavior cloning.
A HYPERPARAMETERS AND NEURAL NETWORK ARCHITECTURE
For the proposed RAPID, the common hyperparameters are summarized in Table 2. RAPID is based on the PPO implementation from OpenAI baselines [1]. The nsteps is 128 for all MiniGrid environments and MuJoCo environments, and 512 for MiniWorld-Maze-S5. For MuJoCo environments, we report the results with only the imitation loss since we find that the policy loss of PPO harms the performance in the episodic reward setting. The learning rate is 10^-4 for all MiniGrid environments and MiniWorld-Maze-S5, and 5 × 10^-4 for MuJoCo environments. Since MiniWorld-Maze-S5 and the MuJoCo environments have continuous state spaces, we set w2 = 0. In practice, in MiniWorld-Maze-S5, we observe that counting the states usually leads to memory errors because it is not likely to encounter the same state again in MiniWorld-Maze-S5 (COUNT is excluded from the comparison due to memory issues). Note that it is possible to use pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017), which we will study in our future work. For MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0 and MiniWorld-Maze-S5, the update frequency of imitation learning is linearly annealed to 0 throughout the training process. For the other environments, the imitation learning is performed after each episode. We keep all other hyperparameters of PPO at their defaults and use the default CNN architecture provided in OpenAI baselines for MiniWorld-Maze-S5 and a 64-64 MLP for the other environments. The PPO baseline also uses the same hyperparameters. We summarize the state space of each environment and how the local and global scores are computed in Table 3.
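The stated settings can be collected into a single configuration sketch; values not given in the text (e.g., the remaining PPO defaults) are deliberately omitted rather than guessed.

```python
# Hyperparameters as stated in this appendix and Section 3.3.
RAPID_CONFIG = {
    "nsteps": 128,          # 512 for MiniWorld-Maze-S5
    "learning_rate": 1e-4,  # 5e-4 for MuJoCo environments
    "bc_train_steps": 5,    # S in Algorithm 1
    "buffer_size": 10000,   # D in Algorithm 1
    "w0": 1.0,              # extrinsic-reward weight, fixed
    "w2": 0.0,              # global score disabled for continuous state spaces
}
```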
For RANDOM, we implement the two networks with 64-64 MLPs. For CURIOSITY and RIDE, we use a 64-64 MLP for the state embedding model, the forward dynamics model, and the inverse dynamics model, respectively. However, with an extensive hyperparameter search, we are not able to reproduce the RIDE results in MiniGrid environments even with the exact same neural architecture and hyperparameters as suggested in (Raileanu & Rocktäschel, 2020). Thus, we use the authors’ implementation [2] (Raileanu & Rocktäschel, 2020), which is well-tuned on MiniGrid tasks. Our reported result
[1] https://github.com/openai/baselines [2] https://github.com/facebookresearch/impact-driven-exploration
is on par with that reported in (Raileanu & Rocktäschel, 2020). For SIL and AMIGO, we use the implementations from the authors [3][4] with the recommended hyperparameters.
For the methods based on intrinsic rewards, i.e., RIDE, AMIGO, CURIOSITY, RANDOM, and COUNT, we search the intrinsic reward coefficient from {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. For RIDE, the intrinsic reward coefficient is 0.1 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.5 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0. For AMIGO and CURIOSITY, the intrinsic reward coefficient is 0.1 for all the environments. For COUNT, the intrinsic reward coefficient is 0.005 for all the environments. For RANDOM, the intrinsic reward coefficient is 0.001 for all the environments. We also tune the entropy coefficient of RIDE from {0.001, 0.0005}, as suggested by the original paper. The entropy coefficient is 0.0005 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.001 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0.
The w0, w1, and w2 are selected as follows. We first run RAPID on MiniGrid-MultiRoom-N7S8-v0. We choose this environment because it is neither too hard nor too easy, so we expect that it is representative among all the considered MiniGrid environments. We fix w0 to be 1, and select w1 and w2 from {10^-4, 10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4}, that is, there are 9 × 9 = 81 combinations. Note that we can fix w0 because only the relative ranking matters in the ranking buffer. The selected w1 and w2 are then directly used in the other environments without tuning. We further conduct a sensitivity analysis on MiniGrid-MultiRoom-N7S8-v0 in Figure 8. We observe that RAPID has good performance in a very wide range of hyperparameter choices.
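The search procedure amounts to a simple grid sweep; `evaluate_rapid` below is a hypothetical helper standing in for a full training-and-evaluation run.

```python
import itertools

grid = [10.0 ** k for k in range(-4, 5)]  # {1e-4, ..., 1e4}, 9 values
for w1, w2 in itertools.product(grid, grid):  # 9 x 9 = 81 combinations
    evaluate_rapid(w0=1.0, w1=w1, w2=w2)  # hypothetical training/evaluation run
```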
B ENVIRONMENTS
B.1 MINIGRID ENVIRONMENTS
MiniGrid [5] is a suite of light-weight and fast grid-world environments with OpenAI Gym interfaces. The environments are partially observable with a grid size of N × N. Each tile in the grid can have at most one object, that is, 0 or 1 object. The possible objects are wall, floor, lava, door, key, ball, box and goal. Each object has an associated color. The agent can pick up at most one object (ball or key). To open a locked door, the agent must use a key. Most of the environments in
[3] https://github.com/junhyukoh/self-imitation-learning [4] https://github.com/facebookresearch/adversarially-motivated-intrinsic-goals [5] https://github.com/maximecb/gym-minigrid
MiniGrid are procedurally-generated, i.e., a different grid will be sampled in each episode. Figure 9 shows the grids in four different episodes. The procedurally-generated nature makes RL training challenging because the RL agents need to learn skills that can generalize to different grids. The environments can be easily modified so that we can create different levels of the environments. For example, we can modify the number of rooms and the size of each room to create hard-exploration or easy-exploration environments. The flexibility of the MiniGrid environments enables us to conduct a systematic comparison of algorithms under different difficulties.
The original observations of MiniGrid are dictionaries, which consist of an image field providing a partially observable view of the environment, and a mission field describing the goal in text. In our experiments, we use ImgObsWrapper, which only keeps the image field. The environment uses a compact encoding with 3 input values per visible grid cell. Note that the cells behind walls or unopened doors are invisible.
The possible actions in MiniGrid are (1) turn left, (2) turn right, (3) move forward, (4) pick up an object, (5) drop the carried object, (6) open doors/interact with objects, and (7) done. The agent will remain in the same state if the action is not legal. For example, if there is no object in front of the agent, the agent will remain in the same state when performing action (4) pick up an object.
We focus on MultiRoom-NX-SY and KeyCorridor-SX-RY, where X and Y are hyperparameters specifying the number of rooms or the room sizes in the environments. With larger X and Y, the environments become more difficult to solve due to the sparse rewards. Figure 10 shows the environments (with different difficulties) used in this work. In MultiRoom-NX-SY, the agent needs to navigate to the green tile in the last room. If the agent successfully reaches the goal, it will receive a reward of 1 minus a penalty for the timesteps spent. In KeyCorridor-SX-RY, the agent needs to pick up the key, use the key to open the locked door, and pick up the ball. Similarly, the agent will receive a reward of 1 minus a penalty for the timesteps spent if it picks up the ball. Both environments are extremely difficult to solve using RL alone due to the sparse rewards.
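A minimal usage sketch with the gym-minigrid API is shown below; note that the default package registers MultiRoom-N6, while the larger variants used in the paper (e.g., N7-S8) require registering custom configurations.

```python
import gym
import gym_minigrid  # importing the package registers the MiniGrid environments
from gym_minigrid.wrappers import ImgObsWrapper

# Keep only the `image` field of the dict observation, as described above.
env = ImgObsWrapper(gym.make("MiniGrid-MultiRoom-N6-v0"))
obs = env.reset()  # compact encoding, 3 values per visible grid cell
obs, reward, done, info = env.step(env.action_space.sample())
```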
B.2 MINIWORLD MAZE ENVIRONMENT
MiniWorld [6] is a minimalistic 3D interior environment simulator. Similar to MiniGrid, we can easily create environments with different levels. We focus on MiniWorld-Maze, where the agent is asked to navigate to the goal through a procedurally-generated maze. The agent has a first-person, partially observable view of the environment. This environment is extremely difficult to solve due to the sparse reward, the long horizon, and the procedurally-generated nature of the environments.
[6] https://github.com/maximecb/gym-miniworld
In this work, we focus on MiniWorld-Maze-S5, a variant of the MiniWorld Maze environment with 5 × 5 tiles. Figure 11 shows the top view of the mazes in four different episodes. In each episode, the agent and the goal are initialized in the top-left and bottom-right corners of the map, respectively. In this way, we ensure that the agent and the goal are far enough apart so that the agent cannot easily see the goal without exploration. A random maze will be generated in each episode. There are three possible actions: (1) move forward, (2) turn left, and (3) turn right. The agent will move 0.4× the tile length if moving forward, where the tile length is the length of one side of a tile. The agent will rotate 22.5 degrees if turning right/left. The time budget of each episode is 600 timesteps. The agent will not receive any positive reward if it cannot reach the goal within the time budget.
B.3 SPARSE MUJOCO ENVIRONMENTS
MuJoCo is a physics engine for continuous control (Todorov et al., 2012). Figure 12 shows the rendering of the four MuJoCo tasks used in our work. The original MuJoCo environments use dense rewards, i.e., a well-defined reward is given at each timestep. However, in many scenarios, well-defined dense rewards may not be available. We consider variants of the MuJoCo tasks using only an episodic reward. Specifically, the original rewards are accumulated and only an episodic reward is given at the final timestep. The agent receives a reward of 0 at all the other timesteps. This sparse setting makes the tasks much more challenging to solve. Contemporary RL algorithms will usually suffer from the sparse rewards and deliver unsatisfactory performance. We aim to use these sparse variants to study whether our RAPID can effectively capture the sparse rewards.
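The sparse variant can be reproduced with a small wrapper that withholds the accumulated reward until the episode ends; this is a sketch consistent with the description above, not the authors' exact code.

```python
import gym

class EpisodicRewardWrapper(gym.Wrapper):
    """Accumulates dense rewards and releases the sum only at episode end."""

    def __init__(self, env):
        super().__init__(env)
        self._acc = 0.0

    def reset(self, **kwargs):
        self._acc = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._acc += reward
        sparse_reward = self._acc if done else 0.0
        return obs, sparse_reward, done, info

env = EpisodicRewardWrapper(gym.make("Swimmer-v2"))
```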
C ABLATIONS
Figure 13 shows the learning curves of RAPID against the different ablations. Without local scores, we observe a significant performance drop on the challenging environments in terms of sample efficiency and final performance, i.e., on MultiRoom-N7-S8, MultiRoom-N10-S10, MultiRoom-N12-S10, KeyCorridor-S3-R3, and KeyCorridor-S4-R3. This suggests that the proposed local score plays an important role in encouraging exploration. We also observe that without the buffer, the agent fails to learn a good policy. Naively using the proposed scores as an intrinsic reward will harm the performance. Using the extrinsic reward and the global score to rank the episodes also contributes in the challenging environments, such as KeyCorridor-S4-R3. Therefore, the three proposed scoring methods may be complementary and play different roles in different environments.
D FULL ANALYSIS RESULTS
D.1 IMPACT OF THE NUMBER OF UPDATE STEPS
D.2 IMPACT OF BUFFER SIZE
D.3 PURE EXPLORATION ON MINIGRID
D.4 LOCAL EXPLORATION SCORE OF PURE EXPLORATION ON MINIGRID
E LEARNING CURVES WITH MORE ROOMS
F LEARNING CURVES WITH LARGE ROOMS
G ABLATION AND ANALYSIS OF MINIWORLD-MAZE
H COMPARISON WITH STORING ENTIRE EPISODES IN THE BUFFER
A variant of RAPID is to keep the entire episodes in the buffer instead of state-action pairs. The intuition of keeping the whole episode is that a particular state-action pair may only be good in terms of exploration in the context of the rest of the agent’s trajectory in that episode. To test this variant, we force the buffer to keep the entire episodes by allowing the episode at the end of the buffer to exceed the buffer size. For example, given that the buffer size is 5000, if the length of the last episode is 160 and the number of state-action pairs in the buffer is 5120, we do not cut off the last episode and exceptionally allow the buffer to store 5120 state-action pairs. In this way, we ensure that the entire episodes are kept in the buffer. We run this experiment on MiniGrid-MultiRoom-N7-S8-v0 (Figure 25). We do not observe a clear difference in the learning curves.
I RESULTS ON PROCEDURALLY-GENERATED SWIMMER
The standard MuJoCo environments are singleton. Here, we modify the Swimmer-v2 environment to make it procedurally-generated. Specifically, we make the environment in each episode different by modifying the XML configuration file of the MuJoCo engine in every new episode. We consider two environmental properties, i.e., density and viscosity. Density is used to simulate lift and drag forces, which scale quadratically with velocity. Viscosity is used to simulate viscous forces, which scale linearly with velocity. In the standard Swimmer-v2 environment, the density and the viscosity are fixed to 4000 and 0.1, respectively. In the first experiment, we uniformly sample the density in [2000, 4000] in each new episode to make it procedurally-generated, denoted as Density-Swimmer. Similarly, in the second environment, we uniformly sample the viscosity in [0.1, 0.5], denoted as Viscosity-Swimmer. These two variants make RL training more challenging since the agent needs to learn a policy that can generalize to different densities and viscosities. Similarly, we accumulate the rewards to the final timestep of an episode to make the environments sparse. Both environments are provided in our code for reproducibility. The results are reported in Figure 26. We observe that all methods learn more slowly on these two variants. For example, RAPID only achieves a return of around 190 in Density-Swimmer, while RAPID achieves a return of more than 200 in Swimmer. Similarly, SIL reaches a return of around 100 after around 5 × 10^6 timesteps in Density-Swimmer, while it achieves a return of around 100 after around 3 × 10^6 timesteps in Swimmer. Nevertheless, RAPID can discover very good policies even in these challenging procedurally-generated settings.
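One way to approximate Density-Swimmer is to resample the medium density at every reset; setting `model.opt.density` through mujoco-py is an assumption made here for brevity, whereas the experiments above rewrite the MuJoCo XML per episode.

```python
import gym
import numpy as np

class DensitySwimmer(gym.Wrapper):
    """Resamples the medium density in each episode (sketch, not exact code)."""

    def reset(self, **kwargs):
        # Uniformly sample the density in [2000, 4000], as described above.
        self.env.unwrapped.model.opt.density = np.random.uniform(2000, 4000)
        return self.env.reset(**kwargs)

env = DensitySwimmer(gym.make("Swimmer-v2"))
```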
J DISCUSSIONS OF ANNEALING
In our experiments, we find that annealing the update frequency to 0 sometimes helps improve the sample efficiency. However, in some environments, annealing will have a negative effect on the final performance. To show the effect of annealing, we plot learning curves with or without annealing on MiniGrid-KeyCorridorS3R2-v0 and MiniGrid-MultiRoom-N12-S10-v0 in Figure 27. We observe that annealing can help in KeyCorridor-S3-R2. The possible explanation is that, after the agent reaches the goal, it has access to the reward signal, and the PPO objective is sufficient to train a good policy with the extrinsic rewards. The imitation loss may harm the performance since it could be better to perform exploitation rather than exploration at this stage. However, in MultiRoom-N12-S10, annealing will harm the performance. In particular, the performance drops significantly in the final stage when the update frequency of the imitation loss approaches zero. The possible reason is that MultiRoom-N12-S10 needs very deep exploration to find the goal. Even though the PPO agent can reach the goal, it may easily suffer from insufficient exploration when it encounters a new environment. As a result, the agent needs to keep exploring the environment in order to generalize. Introducing the imitation loss will ensure that the agent keeps exploring the environments. | 1. What is the main contribution of the paper regarding reinforcement learning agents?
2. What are the strengths of the proposed method, particularly in its ability to generalize and provide local and global rewards?
3. Do you have any concerns or doubts about the underlying assumptions made by the method, especially in relation to sparse rewards and deceptive environments?
4. How does the reviewer assess the significance and relevance of the paper's contributions in the context of previous research and benchmarks?
5. Are there any minor issues or suggestions regarding citations and future applications of the proposed method? | Review | Review
Summary: This paper focuses on building reinforcement learning agents for procedurally generated environments. In particular, it presents a method, RAPID, to intrinsically reward previously seen good exploration behaviors by attempting to imitate them in the future. Strong results are presented across multiple procedurally generated environments.
Pros:
Procedurally generated environments provide a rigorous test of an agent's ability to generalize, as learned exploration strategies cannot be reused as a whole. To counter this, they operate on the hypothesis that high-rewarding trajectories are more likely to contain more generalizable exploration behaviors. The breakdown of this reward into local (per-episode) and global extrinsic rewards gives the agent some idea of the areas of the game it historically performs poorly on.
Given that the idea itself is simple (yet effective!), I appreciate that most of the main paper is dedicated to experiments - with each set of experiments designed to answer a particular research question. This makes it easily accessible and digestible - answering many of the doubts I had when reading through the intro and methodology sections.
Cons: I have a couple of primary concerns mostly dealing with the underlying assumptions made.
This method rewards good exploration behavior historically and as such depends on the ability of the agent to discover such trajectories in the first place.
High extrinsic reward implies good exploration behavior. Both of these assumptions likely do not hold in environments with sparse/ill-placed/deceptive rewards, challenges that environments such as Montezuma's Revenge/NetHack/Interactive Fiction games are notorious for. In such environments, it is not only difficult to discover good exploration behaviors, but the overall extrinsic rewards themselves often lead to dead ends and globally suboptimal trajectories. The global score, being effectively count-based, alleviates some of this, but this form of intrinsic reward, being based on extrinsic reward, will likely fail in such scenarios. It would strengthen the paper to see positive results in such an environment.
Minor: A few citations to the ProcGen benchmark (Cobbe et al. 2020 https://arxiv.org/abs/1912.01588), TextWorld (Côté et al. 2018 https://arxiv.org/abs/1806.11532), and the NetHack Learning Environment (Küttler et al. https://arxiv.org/abs/2006.13760) are missing; these are frameworks for studying RL in procedurally generated settings (and also good candidates for future environments to try RAPID in!).
Regardless, this paper provides a valuable contribution in studying exploration in procedurally generated environments - with the core idea of rewarding historically good behavior being one well proven in the past. The motivation is clear and the experiments well designed - proving that there are indeed environments where this idea works, within the limits of the assumptions made. I would see this work accepted. |
ICLR | Title
Rank the Episodes: A Simple Approach for Exploration in Procedurally-Generated Environments
Abstract
Exploration under sparse reward is a long-standing challenge of model-free reinforcement learning. The state-of-the-art methods address this challenge by introducing intrinsic rewards to encourage exploration in novel states or uncertain environment dynamics. Unfortunately, methods based on intrinsic rewards often fall short in procedurally-generated environments, where a different environment is generated in each episode so that the agent is not likely to visit the same state more than once. Motivated by how humans distinguish good exploration behaviors by looking into the entire episode, we introduce RAPID, a simple yet effective episode-level exploration method for procedurally-generated environments. RAPID regards each episode as a whole and gives an episodic exploration score from both per-episode and long-term views. Those highly scored episodes are treated as good exploration behaviors and are stored in a small ranking buffer. The agent then imitates the episodes in the buffer to reproduce the past good exploration behaviors. We demonstrate our method on several procedurally-generated MiniGrid environments, a first-person-view 3D Maze navigation task from MiniWorld, and several sparse MuJoCo tasks. The results show that RAPID significantly outperforms the state-of-the-art intrinsic reward strategies in terms of sample efficiency and final performance. The code is available at https://github.com/daochenzha/rapid.
1 INTRODUCTION
Deep reinforcement learning (RL) is widely applied in various domains (Mnih et al., 2015; Silver et al., 2016; Mnih et al., 2016; Lillicrap et al., 2015; Andrychowicz et al., 2017; Zha et al., 2019a; Liu et al., 2020). However, RL algorithms often require a tremendous number of samples to achieve reasonable performance (Hessel et al., 2018). This sample efficiency issue becomes more pronounced in sparse reward environments, where the agent may take an extremely long time before bumping into a reward signal (Riedmiller et al., 2018). Thus, how to efficiently explore the environment under sparse reward remains an open challenge (Osband et al., 2019).
To address the above challenge, many exploration methods have been investigated and demonstrated to be effective on hard-exploration environments. One of the most popular techniques is to use intrinsic rewards to encourage exploration (Schmidhuber, 1991; Oudeyer & Kaplan, 2009; Guo et al., 2016; Zheng et al., 2018; Du et al., 2019). The key idea is to give an intrinsic bonus based on uncertainty, e.g., assigning higher rewards to novel states (Ostrovski et al., 2017), or using the prediction error of a dynamics model as the intrinsic reward (Pathak et al., 2017). While many intrinsic reward methods have demonstrated superior performance on hard-exploration environments, such as Montezuma’s Revenge (Burda et al., 2018b) and Pitfall! (Badia et al., 2019), most of the previous studies use the same singleton environment for training and testing, i.e., the agent aims to solve the same environment in each episode. However, recent studies show that the agent trained in this way is susceptible to overfitting and may fail to generalize to even a slightly different environment (Rajeswaran et al., 2017; Zhang et al., 2018a). To deal with this issue, a few procedurally-generated environments
are designed to test the generalization of RL, such as (Beattie et al., 2016; Chevalier-Boisvert et al., 2018; Nichol et al., 2018; Côté et al., 2018; Cobbe et al., 2019; Küttler et al., 2020), in which the agent aims to solve the same task, but a different environment is generated in each episode.
Unfortunately, encouraging exploration in procedurally-generated environments is a very challenging task. Many intrinsic reward methods, such as count-based exploration and curiosity-driven exploration, often fall short in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020). Figure 1a shows a motivating example of why count-based exploration is less effective in procedurally-generated environments. We consider a 1D grid-world, where the agent (red) needs to explore the environment to reach the goal within 7 timesteps, that is, the agent needs to move right in all the steps. While count-based exploration assigns reasonable intrinsic rewards in the singleton setting, it may generate misleading intrinsic rewards in the procedurally-generated setting because visiting a novel state does not necessarily mean a good exploration behavior. This issue becomes more pronounced in more challenging procedurally-generated environments, where the agent is not likely to visit the same state more than once. To tackle the above challenge, this work studies how to reward good exploration behaviors that can generalize to different environments.
When a human judges how well an agent explores the environment, she often views the agent at the episode level instead of the state level. For instance, one can easily tell whether an agent has explored a maze well by looking into the coverage rate of the current episode, even if the current environment is different from the previous ones. In Figure 1b, we can confidently tell that the episode on the right-hand side is a good exploration behavior because the agent achieves a 0.875 coverage rate. Similarly, we can safely conclude that the episode on the left-hand side does not explore the environment well due to its low coverage rate. Motivated by this, we hypothesize that an episode-level exploration score could be a more general criterion to distinguish good exploration behaviors than state-level intrinsic rewards in the procedurally-generated setting.
To verify our hypothesis, we propose exploration via Ranking the Episodes (RAPID). Specifically, we identify good exploration behaviors by assigning episodic exploration scores to past episodes. To efficiently learn the past good exploration behaviors, we use a ranking buffer to store the highly-scored episodes. The agent then learns to reproduce the past good exploration behaviors with imitation learning. We demonstrate that RAPID significantly outperforms the state-of-the-art intrinsic reward methods on several procedurally-generated benchmarks. Moreover, we present extensive ablations and analysis for RAPID, showing that RAPID can explore the environment well even without extrinsic rewards and could be generally applied in tasks with continuous state/action spaces.
2 EXPLORATION VIA RANKING THE EPISODES
An overview of RAPID is shown in Figure 2. The key idea is to assign episodic exploration scores to past episodes and store those highly-scored episodes into a small buffer. The agent then treats the episodes in the buffer as demonstrations and learns to reproduce these episodes with imitation learning. RAPID encourages the agent to explore the environment by reproducing the past good
exploration behaviors. This section introduces how we define the episodic exploration score for procedurally-generated environments and how the proposed method can be combined with state-of-the-art reinforcement learning agents.
2.1 EPISODIC EXPLORATION SCORE
Each episode is scored from both local and global perspectives. The local score is a per-episode view of the exploration behavior, which measures how well the current episode itself explores the environment. The global score provides a long-term and historical view of exploration, i.e., whether the current episode has explored the regions that are not well explored in the past. Additionally, we consider extrinsic reward, which is the episodic reward received from the environment. It is an external criterion of exploration since a high extrinsic reward often suggests good exploration in sparse environments. The overall episodic exploration score is obtained by the weighted sum of the above three scores. We expect that these scores can model different aspects of exploration behaviors and they are complementary in procedurally-generated environments.
Local Score. The intuition of the local exploration bonus is that the agent is expected to visit as many distinct states as possible in one episode. In our preliminary experiments on MiniGrid, we observe that many poor exploration episodes repeatedly visit the same state. As a result, the agent may get stuck in a well-known region but never reach out to unexplored regions. From a local view, we can quantify this exploration behavior based on the diversity of the visited states. Specifically, we define the local exploration score as
$$S_{local} = \frac{N_{distinct}}{N_{total}} \qquad (1)$$
where Ntotal is the total number of states in the episode, and Ndistinct is the number of distinct states in the episode. Intuitively, optimizing the local score will encourage the agent to reach out to the unexplored regions in the current episode. In an ideal case, the episodes that never visit the same state twice will receive a local bonus of 1. There could be various ways to extend the above definition to continuous state space. In this work, we empirically define the local score in continuous state space as the mean standard deviation of all the states in the episode:
$$S_{local} = \frac{\sum_{i=1}^{l} \mathrm{std}(s_i)}{l} \qquad (2)$$
where l is the dimension of the state, and std(si) is the standard deviation along the i-th dimension of the states in the episode. Intuitively, this score encourages the episodes that visit diverse states.
Global Score. The global score is designed to model long-term exploration behavior. While a different environment will be generated in each episode in the procedurally-generated setting, the environments in different episodes usually share some common characteristics. For example, in the
Algorithm 1 Exploration via Ranking the Episodes (RAPID)
1: Input: training steps S, buffer size D, RL rollout steps T
2: Initialize the policy πθ and the ranking buffer B
3: for iteration = 1, 2, ... until convergence do
4:     Execute πθ for T timesteps
5:     Update πθ with the RL objective (also update value functions, if any)
6:     for each generated episode τ do
7:         Compute the episodic exploration score Sτ based on Eq. (4)
8:         Assign the score Sτ to all the state-action pairs in τ and store them in B
9:         Rank the state-action pairs in B based on their exploration scores
10:        if B.length > D then
11:            Discard the state-action pairs with the lowest scores so that B.length = D
12:        end if
13:        for step = 1, 2, ..., S do
14:            Sample a batch from B and train πθ on it with behavior cloning
15:        end for
16:    end for
17: end for
MiniGrid MultiRoom environment (Figure 3a), although a different maze will be generated in a new episode, the basic building blocks (e.g., walls, doors, etc.), the number of rooms, and the size of each room will be similar. From a global view, the agent should explore the regions that are not well explored in the past. Motivated by count-based exploration, we define the global score as
$$S_{global} = \frac{1}{N_{total}} \sum_{s} \frac{1}{\sqrt{N(s)}} \qquad (3)$$
where N(s) is the state count of s throughout the training process, that is, the global score is the mean count-based intrinsic reward of all the states in the episode.
Episodic Exploration Score. Suppose Sext is the total extrinsic reward of an episode received from the environment. The episodic exploration score is obtained by the weighted sum of the extrinsic reward Sext, the local score Slocal, and the global score Sglobal:
$$S = w_0 S_{ext} + w_1 S_{local} + w_2 S_{global} \qquad (4)$$
where w0, w1, and w2 are hyperparameters. We speculate that the three scores are complementary and the optimal w0, w1, and w2 could vary in different environments. For example, a higher w1 will give a higher weight to the per-episode view of exploration behavior. A higher w2 will focus more on whether the current episode has explored the past unexplored regions. Nevertheless, we find that a single set of w0, w1, and w2 works well across many tasks.
Comparison with Prior Work. Several papers have studied episode-level feedback to improve the sample efficiency of RL. To the best of our knowledge, the existing work mainly focuses on learning a meta-policy based on episodic rewards (Xu et al., 2018; Zha et al., 2019b), or imitating the episodes with high episodic rewards (Guo et al., 2018; Srivastava et al., 2019). However, in very sparse environments, the agent is unlikely to receive a non-zero reward without exploration. These methods are not suitable for very sparse environments since episodic rewards could always be zero. In this work, we propose local and global episodic scores to quantify the exploration behaviors and encourage the agent to visit distinct states from per-episode and long-term views.
2.2 REPRODUCING PAST GOOD EXPLORATION BEHAVIORS WITH IMITATION LEARNING
A straightforward idea is to treat the episodic exploration score as a reward signal and assign it to the last timestep of the episode. In this way, the agent can be updated with any reinforcement learning objective. However, this strategy is not very effective, owing to three potential limitations. First, the reward remains sparse so that the reinforcement learning agents may not be able to effectively exploit the reward signals. Second, the intrinsic exploration bonus may introduce bias in the objective. While the local score and global score may encourage exploration at the beginning stage, they are
not true objectives and may degrade the final performance. Third, it only uses a good episode once. In fact, many past good episodes can be exploited many times to improve sample efficiency.
To overcome the above limitations, we propose to use a small ranking buffer to store past good state-action pairs and then use an imitation objective to enhance exploration. Specifically, we assign the same episodic exploration score to all the state-action pairs in an episode and rank all the state-action pairs based on their scores. In addition to the original RL objective, we employ behavior cloning, the simplest form of imitation learning, to encourage the agent to reproduce the state-action pairs in the buffer. This simple ranking buffer can effectively exploit the past good experiences since a good state-action pair could be sampled more than once. The procedure is summarized in Algorithm 1. A possible variant is to keep the entire episodes in the buffer. We have tried this idea and do not observe a clear difference between this variant and Algorithm 1 (see Appendix H for more discussions).
3 EXPERIMENTS
The experiments are designed to answer the following research questions: RQ1: how does RAPID compare with the state-of-the-art intrinsic reward methods on benchmark procedurally-generated environments (Section 3.2)? RQ2: how will RAPID perform if removing one of the scores (i.e., the local score, the global score, and the extrinsic reward) or the buffer (Section 3.3)? RQ3: how will the hyperparameters impact the performance of RAPID (Section 3.3)? RQ4: can RAPID explore the environment without extrinsic reward (Section 3.3)? RQ5: how will RAPID perform with larger and more challenging grid-worlds (Section 3.3)? RQ6: can RAPID generalize to a 3D navigation task and continuous state/action spaces, such as MuJoCo (Section 3.4)?
3.1 ENVIRONMENTS AND BASELINES
Figure 3 shows the rendering of the procedurally-generated environments used in this work. Following the previous studies of exploration in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020), we mainly focus on MiniGrid environments (Chevalier-Boisvert et al., 2018), which provide procedurally-generated grid-worlds. We consider two types of hard-exploration tasks: MultiRoom-NX-SY, where X and Y indicate the number of rooms and the room size, respectively, and KeyCorridor-SX-RY, where X and Y indicate the room size and the number of rows, respectively. A large X or Y suggests a larger environment, which will be more difficult to explore. The agent has a partial view of the environment and needs to navigate to the green goal block or the red ball under sparse rewards. These tasks are challenging because the agent needs to explore many rooms in order to receive a reward signal, so that standard RL algorithms will typically fail. To test whether RAPID can generalize to continuous state space, we consider MiniWorld Maze (Chevalier-Boisvert, 2018), whose observation space is 60 × 80 images. Similarly, the agent in MiniWorld Maze has a 3D partial view of the environment and needs to navigate to the red box. The maze is procedurally-generated, i.e., a different maze will be generated in each episode. We further study RAPID in continuous state and action spaces on sparse MuJoCo tasks (Todorov et al., 2012; Brockman et al., 2016). More details of the above environments are described in Appendix B.
While exploration has been extensively studied in the literature, very few methods are designed for procedurally-generated environments. To better understand the performance of RAPID, we consider the baselines in the following categories. Traditional Methods: we consider several popular methods designed for singleton environments, including count-based exploration (COUNT) (Bellemare et al., 2016), Random Network Distillation Exploration (RANDOM) (Burda et al., 2018b),
and curiosity-driven exploration (CURIOSITY) (Pathak et al., 2017). Methods for Procedurally-Generated Environments: we consider two state-of-the-art exploration methods, namely Impact-Driven Exploration (RIDE) (Raileanu & Rocktäschel, 2020) and exploration with Adversarially Motivated Intrinsic Goals (AMIGO) (Campero et al., 2020). Self-Imitation Methods: self-imitation learning (SIL) (Oh et al., 2018) also exploits past good experiences to encourage exploration. We further include PPO (Schulman et al., 2017) as a reinforcement learning baseline. All the algorithms are run for the same number of timesteps for a fair comparison. More details are provided in Appendix A.
3.2 MINIGRID RESULTS
To study RQ1, we plot the learning curves of RAPID and the baselines on the MiniGrid benchmarks in Figure 4. We can make three observations. First, RAPID performs significantly better than the state-of-the-art methods designed for singleton or procedurally-generated environments in terms of sample efficiency and final performance. For example, on MultiRoom-N7-S8, RAPID is at least 10 times more sample-efficient than RIDE. On KeyCorridor-S4-R3, RAPID is the only algorithm that successfully reaches the goal within 20 million timesteps. Second, the traditional intrinsic reward methods perform well on easy procedurally-generated environments but poorly on hard-exploration tasks. For example, COUNT performs as well as RAPID on KeyCorridor-S3-R2 but fails on more challenging tasks, such as MultiRoom-N7-S8. This suggests that traditional methods may fall short in challenging procedurally-generated environments, which is consistent with the results in previous work (Raileanu & Rocktäschel, 2020). Third, RAPID outperforms SIL across all the tasks. Recall that both RAPID and SIL learn to reproduce past good experiences. While SIL reproduces experiences with high rewards, RAPID additionally considers the local and global exploration scores. This verifies the effectiveness of the episodic exploration scores.
3.3 ABLATIONS AND ANALYSIS
To study RQ2, we perform ablation studies on RAPID. Specifically, we remove one of the local score, the global score, and the extrinsic reward, and compare each ablation with RAPID. Besides, we consider a variant that directly uses the episodic exploration score as the reward signal without the buffer. To demonstrate the effectiveness of the ranking, we further include a baseline that uses the replay buffer without ranking. Table 1 shows the results. We observe that the local score contributes the most to the performance. While most existing exploration methods reward exploration globally, the global score may not account for generalization, which may explain the unsatisfactory performance of COUNT and CURIOSITY. While the global score seems to play a relatively small role compared to the local score in most MiniGrid environments, it plays an important role in KeyCorridor-S4-R3. A possible reason is that the local score is too weak a signal in KeyCorridor-S4-R3, and the global score may help alleviate this issue since it provides some historical information. We also observe that the agent fails in almost all the tasks without the buffer or the ranking. This suggests that the ranking buffer is an effective way to exploit past good exploration behaviors.
To study RQ3, we plot the learning curves with different numbers of training steps S and buffer sizes D in Figures 5a and 5b. We observe that a larger number of training steps usually leads to faster learning at the beginning stage but may result in sub-optimal final performance. For the buffer size, we observe that a large buffer leads to slower learning but more stable performance. We speculate that there is a trade-off between learning speed and final performance. In our experiments, we set the number of training steps S to 5 and the buffer size D to 10000 across all the tasks.
To investigate RQ4, we remove the extrinsic rewards and show the learning curves of RAPID and the baselines in Figure 5c, with the corresponding local exploration scores in Figure 5d. We make two interesting observations. First, without extrinsic rewards, RAPID can deliver reasonable performance. For example, on MultiRoom-N12-S10, RAPID achieves nearly 0.5 average return with pure exploration, compared with around 0.6 when using extrinsic rewards. We look into the episodes generated by RAPID with pure exploration and observe that, in most cases, the agent is able to reach the goal but struggles to find the optimal path to it. This is expected because the extrinsic reward is 1 if the agent reaches the goal, minus a penalty for the timesteps spent, so it encourages the agent to choose the shortest path to the goal. For the other baselines, we observe that only RIDE and AMIGO can occasionally reach the goal. Second, we observe that some baselines, such as RIDE, indirectly maximize the local exploration scores. For example, the local exploration score increases for RIDE on MultiRoom-N12-S10, whereas the local score of RAPID drops after about 10 million timesteps. This suggests that the episode-level score may better align with good exploration behaviors. A possible explanation for the superiority of RAPID is that it is designed to directly optimize the episodic exploration score.
To answer RQ5, we focus on MultiRoom-NX-SY and vary X and Y to make the environments more challenging. Specifically, we fix Y to 4 and vary X in the left-hand side of Figure 6, and fix X to 4 and vary Y in the middle of Figure 6. Compared with RIDE, RAPID delivers stronger performance on the more challenging grid-worlds.
3.4 MINIWORLD MAZE AND SPARSE MUJOCO RESULTS
For RQ6, the learning curves on the 3D Maze are shown in the right-hand side of Figure 6. RAPID performs significantly better than the baselines. We also conduct experiments on a subset of sparse MuJoCo tasks (Figure 7). The results show that RAPID improves over PPO and SIL. Interestingly, RAPID achieves more than 200 average return on Swimmer, which surpasses some previous results reported on the dense-reward counterpart (Duan et al., 2016; Henderson et al., 2018; Lai et al., 2020). Note that the MuJoCo environments are singleton; we use them solely to test whether RAPID can deal with continuous state/action spaces. We have also tried adapting Swimmer to be procedurally-generated and observe similar results (Appendix I).
4 RELATED WORK
Count-Based and Curiosity-Driven Exploration. These two classes of intrinsic motivations are the most popular and have proven effective in the literature. Count-based exploration encourages the agent to visit novel states; it was first studied in the tabular setting (Strehl & Littman, 2008) and has recently been extended to more complex state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Martin et al., 2017; Machado et al., 2018; Osband et al., 2019). Curiosity-driven exploration learns the environment dynamics to encourage exploration (Stadie et al., 2015; Pathak et al., 2017; Sukhbaatar et al., 2018; Burda et al., 2018a). However, these methods are designed for singleton environments and often struggle to generalize to procedurally-generated environments (Raileanu & Rocktäschel, 2020).
Exploration for Procedurally-Generated Environments. Several recent studies have discussed the generalization of reinforcement learning (Rajeswaran et al., 2017; Zhang et al., 2018a;b; Choi et al., 2018) and designed procedurally-generated environments to test it (Beattie et al., 2016; Nichol et al., 2018; Küttler et al., 2020). More recent papers show that traditional exploration methods fall short in procedurally-generated environments and address this issue with new exploration methods (Raileanu & Rocktäschel, 2020; Campero et al., 2020). This work studies exploration bonuses from a new, episode-level perspective and achieves significantly better performance than previous methods on procedurally-generated benchmarks.
Experience Replay Buffer and Self-Imitation Learning. The experience replay mechanism is widely used and was initially designed to stabilize deep reinforcement learning (Lin, 1992; 1993; Mnih et al., 2015; Zhang & Sutton, 2017; Zha et al., 2019b; Fedus et al., 2020). Self-Imitation Learning extends the experience replay buffer and uses it to store past good experiences (Oh et al., 2018; Guo et al., 2018; 2019; Gangwani et al., 2018). However, Self-Imitation Learning only exploits highly-rewarded experiences and is thus not suitable for very sparse environments. Our work incorporates local and global scores to encourage past good exploration behaviors in sparse procedurally-generated environments.
Episodic Memory. Inspired by the episodic memory of animals, some studies propose to use episodic memory to enhance sample efficiency (Blundell et al., 2016; Pritzel et al., 2017). The key idea is to memorize and replay good episodic experiences. More recently, episodic memory has been used to generate intrinsic rewards to guide exploration (Savinov et al., 2018). Never Give Up (NGU) (Badia et al., 2019) further considers local and global novelties in its intrinsic rewards. Our work differs from NGU in that we approach sample efficiency from a different angle, treating each episode as a whole and using a ranking mechanism to select good experiences. Thus, good experiences can be exploited multiple times to improve sample efficiency.
5 LIMITATIONS AND DISCUSSIONS
RAPID is designed to study our hypothesis that imitating episode-level good exploration behaviors can encourage exploration in procedurally-generated environments. In the presented algorithm, we have made some simple design choices. In order to make RAPID more competitive with state-of-the-art methods in more complicated environments, we highlight in this section some limitations of RAPID and potential research directions.
First, in our work, the local and global scores are mainly calculated on low-dimensional state spaces. We have not yet tested the scores on more complex procedurally-generated environments, such as the ProcGen benchmark (Cobbe et al., 2019), TextWorld (Côté et al., 2018), and the NetHack Learning Environment (Küttler et al., 2020), some of which require high-dimensional image inputs. In this context, count-based scores may not be well-suited. Learning a state embedding (Raileanu & Rocktäschel, 2020) and pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017) are potential ways to address this issue. Moreover, high extrinsic rewards may not necessarily indicate good exploration behaviors in these environments. The applicability of RAPID in these environments requires future research.
Second, the ranking buffer greedily selects the highly-scored state-action pairs, which is key to boosting the performance. However, with this greedy strategy, some highly-ranked state-action pairs may never be forgotten even when they are out-of-date, which may introduce bias. As shown in Figure 5b, a larger buffer size leads to more stable performance but sacrifices sample efficiency. In more complicated environments, selecting a proper buffer size could be non-trivial. Forgetting mechanisms (Novati & Koumoutsakos, 2019) and meta-learning strategies (Zha et al., 2019b) are potential directions for better managing the buffer.
Third, in our implementation, we employ behavior cloning, the simplest form of imitation learning, to train the policy. In recent years, many more advanced imitation learning methods have been proposed (Ho & Ermon, 2016; Hester et al., 2017). In particular, the idea of inverse reinforcement learning (Abbeel & Ng, 2004) is to learn reward functions based on the demonstrations. An interesting future direction is to study whether we can derive intrinsic rewards based on the experiences in the buffer via inverse reinforcement learning. Specifically, is there a connection between RAPID and intrinsic rewards from the view of inverse reinforcement learning? Can RAPID be combined with intrinsic rewards to improve the performance? We hope that RAPID can facilitate future research to understand these questions.
6 CONCLUSIONS AND FUTURE WORK
This work studies how we can reward good exploration behaviors in procedurally-generated environments. We hypothesize that the episode-level exploration score could be a more general criterion to distinguish good exploration behaviors. To verify our hypothesis, we design RAPID, a practical algorithm that learns to reproduce past good exploration behaviors with a ranking buffer. Experiments on benchmark hard-exploration procedurally-generated environments show that RAPID significantly outperforms state-of-the-art intrinsic reward methods in terms of sample efficiency and final performance. Further analysis shows that RAPID has robust performance across different configurations, even without extrinsic rewards. Moreover, experiments on a 3D navigation task and a subset of sparse MuJoCo tasks show that RAPID can generalize to continuous state/action spaces, demonstrating the promise of rewarding good episode-level exploration behaviors to encourage exploration. In the future, we will extend our idea and test it in more challenging procedurally-generated environments. We will also explore the possibility of combining the episode-level exploration bonus with intrinsic rewards. Lastly, we will try other imitation learning techniques beyond behavior cloning.
A HYPERPARAMETERS AND NEURAL NETWORK ARCHITECTURE
For the proposed RAPID, the common hyperparameters are summarized in Table 2. RAPID is based on the PPO implementation from OpenAI baselines1. The rollout length nstep is 128 for all MiniGrid and MuJoCo environments, and 512 for MiniWorld-Maze-S5. For the MuJoCo environments, we report the results with only the imitation loss, since we find that the PPO policy loss harms performance in the episodic reward setting. The learning rate is 10^-4 for all MiniGrid environments and MiniWorld-Maze-S5, and 5 × 10^-4 for the MuJoCo environments. Since MiniWorld-Maze-S5 and the MuJoCo environments have continuous state spaces, we set w2 = 0. In practice, in MiniWorld-Maze-S5, counting states usually leads to memory errors because the same state is unlikely to be encountered twice (COUNT is therefore excluded from this comparison due to memory issues). Note that it is possible to use pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017), which we will study in future work. For MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, and MiniWorld-Maze-S5, the update frequency of imitation learning is linearly annealed to 0 throughout the training process. For the other environments, imitation learning is performed after each episode. We keep all other PPO hyperparameters at their defaults and use the default CNN architecture provided in OpenAI baselines for MiniWorld-Maze-S5 and a 64-64 MLP for the other environments. The PPO baseline uses the same hyperparameters. We summarize the state space of each environment and how the local and global scores are computed in Table 3.
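For quick reference, the settings stated above and in Section 3.3 can be collected into a configuration sketch; the dictionary below merely restates those numbers, and the key names are illustrative rather than taken from the released code:

```python
# Key names are illustrative; values restate the settings stated in the text.
RAPID_SETTINGS = {
    "shared": {"sgd_steps_S": 5, "buffer_size_D": 10000},
    "minigrid": {"nstep": 128, "lr": 1e-4},
    "miniworld_maze_s5": {"nstep": 512, "lr": 1e-4, "w2": 0.0},
    "mujoco": {"nstep": 128, "lr": 5e-4, "w2": 0.0},
}
```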
For RANDOM, we implement the two networks with 64-64 MLPs. For CURIOSITY and RIDE, we use a 64-64 MLP for the state embedding model, the forward dynamics model, and the inverse dynamics model, respectively. However, despite an extensive hyperparameter search, we are not able to reproduce the RIDE results in the MiniGrid environments, even with exactly the same neural architecture and hyperparameters as suggested in (Raileanu & Rocktäschel, 2020). Thus, we use the authors' implementation2 (Raileanu & Rocktäschel, 2020), which is well-tuned on the MiniGrid tasks. Our reported result
1https://github.com/openai/baselines 2https://github.com/facebookresearch/impact-driven-exploration
is on par with that reported in (Raileanu & Rocktäschel, 2020). For SIL and AMIGO, we use the authors' implementations3,4 with the recommended hyperparameters.
For methods based on intrinsic rewards, i.e., RIDE, AMIGO, CURIOSITY, RANDOM, and COUNT, we search the intrinsic reward coefficient over {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. For RIDE, the intrinsic reward coefficient is 0.1 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0, and MiniWorld-Maze-S5, and 0.5 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0, and MiniGrid-MultiRoom-N12S10-v0. For AMIGO and CURIOSITY, the intrinsic reward coefficient is 0.1 for all the environments. For COUNT, the intrinsic reward coefficient is 0.005 for all the environments. For RANDOM, the intrinsic reward coefficient is 0.001 for all the environments. We also tune the entropy coefficient of RIDE over {0.001, 0.0005}, as suggested by the original paper. The entropy coefficient is 0.0005 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0, and MiniWorld-Maze-S5, and 0.001 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0, and MiniGrid-MultiRoom-N12S10-v0.
The weights w0, w1, and w2 are selected as follows. We first run RAPID on MiniGrid-MultiRoom-N7S8-v0. We choose this environment because it is neither too hard nor too easy, so we expect it to be representative of the considered MiniGrid environments. We fix w0 to 1 and select w1 and w2 from {10^-4, 10^-3, 10^-2, 10^-1, 10^0, 10^1, 10^2, 10^3, 10^4}, that is, 9 × 9 = 81 combinations. Note that we can fix w0 because only the relative ranking matters in the ranking buffer. The selected w1 and w2 are then directly used in the other environments without further tuning. We further conduct a sensitivity analysis on MiniGrid-MultiRoom-N7S8-v0 in Figure 8 and observe that RAPID performs well across a very wide range of hyperparameter choices.
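A minimal sketch of this grid search follows, assuming the per-episode scores s_ext, s_local, and s_global have already been computed; since only the relative ranking matters, w0 is fixed to 1:

```python
import itertools

w_grid = [10.0 ** k for k in range(-4, 5)]  # 10^-4, 10^-3, ..., 10^4

def episodic_score(s_ext, s_local, s_global, w1, w2, w0=1.0):
    # Weighted sum of extrinsic reward, local score, and global score
    return w0 * s_ext + w1 * s_local + w2 * s_global

# 9 x 9 = 81 candidate (w1, w2) pairs, evaluated on MiniGrid-MultiRoom-N7S8-v0
candidates = list(itertools.product(w_grid, w_grid))
```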
B ENVIRONMENTS
B.1 MINIGRID ENVIRONMENTS
MiniGrid5 is a suite of lightweight and fast grid-world environments with OpenAI Gym interfaces. The environments are partially observable, with a grid size of N × N. Each tile in the grid can hold at most one object. The possible objects are wall, floor, lava, door, key, ball, box, and goal. Each object has an associated color. The agent can carry at most one object (a ball or a key). To open a locked door, the agent must use a key. Most of the environments in
3https://github.com/junhyukoh/self-imitation-learning 4https://github.com/facebookresearch/adversarially-motivated-intrinsic-goals 5https://github.com/maximecb/gym-minigrid
MiniGrid are procedurally-generated, i.e., a different grid is sampled in each episode. Figure 9 shows the grids in four different episodes. The procedurally-generated nature makes RL training challenging because the agents need to learn skills that generalize to different grids. The environments can easily be modified to create different difficulty levels. For example, we can modify the number of rooms and the size of each room to create hard-exploration or easy-exploration environments. The flexibility of the MiniGrid environments enables us to conduct a systematic comparison of algorithms under different difficulty levels.
The original observations of MiniGrid are dictionaries consisting of an image field, which provides a partially observable view of the environment, and a mission field, which describes the goal in text. In our experiments, we use ImgObsWrapper, which keeps only the image field. The environment uses a compact encoding with 3 input values per visible grid cell. Note that the cells behind walls or unopened doors are invisible.
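As a hedged sketch, creating a MiniGrid environment and applying ImgObsWrapper looks roughly like this (we use a registered environment id here; the custom room counts and sizes used in this paper may require registering new variants):

```python
import gym
import gym_minigrid  # importing the package registers the MiniGrid-* ids
from gym_minigrid.wrappers import ImgObsWrapper

env = ImgObsWrapper(gym.make("MiniGrid-MultiRoom-N6-v0"))
obs = env.reset()   # only the image field remains
print(obs.shape)    # (7, 7, 3): 3 values per visible grid cell in the partial view
```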
The possible actions in MiniGrid are (1) turn left, (2) turn right, (3) move forward, (4) pick up an object, (5) drop the carried object, (6) open doors/interact with objects, and (7) done. The agent will remain in the same state if the action is not legal. For example, if there is no object in front of the agent, the agent will remain in the same state when performing action (4) pick up an object.
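For reference, these actions correspond to gym-minigrid's Actions enum; the sketch below assumes the library's standard ordering:

```python
from gym_minigrid.minigrid import MiniGridEnv

# Expected: left=0, right=1, forward=2, pickup=3, drop=4, toggle=5, done=6
for action in MiniGridEnv.Actions:
    print(action.value, action.name)
```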
We focus on MultiRoom-NX-SY and KeyCorridor-SX-RY, where X and Y are hyperparameters specifying the number of rooms or the room sizes in the environments. With larger X and Y, the environments become more difficult to solve due to the sparse rewards. Figure 10 shows the environments (with different difficulties) used in this work. In MultiRoom-NX-SY, the agent needs to navigate to the green tile in the last room. If the agent successfully reaches the goal, it receives a reward of 1 minus a penalty for the timesteps spent. In KeyCorridor-SX-RY, the agent needs to pick up the key, use it to open the locked door, and pick up the ball. Similarly, the agent receives a reward of 1 minus a timestep penalty if it picks up the ball. Both environments are extremely difficult to solve with RL alone due to the sparse rewards.
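Concretely, the MiniGrid success reward with its timestep penalty can be written as follows, mirroring (to the best of our knowledge) the _reward() helper in gym-minigrid:

```python
def minigrid_reward(step_count: int, max_steps: int) -> float:
    # 1 for an immediate success, decaying toward 0.1 as the step limit is approached
    return 1 - 0.9 * (step_count / max_steps)
```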
B.2 MINIWORLD MAZE ENVIRONMENT
MiniWorld6 is a minimalistic 3D interior environment simulator. Similar to MiniGrid, we can easily create environments of different difficulty levels. We focus on MiniWorld-Maze, where the agent must navigate to the goal through a procedurally-generated maze. The agent has a first-person, partially observable view of the environment. This environment is extremely difficult to solve due to the sparse rewards, the long horizon, and the procedurally-generated nature of the environment.
6https://github.com/maximecb/gym-miniworld
In this work, we focus on MiniWorld-Maze-S5, a variant of the MiniWorld Maze environment with 5 × 5 tiles. Figure 11 shows the top view of the mazes in four different episodes. In each episode, the agent and the goal are initialized in the top-left and bottom-right corners of the map, respectively. In this way, we ensure that the agent and the goal are far enough apart that the agent cannot easily see the goal without exploration. A random maze is generated in each episode. There are three possible actions: (1) move forward, (2) turn left, and (3) turn right. The agent moves 0.4 × tile_length per forward action, where tile_length is the side length of one tile, and rotates 22.5 degrees per turn. The time budget of each episode is 600 timesteps. The agent does not receive any positive reward if it cannot reach the goal within the time budget.
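A minimal sketch of instantiating the maze; we use the library's registered id here, while the S5 variant described above is a modification of it:

```python
import gym
import gym_miniworld  # importing the package registers the MiniWorld-* ids

env = gym.make("MiniWorld-Maze-v0")
obs = env.reset()
print(obs.shape)  # (60, 80, 3) first-person RGB observation
```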
B.3 SPARSE MUJOCO ENVIRONMENTS
MuJoCo is a physics engine for continuous control (Todorov et al., 2012). Figure 12 shows the rendering of the four MuJoCo tasks used in our work. The original MuJoCo environments use dense rewards, i.e., a well-defined reward is given at each timestep. However, in many scenarios, well-defined dense rewards may not be available. We therefore consider a variant of the MuJoCo tasks that uses only an episodic reward: the original rewards are accumulated, and a single episodic reward is given at the final timestep, while the agent receives a reward of 0 at all other timesteps. This sparse setting makes the tasks much more challenging to solve, and contemporary RL algorithms usually suffer from the sparse rewards and deliver unsatisfactory performance. We use these sparse variants to study whether RAPID can effectively exploit the sparse rewards.
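This sparsification can be implemented as a thin Gym wrapper; the sketch below (the wrapper name is ours) accumulates the dense rewards and releases their sum only at the final timestep:

```python
import gym

class EpisodicRewardWrapper(gym.Wrapper):
    """Withhold dense rewards and emit their sum only at the final timestep."""

    def reset(self, **kwargs):
        self._return = 0.0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._return += reward
        sparse_reward = self._return if done else 0.0
        return obs, sparse_reward, done, info

# Example: env = EpisodicRewardWrapper(gym.make("Swimmer-v2"))
```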
C ABLATIONS
Figure 13 shows the learning curves of RAPID against the different ablations. Without the local score, we observe a significant drop in sample efficiency and final performance on the challenging environments, i.e., MultiRoom-N7-S8, MultiRoom-N10-S10, MultiRoom-N12-S10, KeyCorridor-S3-R3, and KeyCorridor-S4-R3. This suggests that the proposed local score plays an important role in encouraging exploration. We also observe that, without the buffer, the agent fails to learn a good policy; naively using the proposed scores as intrinsic rewards harms performance. Using the extrinsic reward and the global score to rank the episodes also contributes in the challenging environments, such as KeyCorridor-S4-R3. Therefore, the three proposed scoring methods may be complementary and play different roles in different environments.
D FULL ANALYSIS RESULTS
D.1 IMPACT OF THE NUMBER OF UPDATE STEPS
D.2 IMPACT OF BUFFER SIZE
D.3 PURE EXPLORATION ON MINIGRID
D.4 LOCAL EXPLORATION SCORE OF PURE EXPLORATION ON MINIGRID
E LEARNING CURVES WITH MORE ROOMS
F LEARNING CURVES WITH LARGE ROOMS
G ABLATION AND ANALYSIS OF MINIWORLD-MAZE
H COMPARISON WITH STORING ENTIRE EPISODES IN THE BUFFER
A variant of RAPID is to keep the entire episodes in the buffer instead of state-action pairs. The intuition is that a particular state-action pair may be good in terms of exploration only in the context of the rest of the agent's trajectory in that episode. To test this variant, we force the buffer to keep entire episodes by allowing the episode at the end of the buffer to exceed the buffer size. For example, given a buffer size of 5000, if the length of the final episode is 160 and the number of state-action pairs in the buffer is 5120, we do not cut off the final episode and instead allow the buffer to store 5120 state-action pairs. In this way, we ensure that entire episodes are kept in the buffer. We run this experiment on MiniGrid-MultiRoom-N7-S8-v0 (Figure 25) and do not observe a clear difference in the learning curves.
I RESULTS ON PROCEDURALLY-GENERATED SWIMMER
The standard MuJoCo environments are singleton. Here, we modify the Swimmer-v2 environment to make it procedurally-generated. Specifically, we make the environment different in each episode by modifying the XML configuration file of the MuJoCo engine in every new episode. We consider two environmental properties, i.e., density and viscosity. Density is used to simulate lift and drag forces, which scale quadratically with velocity. Viscosity is used to simulate viscous forces, which scale linearly with velocity. In the standard Swimmer-v2 environment, the density and the viscosity are fixed to 4000 and 0.1, respectively. In the first experiment, we uniformly sample the density in [2000, 4000] in each new episode to make it procedurally-generated, denoted as Density-Swimmer. Similarly, in the second environment, we uniformly sample the viscosity in [0.1, 0.5], denoted as Viscosity-Swimmer. These two variants make RL training more challenging since the agent needs to learn a policy that can generalize to different densities and viscosities. As before, we accumulate the rewards to the final timestep of an episode to make the environments sparse. Both environments are provided in our code for reproducibility. The results are reported in Figure 26. We observe that all methods learn more slowly on these two variants. For example, RAPID only achieves around 190 return in Density-Swimmer, while it achieves more than 200 return in Swimmer. Similarly, SIL reaches around 100 return after around 5 × 10^6 timesteps in Density-Swimmer, while it achieves around 100 return after around 3 × 10^6 timesteps in Swimmer. Nevertheless, RAPID can discover very good policies even in these challenging procedurally-generated settings.
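The description above modifies the XML configuration file in every episode; an equivalent effect can be sketched in memory by resampling the MuJoCo option fields on reset. The wrapper below is our illustration, not the released code:

```python
import gym
import numpy as np

class DensitySwimmerWrapper(gym.Wrapper):
    """Resample the medium density on every reset, making each episode different."""

    def reset(self, **kwargs):
        # mjModel.opt exposes the physics options, including density and viscosity
        self.env.unwrapped.model.opt.density = np.random.uniform(2000, 4000)
        return self.env.reset(**kwargs)

# Example: env = DensitySwimmerWrapper(gym.make("Swimmer-v2"))
```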
J DISCUSSIONS OF ANNEALING
In our experiments, we find that annealing the imitation update frequency to 0 sometimes helps improve sample efficiency. However, in some environments, annealing has a negative effect on the final performance. To show the effect of annealing, we plot learning curves with and without annealing on MiniGrid-KeyCorridorS3R2-v0 and MiniGrid-MultiRoom-N12-S10-v0 in Figure 27. We observe that annealing helps in KeyCorridor-S3-R2. A possible explanation is that, after the agent reaches the goal, it has access to the reward signal, and the PPO objective alone is sufficient to train a good policy with the extrinsic rewards. The imitation loss may then harm performance, since it could be better to exploit rather than explore at this stage. However, in MultiRoom-N12-S10, annealing harms performance. In particular, the performance drops significantly in the final stage, when the update frequency of the imitation loss approaches zero. The likely reason is that MultiRoom-N12-S10 requires very deep exploration to find the goal. Even though the PPO agent can reach the goal, it may easily suffer from insufficient exploration when it encounters a new environment. As a result, the agent needs to keep exploring in order to generalize, and the imitation loss ensures that it does so.

| 1. What is the main contribution of the paper, and how does it address the problem of exploration in procedurally-generated environments?
2. What are the strengths of the proposed algorithm, RAPID, particularly in terms of its ability to converge faster and lead to higher performance than previous algorithms?
3. What are the weaknesses of the paper, especially regarding the utility of the global score and its ability to generalize across different environments?
4. How does the reviewer assess the applicability of RAPID to more complex environments, such as those with large or continuous state-action spaces?
5. Are there any concerns regarding the experimental setup, such as the discretization of the observation space in the MuJoCo experiments?

| Review
This paper presents RAPID, an exploration algorithm for procedurally-generated environments. The paper introduces an exploration score composed of a local and a global score. The local score is computed per episode: it is the fraction of distinct states visited during an episode. The global score keeps track of the exploratory effort of the agent over the whole training procedure. These two scores are then added to the environment reward and used to improve the agent's policy with imitation learning. RAPID is shown to converge faster and reach higher performance than previous exploration algorithms for procedurally-generated environments.
I found the paper well written and easy to follow. The authors provided many explanations and figures to illustrate the behaviour of their algorithm. The experiment section was great: it included many experiments on different environments as well as an extensive ablation study to showcase the benefits of RAPID. I think the key insight of this paper is using the local score as a reward for exploration and using it with imitation learning. It seems to be a great proxy for counts in the setting of procedurally-generated games in the tabular case. I was actually surprised by the performance of the local score; as a per-episode metric, it seemed to be a weak signal that would be hard to learn from. I'm guessing that the ranking of episodes is key here to make the algorithm work.
On the other hand, I was not convinced by the utility of the global score. First of all, it is not clear whether the global score is state-dependent; I assume it is not. The authors say: "the basic building blocks (e.g., walls, doors, etc.), the number of rooms, and the size of each room will be similar. From a global view, the agent should explore the regions that are not well explored in the past". Unfortunately, while similar states may be encountered, they will likely not be equal, so a metric defined using raw counts will often not be useful. "From a global view, the agent should explore the regions that are not well explored in the past": this sentence hints at the notion that should be more clearly discussed, namely generalization. While the local score only quantifies exploration within a single episode, the global score tries to leverage past experience to improve exploration; however, to do so as the environment changes constantly, some kind of generalization is necessary. I am not sure how a metric that is not aware of generalization can be helpful. This is also shown in the experiment section: the global score does not appear to significantly contribute to performance except on KeyCorridor-S4-R3 (any reason why?), and it is even disabled for the MuJoCo experiments. Another issue is that, contrary to some other algorithms like RIDE, RAPID does not easily extend to environments with large or continuous state-action spaces because obtaining meaningful counts in these environments is difficult; to be fair, this is an issue still faced in non-procedurally-generated environments. It would also be nice to know how RAPID was applied to the MuJoCo experiments: was the observation space discretized?
Overall I lean towards acceptance as I think that the paper presented a good idea with some solid experiments even though I have doubts regarding how applicable it can be to more complex environments.
A few typos:
- "the methods based on intrinsic rewards" -> "methods based on intrinsic rewards"
- "may take extremely long time" -> "an extremely long time"
- "exploration in sparse environments." -> "environments with sparse rewards"
- "could vary in diffident environments" -> "different environments"
2.2 REPRODUCING PAST GOOD EXPLORATION BEHAVIORS WITH IMITATION LEARNING
A straightforward idea is to treat the episodic exploration score as a reward signal and assign it to the last timestamp of the episode. In this way, the agent can be updated with any reinforcement learning objective. However, this strategy is not very effective with three potential limitations. First, the reward remains sparse so that the reinforcement learning agents may not be able to effectively exploit the reward signals. Second, the intrinsic exploration bonus may introduce bias in the objective. While the local score and global score may encourage exploration at the beginning stage, they are
not true objectives and may degrade the final performance. Third, it only uses a good episode once. In fact, many past good episodes can be exploited many times to improve sample efficiency.
To overcome the above limitations, we propose to use a small ranking buffer to store past good state-action pairs and then use imitation objective to enhance exploration. Specifically, we assign the same episodic exploration score to all the state-action pairs in an episode and rank all the stateaction pairs based on scores. In addition to the original RL objective, we employ behavior cloning, the simplest form of imitation learning, to encourage the agent to reproduce the state-action pairs in the buffer. This simple ranking buffer can effectively exploit the past good experiences since a good state-action pair could be sampled more than once. The procedure is summarized in Algorithm 1. A possible variant is to keep the entire episodes in the buffer. We have tried this idea and do not observe a clear difference between this variant and Algorithm 1 (see Appendix H for more discussions).
3 EXPERIMENTS
The experiments are designed to answer the following research questions: RQ1: how is RAPID compared with the state-of-the-art intrinsic reward methods on benchmark procedurally-generated environments (Section 3.2)? RQ2: how will RAPID perform if removing one of the scores (i.e., the local score, the global score, and the extrinsic reward) or the buffer (Section 3.3)? RQ3: how will the hyperparameters impact the performance of RAPID (Section 3.3)? RQ4: can RAPID explore the environment without extrinsic reward (Section 3.3)? RQ5: how will RAPID perform with larger and more challenging grid-world (Section 3.3)? RQ6: can RAPID generalize to 3D navigation task and continuous state/action space, such as MuJoCo (Section 3.4)?
3.1 ENVIRONMENTS AND BASELINES
Figure 3 shows the rendering of procedurally-generated environments used in this work. Following the previous studies of exploration in procedurally-generated environments (Raileanu & Rocktäschel, 2020; Campero et al., 2020), we mainly focus on MiniGrid environments (ChevalierBoisvert et al., 2018), which provide procedurally-generated grid-worlds. We consider two types of hard exploration tasks: MultiRoom-NX-SY, where X and Y indicate the number of rooms and the room size, respectively, and KeyCorridor-SX-RY, where X and Y indicate the room size and the number of rows, respectively. A large X or Y suggests a larger environment, which will be more difficult to explore. The agent has a partial view of the environment and needs to navigate the green block or the red ball under sparse rewards. These tasks are challenging because the agent needs to explore many rooms in order to receive a reward signal so that standard RL algorithms will typically fail. To test whether RAPID can generalize to continuous state space, we consider MiniWorld Maze (Chevalier-Boisvert, 2018), whose observation space is 60 × 80 images. Similarly, the agent in MiniWorld Maze has a 3D partial view of the environment and needs to navigate the red box. The maze is procedurally-generated, i.e., a different maze will be generated in each episode. We further study RAPID in continuous state and action spaces on sparse MuJoCo tasks (Todorov et al., 2012; Brockman et al., 2016). More details of the above environments are described in Appendx B.
While exploration has been extensively studied in the literature, very few methods are designed for procedurally-generated environments. To better understand the performance of RAPID, we consider the baselines in the following categories. Traditional Methods: we consider several popular methods designed for singleton environments, including count-based exploration (COUNT) (Bellemare et al., 2016), Random Network Distillation Exploration (RANDOM) (Burda et al., 2018b),
.
and curiosity-driven exploration (CURIOSITY) (Pathak et al., 2017). Methods for ProcedurallyGenerated Environments: we consider two state-of-the-art exploration methods, including ImpactDriven Exploration (RIDE) (Raileanu & Rocktäschel, 2020), and exploration with Adversarially Motivated Intrinsic Goals (AMIGO) (Campero et al., 2020). Self-Imitation Methods: self-imitation learning (SIL) (Oh et al., 2018) also exploits past good experiences to encourage exploration. We further include PPO (Schulman et al., 2017) as a reinforcement learning baseline. All the algorithms are run the same number timesteps for a fair comparison. More details are provided in Appendix A.
3.2 MINIGRID RESULTS
To study RQ1, we plot the learning curves of RAPID and the baselines on MiniGrid benchmarks in Figure 4. We can make three observations. First, RAPID performs significantly better than the state-of-the-art methods designed for singleton or procedurally-generated environments in terms of sample efficiency and final performance. For example, on MultiRoom-N7-S8, RAPID is at least 10 times more sample efficient than RIDE. On KeyCorridor-S4-R3, RAPID is the only algorithm that successfully navigates the goal within 20 million timesteps. Second, the traditional intrinsic reward methods perform well on easy procedurally-generated environments but perform poorly on hardexploration tasks. For example, COUNT performs as well as RAPID on KeyCorridor-S3-R2 but fails on more challenging tasks, such as MultiRoom-N7-S8. This suggests that traditional methods may fall short in challenging procedurally-generated environments, which is consistent with the results in previous work (Raileanu & Rocktäschel, 2020). Third, RAPID outperforms SIL across all the tasks. Recall that both RAPID and SIL learn to reproduce past good experiences. While SIL reproduces experiences with high rewards, RAPID additionally considers local and global exploration scores. This verifies the effectiveness of the episodic exploration scores.
3.3 ABLATIONS AND ANALYSIS
To study RQ2, we perform ablation studies on RAPID. Specifically, we remove one of the local score, the global score, and the extrinsic reward, and compare each ablation with RAPID. Besides, we consider a variant that directly uses the episodic exploration score as the reward signal without buffer. To demonstrate the effectiveness of the ranking, we further include a baseline that uses the replay buffer without ranking. Table 1 shows the results. We observe that the local score contributes the most to the performance. While most existing exploration methods reward exploration globally, the global score could be not aware of generalization, which may explain the unsatisfactory performance of COUNT and CURIOSITY. While the global score seems to play a relatively small role as compared to local score in most MiniGrid environments, it plays an important role in KeyCorridorS4-R3. A possible reason is that the local score is a too weak signal in KeyCorridor-S4-R3, and the global score may help alleviate this issue since it provides some historical information. We also observe that the agent fails in almost all the tasks without buffer or ranking. This suggests that the ranking buffer is an effective way to exploit past good exploration behaviors.
To study RQ3, we plot the learning curves with different training steps S and buffer size D in Figure 5a and 5b. We observe that larger number of training steps usually leads to faster learning speed at the beginning stage but may result in sub-optimal performance. For the buffer size, we observe that a large buffer will lead to slow learning speed with more stable performance. We speculate that there is a trade-off between learning speed and the final performance. In our experiments, we set training steps S to be 5 and buffer size D to be 10000, respectively, across all the tasks.
To investigate RQ4, we remove extrinsic rewards and show the learning curves of RAPID and the baselines in Figure 5c with the corresponding local exploration scores in Figure 5d. We make two interesting observations. First, without extrinsic rewards, RAPID can deliver reasonable performance. For example, on MultiRoom-N12-S10, RAPID achieves nearly 0.5 average return with pure exploration, compared with around 0.6 if using extrinsic rewards. We look into the episodes generated by RAPID with pure exploration and observe that, in most cases, the agent is able to reach the goal but struggles to find the optimal path to the goal. This is expected because the extrinsic reward is obtained by whether the agent can reach the goal subtracting some penalties of the timesteps spent. The extrinsic reward can encourage the agent to choose the shortest path to reach the goal. For other baselines, we observe that only RIDE and AMIGO can occasionally reach the goal. Second, we observe that some baselines, such as RIDE, are indirectly maximizing the local exploration scores. For example, we observe that the local exploration score increases for RIDE on MultiRoom-N12S10. However, the local score of RAPID drops after about 10 million timesteps. This suggests the episode-level score may better align with good exploration behaviors. A possible explanation for the superiority of RAPID is that it is designed to directly optimize episodic exploration score.
To answer RQ5, we focus on MultiRoom-NX-SY and ambitiously vary X and Y to make the environments more challenging. Specifically, we fix Y to be 4 and vary X in the left-hand-side of Figure 6, and fix X to be 4 and vary Y in the middle of Figure 6. Compared with RIDE, our RAPID delivers stronger performance on more challenging mini-worlds.
3.4 MINIWORLD MAZE AND SPARSE MUJOCO RESULTS
For RQ6, the learning curves on 3D Maze are shown in the right-hand-side of Figure 6. RAPID performs significantly better than the baselines. We also conduct experiments on a subset of sparse MuJoCo tasks (Figure 7). The results show that RAPID improves over PPO and SIL. Interestingly, RAPID achieves more than 200 average return on Swimmer, which surpasses some previous results reported on the dense reward counterpart (Duan et al., 2016; Henderson et al., 2018; Lai et al., 2020). Note that the MuJoCo environments are singleton. We use these environments solely to test whether RAPID can deal with continuous state/action spaces. We have also tried adapting Swimmer to be procedurally-generated and observe similar results (Appendix I)
4 RELATED WORK
Count-Based and Curiosity-Driven Exploration. These two classes of intrinsic motivation are the most popular and have proven effective in the literature. Count-based exploration encourages the agent to visit novel states; it was first studied in the tabular setting (Strehl & Littman, 2008) and has recently been extended to more complex state spaces (Bellemare et al., 2016; Ostrovski et al., 2017; Tang et al., 2017; Martin et al., 2017; Machado et al., 2018; Osband et al., 2019). Curiosity-driven exploration learns the environment dynamics to encourage exploration (Stadie et al., 2015; Pathak et al., 2017; Sukhbaatar et al., 2018; Burda et al., 2018a). However, these methods are designed for singleton environments and often struggle to generalize to procedurally-generated environments (Raileanu & Rocktäschel, 2020).
Exploration for Procedurally-Generated Environments. Several recent studies have discussed the generalization of reinforcement learning (Rajeswaran et al., 2017; Zhang et al., 2018a;b; Choi et al., 2018) and designed procedurally-generated environments to test the generalization of reinforcement learning (Beattie et al., 2016; Nichol et al., 2018; Küttler et al., 2020). More recent papers show that traditional exploration methods fall short in procedurally-generated environments and address this issue with new exploration methods (Raileanu & Rocktäschel, 2020; Campero et al., 2020). This work studies a new perspective of exploration bonus in episode-level and achieves significantly better performance than the previous methods on procedurally-generated benchmarks.
Experience Replay Buffer and Self-Imitation Learning. The experience replay mechanism is widely used and was initially designed to stabilize deep reinforcement learning (Lin, 1992; 1993; Mnih et al., 2015; Zhang & Sutton, 2017; Zha et al., 2019b; Fedus et al., 2020). Self-Imitation Learning extends the experience replay buffer and uses it to store past good experiences (Oh et al., 2018; Guo et al., 2018; 2019; Gangwani et al., 2018). However, Self-Imitation Learning only exploits highly rewarded experiences and is thus not well-suited to very sparse environments. Our work incorporates local and global scores to identify and reproduce past good exploration behaviors in sparse procedurally-generated environments.
Episodic Memory. Inspired by the episodic memory of animals, some studies propose to use episodic memory to enhance sample efficiency (Blundell et al., 2016; Pritzel et al., 2017). The key idea is to memorize and replay good episodic experiences. More recently, episodic memory has been used to generate intrinsic rewards to guide exploration (Savinov et al., 2018). Never Give Up (NGU) (Badia et al., 2019) further considers local and global novelties in the intrinsic rewards. Our work differs from NGU in that we approach sample efficiency from a different angle, treating each episode as a whole and using a ranking mechanism to select good experiences. Thus, good experiences can be exploited multiple times to improve sample efficiency.
5 LIMITATIONS AND DISCUSSIONS
RAPID is designed to study our hypothesis that imitating episode-level good exploration behaviors can encourage exploration in procedurally-generated environments. In the presented algorithm, we have made some deliberately simple design choices. In order to make RAPID more competitive with state-of-the-art methods in more complicated environments, in this section we highlight some limitations of RAPID and potential research directions.
First, in our work, the local and global scores are mainly calculated on low-dimensional state spaces. We have not yet tested the scores on more complex procedurally-generated environments, such as the ProcGen benchmark (Cobbe et al., 2019), TextWorld (Côté et al., 2018), and the NetHack Learning Environment (Küttler et al., 2020), some of which require high-dimensional image inputs. In this context, count-based scores may not be well-suited. Learning a state embedding (Raileanu & Rocktäschel, 2020) and pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017) are potential ways to address this issue. Moreover, high extrinsic rewards may not necessarily indicate good exploration behaviors in these environments. The applicability of RAPID in these environments requires future research.
Second, the ranking buffer greedily selects the highly rewarded state-action pairs, which is key to boosting performance. However, with this greedy strategy, some highly-ranked state-action pairs may never be forgotten even when they are out-of-date, which may introduce bias. As shown in Figure 5b, a larger buffer size leads to more stable performance but sacrifices sample efficiency. In more complicated environments, selecting the proper buffer size could be non-trivial. Forgetting mechanisms (Novati & Koumoutsakos, 2019) and meta-learning strategies (Zha et al., 2019b) are potential directions for better managing the buffer.
Third, in our implementation, we employ behavior cloning, the simplest form of imitation learning, to train the policy. In recent years, many more advanced imitation learning methods have been proposed (Ho & Ermon, 2016; Hester et al., 2017). In particular, the idea of inverse reinforcement learning (Abbeel & Ng, 2004) is to learn reward functions based on the demonstrations. An interesting future direction is to study whether we can derive intrinsic rewards based on the experiences in the buffer via inverse reinforcement learning. Specifically, is there a connection between RAPID and intrinsic rewards from the view of inverse reinforcement learning? Can RAPID be combined with intrinsic rewards to improve the performance? We hope that RAPID can facilitate future research to understand these questions.
6 CONCLUSIONS AND FUTURE WORK
This work studies how we can reward good exploration behaviors in procedurally-generated environments. We hypothesize that the episode-level exploration score can be a more general criterion for distinguishing good exploration behaviors. To verify this hypothesis, we design RAPID, a practical algorithm that learns to reproduce past good exploration behaviors with a ranking buffer. Experiments on benchmark hard-exploration procedurally-generated environments show that RAPID significantly outperforms state-of-the-art intrinsic reward methods in terms of sample efficiency and final performance. Further analysis shows that RAPID performs robustly across different configurations, even without extrinsic rewards. Moreover, experiments on a 3D navigation task and a subset of sparse MuJoCo tasks show that RAPID generalizes to continuous state/action spaces, showing the promise of rewarding good episode-level exploration behaviors to encourage exploration. In the future, we will extend our idea and test it in more challenging procedurally-generated environments. We will also explore the possibility of combining the episode-level exploration bonus with intrinsic rewards. Last, we will try other imitation learning techniques beyond behavior cloning.
A HYPERPARAMETERS AND NEURAL NETWORK ARCHITECTURE
For the proposed RAPID, the common hyperparameters are summarized in Table 2. RAPID is based on the PPO implementation from OpenAI baselines1. The nstep parameter is 128 for all MiniGrid and MuJoCo environments, and 512 for MiniWorld-Maze-S5. For the MuJoCo environments, we report the results with only the imitation loss, since we find that the policy loss of PPO harms performance in the episodic-reward setting. The learning rate is $10^{-4}$ for all MiniGrid environments and MiniWorld-Maze-S5, and $5 \times 10^{-4}$ for the MuJoCo environments. Since MiniWorld-Maze-S5 and the MuJoCo environments have continuous state spaces, we set $w_2 = 0$. In practice, in MiniWorld-Maze-S5, we observe that counting the states usually leads to memory errors because it is unlikely to encounter the same state twice (COUNT is excluded from the comparison due to these memory issues). Note that it is possible to use pseudo-counts (Bellemare et al., 2016; Ostrovski et al., 2017), which we will study in future work. For MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0 and MiniWorld-Maze-S5, the update frequency of imitation learning is linearly annealed to 0 throughout training. For the other environments, imitation learning is performed after each episode. We keep all other hyperparameters of PPO at their defaults and use the default CNN architecture provided in OpenAI baselines for MiniWorld-Maze-S5 and a 64-64 MLP for the other environments. PPO uses the same hyperparameters. We summarize the state space of each environment and how the local and global scores are computed in Table 3.
For RANDOM, we implement the two networks as 64-64 MLPs. For CURIOSITY and RIDE, we use 64-64 MLPs for the state embedding model, the forward dynamics model, and the inverse dynamics model, respectively. However, even with an extensive hyperparameter search, we were not able to reproduce the RIDE results in the MiniGrid environments with exactly the same neural architecture and hyperparameters as suggested in (Raileanu & Rocktäschel, 2020). Thus, we use the authors’ implementation2 (Raileanu & Rocktäschel, 2020), which is well-tuned on the MiniGrid tasks. Our reported result
1 https://github.com/openai/baselines
2 https://github.com/facebookresearch/impact-driven-exploration
is on par with that reported in (Raileanu & Rocktäschel, 2020). For SIL and AMIGO, we use the implementations from the authors3,4 with the recommended hyperparameters.
For the methods based on intrinsic rewards, i.e., RIDE, AMIGO, CURIOSITY, RANDOM, and COUNT, we search the intrinsic reward coefficient over {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. For RIDE, the intrinsic reward coefficient is 0.1 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.5 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0. For AMIGO and CURIOSITY, the intrinsic reward coefficient is 0.1 for all the environments. For COUNT, the intrinsic reward coefficient is 0.005 for all the environments. For RANDOM, the intrinsic reward coefficient is 0.001 for all the environments. We also tune the entropy coefficient of RIDE over {0.001, 0.0005}, as suggested by the original paper. The entropy coefficient is 0.0005 for MiniGrid-KeyCorridorS2R3-v0, MiniGrid-KeyCorridorS3R3-v0, MiniGrid-KeyCorridorS4R3-v0, MiniGrid-MultiRoom-N10S4-v0, MiniGrid-MultiRoom-N7S4-v0 and MiniWorld-Maze-S5, and 0.001 for MiniGrid-MultiRoom-N7S8-v0, MiniGrid-MultiRoom-N10S10-v0 and MiniGrid-MultiRoom-N12S10-v0.
The weights $w_0$, $w_1$, and $w_2$ are selected as follows. We first run RAPID on MiniGrid-MultiRoom-N7S8-v0. We choose this environment because it is neither too hard nor too easy, so we expect it to be representative of the considered MiniGrid environments. We fix $w_0$ to 1 and select $w_1$ and $w_2$ from $\{10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}, 10^{0}, 10^{1}, 10^{2}, 10^{3}, 10^{4}\}$, i.e., $9 \times 9 = 81$ combinations. Note that we can fix $w_0$ because only the relative ranking matters in the ranking buffer. The selected $w_1$ and $w_2$ are then used directly in the other environments without tuning. We further conduct a sensitivity analysis on MiniGrid-MultiRoom-N7S8-v0 in Figure 8 and observe that RAPID performs well over a very wide range of hyperparameter choices.
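To make the above concrete, the sketch below computes one plausible count-based instantiation of the episodic exploration score of Equation (3), assuming the local score is the fraction of distinct states within an episode and the global score is the mean inverse-square-root visitation count; the exact per-environment forms are those listed in Table 3, and the global_counts structure and default weight values here are illustrative assumptions.

    from collections import Counter
    import numpy as np

    def episodic_score(states, ext_return, global_counts,
                       w0=1.0, w1=0.1, w2=0.001):
        keys = [tuple(np.ravel(s)) for s in states]  # hashable state keys
        local = len(set(keys)) / len(keys)           # per-episode novelty
        for k in keys:                               # update lifelong counts
            global_counts[k] += 1
        glob = float(np.mean([1.0 / np.sqrt(global_counts[k]) for k in keys]))
        return w0 * ext_return + w1 * local + w2 * glob

    global_counts = Counter()  # shared across all episodes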
B ENVIRONMENTS
B.1 MINIGRID ENVIRONMENTS
MiniGrid5 is a suite of lightweight and fast gridworld environments with OpenAI gym interfaces. The environments are partially observable with a grid size of $N \times N$. Each tile in the grid can contain at most one object, i.e., 0 or 1 object. The possible objects are wall, floor, lava, door, key, ball, box and goal. Each object has an associated color. The agent can pick up and carry at most one object (ball or key). To open a locked door, the agent must use a key. Most of the environments in
3 https://github.com/junhyukoh/self-imitation-learning
4 https://github.com/facebookresearch/adversarially-motivated-intrinsic-goals
5 https://github.com/maximecb/gym-minigrid
MiniGrid are procedurally-generated, i.e., a different grid is sampled in each episode. Figure 9 shows the grids in four different episodes. This procedurally-generated nature makes RL training challenging because the agents need to learn skills that generalize across grids. The environments can be easily modified to create different difficulty levels; for example, we can change the number of rooms and the size of each room to create hard-exploration or easy-exploration environments. The flexibility of the MiniGrid environments enables us to conduct a systematic comparison of algorithms under different difficulties.
The original observations of MiniGrid are dictionaries, consisting of an image field providing a partially observable view of the environment and a mission field describing the goal in text. In our experiments, we use ImgObsWrapper, which keeps only the image field. The environment uses a compact encoding with 3 input values per visible grid cell. Note that cells behind walls or unopened doors are invisible.
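As a short usage sketch (written against the classic gym API and the gym-minigrid package; wrapper locations and environment ids may differ across versions, and the custom MultiRoom/KeyCorridor variants of this paper are assumed to be registered analogously):

    import gym
    import gym_minigrid                              # registers MiniGrid-* envs
    from gym_minigrid.wrappers import ImgObsWrapper

    env = ImgObsWrapper(gym.make("MiniGrid-MultiRoom-N6-v0"))
    obs = env.reset()      # partial 7x7 view, 3 values per visible cell
    done = False
    while not done:
        obs, reward, done, info = env.step(env.action_space.sample())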
The possible actions in MiniGrid are (1) turn left, (2) turn right, (3) move forward, (4) pick up an object, (5) drop the carried object, (6) open doors/interact with objects, and (7) done. The agent remains in the same state if the action is not legal. For example, if there is no object in front of the agent, the agent remains in the same state when performing action (4), pick up an object.
We focus on MultiRoom-NX-SY and KeyCorridor-SX-RY, where X and Y are hyperparameters specifying the number of rooms or the room sizes. With larger X and Y, the environments become more difficult to solve due to the sparse rewards. Figure 10 shows the environments (at different difficulties) used in this work. In MultiRoom-NX-SY, the agent needs to navigate to the green goal tile in the last room; upon reaching it, the agent receives a reward of 1 minus a penalty proportional to the number of timesteps spent. In KeyCorridor-SX-RY, the agent needs to pick up the key, use it to open the locked door, and pick up the ball; similarly, the agent then receives a reward of 1 minus a timestep penalty. Both environments are extremely difficult to solve with RL alone due to the sparse rewards.
B.2 MINIWORLD MAZE ENVIRONMENT
MiniWorld6 is a minimalistic 3D interior environment simulator. Similar to MiniGrid, we can easily create environments of different difficulty levels. We focus on MiniWorld-Maze, where the agent must navigate to the goal through a procedurally-generated maze. The agent has a first-person, partially observable view of the environment. This environment is extremely difficult to solve due to the sparse reward, the long horizon, and the procedurally-generated nature of the mazes.
6 https://github.com/maximecb/gym-miniworld
In this work, we focus on MiniWorld-Maze-S5, a variant of the MiniWorld Maze environment with 5×5 tiles. Figure 11 shows the top view of the mazes in four different episodes. In each episode, the agent and the goal are initialized in the top-left and bottom-right corners of the map, respectively. In this way, we ensure that the agent and the goal are far enough apart that the agent cannot easily see the goal without exploration. A random maze is generated in each episode. There are three possible actions: (1) move forward, (2) turn left, and (3) turn right. The agent moves 0.4× the tile length when moving forward, where the tile length is the length of one side of a tile, and rotates 22.5 degrees when turning left/right. The time budget of each episode is 600 timesteps. The agent does not receive any positive reward if it cannot reach the goal within the time budget.
B.3 SPARSE MUJOCO ENVIRONMENTS
MuJoCo is a physics engine for continuous control (Todorov et al., 2012). Figure 12 shows renderings of the four MuJoCo tasks used in our work. The original MuJoCo environments use dense rewards, i.e., a well-defined reward is given at each timestep. However, in many scenarios, well-defined dense rewards may not be available. We consider variants of the MuJoCo tasks using only an episodic reward. Specifically, the original rewards are accumulated, and only a single episodic reward is given at the final timestep; the agent receives a reward of 0 at all other timesteps. This sparse setting makes the tasks much more challenging to solve: contemporary RL algorithms usually suffer from the sparse rewards and deliver unsatisfactory performance. We use these sparse variants to study whether RAPID can effectively exploit such sparse rewards.
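A minimal sketch of such an episodic-reward wrapper, written against the classic gym API (the wrapper is a hypothetical illustration, not the authors' code):

    import gym

    class EpisodicRewardWrapper(gym.Wrapper):
        # Accumulates dense rewards and releases them only at episode end.
        def reset(self, **kwargs):
            self.acc = 0.0
            return self.env.reset(**kwargs)

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            self.acc += reward
            # Zero reward at every step except the final one.
            return obs, (self.acc if done else 0.0), done, info

    env = EpisodicRewardWrapper(gym.make("Swimmer-v2"))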
C ABLATIONS
Figure 13 shows the learning curves of RAPID against the different ablations. Without local scores, we observe a significant drop in sample efficiency and final performance on the challenging environments, i.e., MultiRoom-N7-S8, MultiRoom-N10-S10, MultiRoom-N12-S10, KeyCorridor-S3-R3, and KeyCorridor-S4-R3. This suggests that the proposed local score plays an important role in encouraging exploration. We also observe that, without the buffer, the agent fails to learn a good policy; naively using the proposed scores as intrinsic rewards harms performance. Using the extrinsic reward and the global score to rank the episodes also contributes in the challenging environments, such as KeyCorridor-S4-R3. Therefore, the three proposed scores may be complementary and play different roles in different environments.
D FULL ANALYSIS RESULTS
D.1 IMPACT OF THE NUMBER OF UPDATE STEPS
D.2 IMPACT OF BUFFER SIZE
D.3 PURE EXPLORATION ON MINIGRID
D.4 LOCAL EXPLORATION SCORE OF PURE EXPLORATION ON MINIGRID
E LEARNING CURVES WITH MORE ROOMS
F LEARNING CURVES WITH LARGE ROOMS
G ABLATION AND ANALYSIS OF MINIWORLD-MAZE
H COMPARISON WITH STORING ENTIRE EPISODES IN THE BUFFER
A variant of RAPID is to keep entire episodes in the buffer instead of individual state-action pairs. The intuition for keeping the whole episode is that a particular state-action pair may only be good in terms of exploration in the context of the rest of the agent's trajectory in that episode. To test this variant, we force the buffer to keep entire episodes by allowing the episode at the end of the buffer to exceed the buffer size. For example, given a buffer size of 5000, if the length of the last episode is 160 and the number of state-action pairs in the buffer would then be 5120, we do not cut off that episode and exceptionally allow the buffer to store 5120 state-action pairs. In this way, we ensure that entire episodes are kept in the buffer. We run this experiment on MiniGrid-MultiRoom-N7-S8-v0 (Figure 25) and do not observe a clear difference in the learning curves.
I RESULTS ON PROCEDURALLY-GENERATED SWIMMER
The standard MuJoCo environments are singleton. Here, we modify the Swimmer-v2 environment to make it procedurally-generated. Specifically, we make the environment different in each episode by modifying the XML configuration of the MuJoCo engine at the start of every new episode. We consider two environmental properties, i.e., density and viscosity. Density is used to simulate lift and drag forces, which scale quadratically with velocity. Viscosity is used to simulate viscous forces, which scale linearly with velocity. In the standard Swimmer-v2 environment, the density and the viscosity are fixed to 4000 and 0.1, respectively. In the first experiment, we uniformly sample the density in [2000, 4000] in each new episode to make it procedurally-generated, denoted as Density-Swimmer. Similarly, in the second environment, we uniformly sample the viscosity in [0.1, 0.5], denoted as Viscosity-Swimmer. These two variants make RL training more challenging since the agent needs to learn a policy that generalizes to different densities and viscosities. As before, we accumulate the rewards to the final timestep of an episode to make the environments sparse. Both environments are provided in our code for reproducibility. The results are reported in Figure 26. We observe that all methods learn more slowly on these two variants. For example, RAPID only achieves around 190 return on Density-Swimmer, while it achieves more than 200 return on Swimmer. Similarly, SIL reaches around 100 return after around $5 \times 10^6$ timesteps on Density-Swimmer, versus around $3 \times 10^6$ timesteps on Swimmer. Nevertheless, RAPID can discover very good policies even in these challenging procedurally-generated settings.
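For illustration, the sketch below resamples the density at the start of every episode; the field names follow the mujoco-py binding (model.opt.density, model.opt.viscosity) and should be treated as assumptions under other MuJoCo bindings.

    import gym
    import numpy as np

    class DensitySwimmer(gym.Wrapper):
        # Procedurally-generated Swimmer: a new density in each episode.
        def reset(self, **kwargs):
            self.env.unwrapped.sim.model.opt.density = np.random.uniform(2000, 4000)
            return self.env.reset(**kwargs)

    env = DensitySwimmer(gym.make("Swimmer-v2"))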
J DISCUSSIONS OF ANNEALING
In our experiments, we find that annealing the number of update steps to 0 sometimes helps improve sample efficiency. However, in some environments, annealing has a negative effect on the final performance. To show the effect of annealing, we plot learning curves with and without annealing on MiniGrid-KeyCorridorS3R2-v0 and MiniGrid-MultiRoom-N12-S10-v0 in Figure 27. We observe that annealing helps in KeyCorridor-S3-R2. A possible explanation is that, after the agent reaches the goal, it has access to the reward signal, and the PPO objective is then sufficient to train a good policy with the extrinsic rewards; the imitation loss may harm performance since it could be better to exploit rather than explore at this stage. However, in MultiRoom-N12-S10, annealing harms the performance. In particular, the performance drops significantly in the final stage, when the update frequency of the imitation loss approaches zero. A possible reason is that MultiRoom-N12-S10 requires very deep exploration to find the goal. Even though the PPO agent can reach the goal, it may easily suffer from insufficient exploration when it encounters a new environment. As a result, the agent needs to keep exploring in order to generalize, and the imitation loss ensures that the agent keeps exploring the environments. | 1. What is the focus and contribution of the paper regarding exploration methods?
2. What are the strengths and weaknesses of the proposed RAPID method, particularly in its exploration scores and episodic exploration score calculation?
3. Do you have any concerns or questions about the experiments conducted in the paper, such as the handling of partial observability, continuous state space, weight determination, replay buffer comparison, and task explanations?
4. How does the reviewer assess the effectiveness and intuition of the proposed method in very sparse and procedurally-generated environments?
5. Are there any questions or concerns regarding the ablation study and the necessity for more intuitive verification of the method's components? | Review | Review
This paper presents an exploration method for procedurally-generated environments, RAPID, which imitates past episodes that exhibit good exploration behavior. First, the authors introduce exploration scores: a local score giving a per-episode view of exploration behavior, and a global score giving a long-term, historical view of exploration. They use the weighted sum of these two exploration scores and the extrinsic reward as the final episodic exploration score. They rank state-action pairs based on the episodic exploration score and train the agent to imitate behaviors with high scores. In the experiments, they show results comparing against state-of-the-art algorithms in several procedurally-generated environments.
This paper is well written, but I have the following concerns and questions about the paper.
MiniGrid seems to be a partially observable problem in which only a 7x7 observation is given to the agent. What exactly does the 'state' used when calculating the exploration score in the paper refer to?
Moreover, the paper also includes experiments on problems with continuous state spaces. In that case, I wonder about the details of how the number of states and the number of distinct states are handled.
When calculating the episodic exploration score (Equation (3)), it is thought that the results vary greatly depending on the values of the weights $w_0$–$w_2$. In the paper, it seems necessary to show the criteria for determining the weights and the difference in results according to the weights.
In order to show the effectiveness of the ranking buffer, in addition to the with/without results shown in Table 1, the results when using the replay buffer without ranking should also be compared.
It would be better if the MuJoCo tasks in Figure 7 were explained more clearly (e.g., the existing MuJoCo environments are singleton; what was changed to make them procedurally-generated?).
In the experiment, the explanation is mainly based on KeyCorridor, but there is no explanation of what the task is. It would be better if a brief explanation of the KeyCorridor task were included in the paper (with the reason why this task is hard).
The authors claim that the proposed method is effective in very sparse environments, but I am not sure exactly what makes it effective there. The justification for this claim needs to be explained more intuitively.
This paper contains experimental results for various tasks, but the contribution of the paper and the reason why the proposed method is effective in very sparse and procedurally-generated environments remain unclear. It seems necessary to explain the intuitive motivation more clearly in the paper. Moreover, the ablation study needs more precise verification that shows the effect of each component intuitively, rather than simply separating and comparing the components.
ICLR | Title
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning
Abstract
Understanding human behavior from observed data is critical for transparency and accountability in decision-making. Consider real-world settings such as healthcare, in which modeling a decision-maker’s policy is challenging—with no access to underlying states, no knowledge of environment dynamics, and no allowance for live experimentation. We desire learning a data-driven representation of decisionmaking behavior that (1) inheres transparency by design, (2) accommodates partial observability, and (3) operates completely offline. To satisfy these key criteria, we propose a novel model-based Bayesian method for interpretable policy learning (“INTERPOLE”) that jointly estimates an agent’s (possibly biased) belief-update process together with their (possibly suboptimal) belief-action mapping. Through experiments on both simulated and real-world data for the problem of Alzheimer’s disease diagnosis, we illustrate the potential of our approach as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
1 INTRODUCTION
A principal challenge in modeling human behavior is in obtaining a transparent understanding of decision-making. In medical diagnosis, for instance, there is often significant regional and institutional variation in clinical practice [1], much of it a leading driver of rising healthcare costs [2]. The ability to quantify different decision processes is the first step towards a more systematic understanding of medical practice. Purely by observing demonstrated behavior, our principal objective is to answer the question: Under any given state of affairs, what actions are (more/less) likely to be taken, and why?
We address this challenge by setting our sights on three key criteria. First, we desire a method that is transparent by design. Specifically, a transparent description of behavior should locate the factors that contribute to individual decisions, in a language readily understood by domain experts [3, 4]. This will be clearer per our subsequent formalism, but we can already note some contrasts: Classical imitation learning—popularly by reduction to supervised classification—does not fit the bill, since black-box hidden states of RNNs are rarely amenable to meaningful interpretation. Similarly, apprenticeship learning algorithms—popularly through inverse reinforcement learning—do not satisfy either, since the high-level nature of reward mappings is not informative as to individual actions observed in the data. Rather than focusing purely on replicating actions (imitation learning) or on matching expert performance (apprenticeship learning), our chief pursuit lies in understanding demonstrated behavior.
Second, real-world environments such as healthcare are often partially observable in nature. This requires modeling the accumulation of information from entire sequences of past observations—an endeavor that is prima facie at odds with the goal of transparency. For instance, in a fully-observable setting, (model-free) behavioral cloning is arguably ‘transparent’ in providing simple mappings of states to actions; however, coping with partial observability using any form of recurrent function
approximation immediately lands in black-box territory. Likewise, while (model-based) methods have been developed for robotic control, their transparency crucially hinges on fully-observable kinematics.
Finally, in realistic settings it is often impossible to experiment online—especially in high-stakes environments with real products and patients. The vast majority of recent work in (inverse) reinforcement learning has focused on games, simulations, and gym environments where access to live interaction is unrestricted. By contrast, in healthcare settings the environment dynamics are neither known a priori, nor estimable by repeated exploration. We want a data-driven representation of behavior that is learnable in a completely offline fashion, yet does not rely on knowing/modeling any true dynamics.
Contributions Our contributions are three-fold. First, we propose a model for interpretable policy learning (“INTERPOLE”)—where sequential observations are aggregated through a decision agent’s decision dynamics (viz. subjective belief-update process), and sequential actions are determined by the agent’s decision boundaries (viz. probabilistic belief-action mapping). Second, we suggest a Bayesian learning algorithm for estimating the model, simultaneously satisfying the key criteria of transparency, partial observability, and offline learning. Third, through experiments on both simulated and real-world data for Alzheimer’s disease diagnosis, we illustrate the potential of our method as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
2 RELATED WORK
We seek to learn an interpretable parameterization of observed behavior to understand an agent’s actions. Fundamentally, this contrasts with imitation learning (which seeks to best replicate demonstrated policies) and apprenticeship learning (which seeks to match some notion of performance).
Imitation Learning In fully-observable settings, behavior cloning (BC) readily reduces the imitation problem to one of supervised classification [5, 11–13]; i.e. actions are simply regressed on observations. While this can be extended to account for partial observability by parameterizing policies via recurrent function approximation [14], it immediately gives up on ease of interpretability per the black-box nature of RNN hidden states. A plethora of model-free techniques have recently been developed, which account for information in the rollout dynamics of the environment during policy learning (see e.g. [15–20])—most famously, generative adversarial imitation learning (GAIL) based on state-distribution matching [6, 21]. However, such methods require repeated online rollouts of intermediate policies during training, and also face the same black-box problem as BC in partially observable settings. Clearly in model-free imitation, it is difficult to admit both transparency and partial observability.
Specifically with an eye on explainability, Info-GAIL [22, 23] proposes an orthogonal notion of “interpretability” that hinges on clustering similar demonstrations to explain variations in behavior. However, as with GAIL it suffers from the need for live interaction for learning. Finally, several model-based techniques for imitation learning (MB-IL) have been studied in the domain of robotics. [24] consider kinematic models designed for robot dynamics, while [25] and [7] consider (non-)linear autoregressive exogenous models. However, such approaches invariably operate in fully-observable settings, and are restricted models hand-crafted for specific robotic applications under consideration.
Apprenticeship Learning In subtle distinction to imitation learning, methods in apprenticeship learning assume the observed behavior is optimal with respect to some underlying reward function. Apprenticeship thus proceeds indirectly—often through inverse reinforcement learning (IRL) in order to infer a reward function, which (with appropriate optimization) generates learned behavior that matches the performance of the original—as measured by the rewards (see e.g. [8, 26–29]). These approaches have been variously extended to cope with partial observability (PO-IRL) [9, 30], to offline settings through off-policy evaluation [31–33], as well as to learned environment models [10].
However, a shortcoming of such methods is the requirement that the demonstrated policy in fact be optimal with respect to a true reward function that lies within an (often limited) hypothesis class under consideration—or is otherwise black-box in nature. Further, learning the true environment dynamics [10] corresponds to the requirement that policies be restricted to the class of functions that map from unbiased beliefs (cf. exact inference) into actions. Notably though, [34] considers both a form of suboptimality caused by time-inconsistent agents as well as biased beliefs. However, perhaps most importantly, due to the indirect, task-level nature of reward functions, inverse reinforcement learning is essentially opposed to our central goal of transparency—that is, in providing direct, action-level descriptions of behavior. In Section 5, we provide empirical evidence of this notion of interpretability.
Towards INTERPOLE In contrast, we avoid making any assumptions as to either unbiasedness of beliefs or optimality of policies. After all, the former requires estimating (externally) “true” environment dynamics, and the latter requires specifying (objectively) “true” classes of reward functions— neither of which are necessary per our goal of transparently describing individual actions. Instead, INTERPOLE simply seeks the most plausible explanation in terms of (internal) decision dynamics and (subjective) decision boundaries. To the best of our knowledge, our work is the first to tackle all three key criteria—while making no assumptions on the generative process behind behaviors. Table 1 contextualizes our work, showing typical incarnations of related approaches and their graphical models.
Before continuing, we note that the separation between the internal dynamics of an agent and the external dynamics of the environment has been considered in several other works, though often for entirely different problem formulations. Most notably, [35] tackles the same policy learning problem as we do in online, fully-observable environments but for agents with internal states that cannot be observed. They propose agent Markov models (AMMs) to model such environment-agent interactions. For problems other than policy learning, [36–38] also consider the subproblem of inferring an agent’s internal dynamics; however, none of these works satisfy all three key criteria simultaneously as we do.
3 INTERPRETABLE POLICY LEARNING
We first introduce INTERPOLE’s model of behavior, formalizing notions of decision dynamics and decision boundaries. In the next section, we suggest a Bayesian algorithm for model-learning from data.
Problem Setup Consider a partially-observable decision-making environment in discrete time. At each step t, the agent takes action $a_t \in A$ and observes outcome $z_t \in Z$.1 We have at our disposal an observed dataset of demonstrations $D = \{(a_1^i, z_1^i, \ldots, a_{\tau_i}^i, z_{\tau_i}^i)\}_{i=1}^{n}$ by an agent, $\tau_i$ being the length of the i-th trajectory (we shall omit indices i unless required). Denote by $h_t \doteq (a_1, z_1, \ldots, a_{t-1}, z_{t-1})$ the observed history at the beginning of step t, where $h_1 \doteq \emptyset$. Analogously, let $H_t \doteq (A \times Z)^{t-1}$ indicate the set of all possible histories at the start of step t, where $H_1 \doteq \{\emptyset\}$, and let $H \doteq \cup_{t=1}^{\infty} H_t$.
A proper policy $\pi$ is a mapping $\pi \in \Delta(A)^{H}$ from observed histories to action distributions, where $\pi(a|h)$ is the probability of taking action a given h. We assume that $D$ is generated by an agent acting according to some behavioral policy $\pi_b$. The problem we wish to tackle, then, is precisely how to obtain an interpretable parameterization of $\pi_b$. We proceed in two steps: First, we describe a parsimonious belief-update process for accumulating histories—which we term decision dynamics. Then, we take beliefs to actions via a probabilistic mapping—which gives rise to decision boundaries.
Decision Dynamics We model belief-updates by way of an input-output hidden Markov model (IOHMM) identified by the tuple $(S, A, Z, T, O, b_1)$, with $S$ being the finite set of underlying states. $T \in \Delta(S)^{S \times A}$ denotes the transition function such that $T(s_{t+1}|s_t, a_t)$ gives the probability of transitioning into state $s_{t+1}$ upon action $a_t$ in state $s_t$, and $O \in \Delta(Z)^{A \times S}$ denotes the observation function such that $O(z_t|a_t, s_{t+1})$ gives the probability of observing $z_t$ after taking action $a_t$ and transitioning into state $s_{t+1}$. Finally, let beliefs $b_t \in \Delta(S)$ indicate the probability $b_t(s)$ that the
1 While we take it here that Z is finite, our method can easily be generalized to allow continuous observations.
environment exists in any state $s \in S$ at time t, and let $b_1$ give the initial state distribution. Note that—unlike in existing uses of the IOHMM formalism—these “probabilities” are for representing the thought process of the human, and may freely diverge from the actual mechanics of the world. To aggregate observed histories as beliefs, we identify $b_t(s)$ with $P(s_t = s|h_t)$—an interpretation that leads to the recursive belief-update process (where in our problem, the quantities $T, O, b_1$ are unknown):
$$b_{t+1}(s') \propto \sum_{s \in S} b_t(s)\, T(s'|s, a_t)\, O(z_t|a_t, s') \qquad (1)$$
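As a concrete transcription, the following is a minimal sketch of Equation (1), assuming tabular parameters with T stored as an |S|×|A|×|S| array and O as an |A|×|S|×|Z| array:

    import numpy as np

    def belief_update(b, a, z, T, O):
        # One step of Equation (1): b'(s') is proportional to
        # sum_s b(s) T(s'|s,a) O(z|a,s'), then normalized.
        b_next = (b @ T[:, a, :]) * O[a, :, z]
        return b_next / b_next.sum()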
A key distinction bears emphasis: We do not require that this latter set of quantities correspond to (external) environment dynamics—and we do not obligate ourselves to recover any such notion of “true” parameters. To do so would imply the assumption that the agent in fact performs exactly unbiased inference on a perfectly known model of the environment, which is restrictive. It is also unnecessary, since our mandate is simply to model the (internal) mechanics of decision-making— which could well be generated from possibly biased beliefs or imperfectly known models of the world. In other words, our objective (see Equation 3) of simultaneously determining the most likely beliefs (cf. decision dynamics) and policies (cf. decision boundaries) is fundamentally more parsimonious.
Decision Boundaries Given decision dynamics, a policy is then equivalently a map $\pi \in \Delta(A)^{\Delta(S)}$. Now, what is an interpretable parameterization? Consider the three-state example in Figure 1. We argue that a probabilistic parameterization that directly induces “decision regions” (cf. panel 1b) over the belief simplex is uniquely interpretable. For instance, strong beliefs that a patient has underlying mild cognitive impairment may map to the region where a specific follow-up test is promptly prescribed; this parameterization allows clearly locating such regions—as well as their boundaries.
Precisely, we parameterize policies in terms of |A|-many “mean” vectors that correspond to actions:
$$\pi(a|b) = e^{-\eta \|b - \mu_a\|^2} \Big/ \sum_{a' \in A} e^{-\eta \|b - \mu_{a'}\|^2}, \qquad \sum_{s \in S} \mu_a(s) = 1 \qquad (2)$$
where $\eta \geq 0$ is the inverse temperature, $\|\cdot\|$ the $\ell_2$-norm, and $\mu_a \in \mathbb{R}^{|S|}$ the mean vector corresponding to action $a \in A$. Intuitively, mean vectors induce decision boundaries (and decision regions) over the belief space $\Delta(S)$: At any time, the action whose corresponding mean is closest to the current belief is most likely to be chosen. In particular, lines that are equidistant to the means of any pair of actions form decision boundaries between them. The inverse temperature controls the transitions between such boundaries: A larger $\eta$ captures more deterministic behavior (i.e. more “abrupt” transitions), whereas a smaller $\eta$ captures more stochastic behavior (i.e. “smoother” transitions). Note that the case of $\eta = 0$ recovers policies that are uniformly random, and $\eta \to \infty$ recovers argmax policies. A second distinction is due: The exponentiated form of Equation 2 should not be confused with typical Boltzmann [27] or MaxEnt [39] policies common in RL: These are indirect parameterizations via optimal/soft q-values, which themselves require approximate solutions to optimization problems; as we shall see in our experiments, the quality of learned policies suffers as a result. Further, using q-values would imply the assumption that the agent in fact behaves optimally w.r.t. an (objectively) “true” class of reward functions—e.g. linear—which is restrictive. It is also unnecessary, as our mandate is simply to capture their (subjective) tendencies toward different actions—which are generated from possibly suboptimal policies. In contrast, by directly partitioning the belief simplex into probabilistic “decision regions”, INTERPOLE’s mean-vector representation can be immediately explained and understood.
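For concreteness, a minimal sketch of Equation (2), with mus an |A|×|S| array whose rows are the per-action mean vectors on the simplex:

    import numpy as np

    def policy(b, mus, eta):
        # Action distribution induced by squared distances to action means.
        logits = -eta * np.sum((b[None, :] - mus) ** 2, axis=1)
        p = np.exp(logits - logits.max())   # numerically stable softmax
        return p / p.sum()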
Learning Objective In a nutshell, our objective is to identify the most likely parameterizations $T, O, b_1$ for decision dynamics as well as $\eta, \{\mu_a\}_{a \in A}$ for decision boundaries, given the observed data:
$$\text{Given: } D, S, A, Z \qquad \text{Determine: } T, O, b_1, \eta, \{\mu_a\}_{a \in A} \qquad (3)$$
Next, we propose a Bayesian algorithm that finds the maximum a posteriori (MAP) estimate of these quantities. Figure 2 illustrates the problem setup.
4 BAYESIAN INTERPRETABLE POLICY LEARNING
Denote with $\theta \doteq (T, O, b_1, \eta, \{\mu_a\}_{a \in A})$ the set of parameters to be determined, and let $\theta$ be drawn from some prior $P(\theta)$. In addition, denote with $\bar{D} = \{(s_1^i, \ldots, s_{\tau_i+1}^i)\}_{i=1}^{n}$ the set of underlying (unobserved) state trajectories, such that $D \cup \bar{D}$ gives the complete (fully-observed) dataset. Then the complete likelihood—of the unknown parameters $\theta$ with respect to $D \cup \bar{D}$—is given by the following:
$$P(D, \bar{D}|\theta) = \underbrace{\prod_{i=1}^{n} \prod_{t=1}^{\tau} \pi(a_t \mid b_t[T, O, b_1, h_{t-1}])}_{\text{action likelihoods}} \times \underbrace{\prod_{i=1}^{n} b_1(s_1) \prod_{t=1}^{\tau} T(s_{t+1}|s_t, a_t)\, O(z_t|a_t, s_{t+1})}_{\text{observation likelihoods}} \qquad (4)$$
where $\pi(\cdot|\cdot)$ is described by $\eta$ and $\{\mu_a\}_{a \in A}$, and each $b_t[\cdot]$ is a function of $T$, $O$, $b_1$, and $h_{t-1}$ (Equation 1). Since we do not have access to $\bar{D}$, we propose an expectation-maximization (EM)-like algorithm for maximizing the posterior $P(\theta|D) = P(D|\theta)P(\theta) / \int P(D|\theta)\, dP(\theta)$ over the parameters:
Algorithm 1 Bayesian INTERPOLE
1: Parameters: learning rate $w \in \mathbb{R}^+$
2: Input: dataset $D = \{h^i_{\tau_i+1}\}_{i=1}^{n}$, prior $P(\theta)$
3: Sample $\hat{\theta}_0$ from $P(\theta)$
4: for $k = 1, 2, \ldots$ do
5:   Compute $P(\bar{D}|D, \hat{\theta}_{k-1})$ (Appendix A.1)
6:   Compute $\nabla_\theta Q(\theta; \hat{\theta}_{k-1})$ at $\hat{\theta}_{k-1}$ (Appendix A.2)
7:   $\hat{\theta}_k \leftarrow \hat{\theta}_{k-1} + w\,[\nabla_\theta Q(\theta; \hat{\theta}_{k-1}) + \nabla_\theta \log P(\theta)]_{\theta = \hat{\theta}_{k-1}}$
8: while (6) holds
9: $\hat{\theta} \leftarrow \hat{\theta}_{k-1}$
10: Output: MAP estimate $\hat{\theta} \doteq (\hat{T}, \hat{O}, \hat{b}_1, \hat{\eta}, \{\hat{\mu}_a\}_{a \in A})$
Bayesian Learning Given an initial estimate $\hat{\theta}_0$, we iteratively improve the estimate by performing the following steps at each iteration k:
• “E-step”: Compute the expected log-likelihood of the model parameters $\theta$ given the previous parameter estimate $\hat{\theta}_{k-1}$, as follows:
$$Q(\theta; \hat{\theta}_{k-1}) \doteq \mathbb{E}_{\bar{D}|D, \hat{\theta}_{k-1}}[\log P(D, \bar{D}|\theta)] = \sum_{\bar{D}} \log P(D, \bar{D}|\theta)\, P(\bar{D}|D, \hat{\theta}_{k-1}) \qquad (5)$$
where we compute the necessary marginalizations of the joint distribution $P(\bar{D}|D, \hat{\theta}_{k-1})$ by way of a forward-backward procedure (detailed procedure given in Appendix A.1).
• “M-step”: Compute a new estimate $\hat{\theta}_k$ that improves the expected log-posterior—that is, such that:
$$Q(\hat{\theta}_k; \hat{\theta}_{k-1}) + \log P(\hat{\theta}_k) > Q(\hat{\theta}_{k-1}; \hat{\theta}_{k-1}) + \log P(\hat{\theta}_{k-1}) \qquad (6)$$
subject to appropriate non-negativity and normalization constraints on the parameters, which can be achieved via gradient-based methods (detailed procedure given in Appendix A.2). We stop when it is no longer possible to obtain a new estimate that further improves the expected log-posterior—that is, when the “M-step” cannot be performed. Algorithm 1 summarizes this learning procedure.
Explaining by Imitating Recall our original mandate—to give the most plausible explanation for behavior. Two questions can be asked about our proposal to “explain by imitating”—to which we now have precise answers: One concerns explainability, and the other concerns directness of explanations.
First, what constitutes the “most plausible explanation” of behavior? Now, INTERPOLE identifies this as the most likely parameterization of that behavior using a state-based model for beliefs and policies—but otherwise with no further assumptions. In particular, we are only positing that modeling beliefs over states helps provide an interpretable description of how an agent reasons (which we do have ample evidence for2)—but we are not assuming that the environment itself takes the form of a state-based model (which is an entirely different claim). Mathematically, the complete likelihood (Equation 4) highlights the difference between decision dynamics (which help explain the agent’s behavior) and “true” environment dynamics (which we do not care about). The latter are independent of the agent, and learning them would have involved just the observation likelihoods alone. In contrast, by jointly estimating T,O, b1 with η, {µa}a∈A according to both the observation- and action-likelihoods, we are learning the decision dynamics—which in general need not coincide with the environment, but which offer the most plausible explanation of how the agent effectively reasons.
The second question is about directness: Given the popularity of the IRL paradigm, could we have simply used an (indirect) reward parameterization, instead of our (direct) mean-vector parameterization? As it turns out, in addition to the “immediate” interpretability of direct, action-level representations,
2 In healthcare, diseases are often modeled in terms of states, and beliefs over disease states are eminently transparent factors that medical practitioners (i.e. domain experts) readily comprehend and reason about [40, 41].
it comes with an extra perk w.r.t. computability: While it is (mathematically) possible to formulate a similar learning problem swapping out µ for rewards, in practice it is (computationally) intractable to perform in our setting. The precise difficulty lies in differentiating through quantities π(at|bt)—which in turn depend on beliefs and dynamics—in the action-likelihoods (proofs located in Appendix B):
Proposition 1 (Differentiability with q-Parameterizations) Consider softmax policies parameterized by q-values from a reward function, such that $\pi(a|b) = e^{q^*(b,a)} / \sum_{a'} e^{q^*(b,a')}$ in lieu of Equation 2. Then differentiating through the $\log \pi(a_t|b_t)$ terms with respect to the unknown parameters $\theta$ is intractable. In contrast, INTERPOLE avoids ever needing to solve any “forward” problem at all (and therefore does not require resorting to costly—and approximate—sampling-based workarounds) for learning:
Proposition 2 (Differentiability with µ-Parameterizations) Consider the mean-vector policy parameterization proposed in Equation 2. Differentiation through the $\log \pi(a_t|b_t)$ terms with respect to the unknown parameters $\theta$ is easily and automatically performed using backpropagation through time.
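To make the object of Propositions 1 and 2 concrete, the sketch below accumulates the action log-likelihood term of Equation (4) along one trajectory, reusing the belief_update and policy sketches above; it is written with NumPy for clarity, and under an automatic-differentiation framework the gradients of Proposition 2 would follow by backpropagation through time.

    import numpy as np

    def action_log_likelihood(actions, observations, b1, T, O, mus, eta):
        # Sum over t of log pi(a_t | b_t), with beliefs rolled forward
        # by the recursive update of Equation (1).
        b, ll = b1, 0.0
        for a, z in zip(actions, observations):
            ll += np.log(policy(b, mus, eta)[a])
            b = belief_update(b, a, z, T, O)
        return ll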
5 ILLUSTRATIVE EXAMPLES
Three aspects of INTERPOLE deserve empirical demonstration, and we shall highlight them in turn:
• Interpretability: First, we illustrate the usefulness of our method in providing transparent explanations of behavior. This is our primary objective here—of explaining by imitating.
• Accuracy: Second, we demonstrate that the faithfulness of learned policies is not given up for transparency. This shows that accuracy and interpretability are not necessarily opposed.
• Subjectivity: Third, we show INTERPOLE correctly recovers underlying explanations for behavior—even if the agent is biased. This sets us apart from other state-based algorithms.
In order to do so, we show archetypical examples to exercise our framework, using both simulated and real-world experiments in the context of disease diagnosis. State-based reasoning is prevalent in research and practice: three states in progressive clinical dementia [42, 43], preterminal cancer screening [44, 45], or even—as recently shown—for cystic fibrosis [46] and pulmonary disease [47].
Decision Environments For our real-world setting, we consider the diagnostic patterns for 1,737 patients during sequences of 6-monthly visits in the Alzheimer’s Disease Neuroimaging Initiative [48] database (ADNI). The state space consists of normal functioning (“NL”), mild cognitive impairment (“MCI”), and dementia. For the action space, we consider the decision problem of ordering vs. not ordering an MRI test, which—while often informative of Alzheimer’s—is financially costly. MRI outcomes are categorized according to hippocampal volume: {“avg”, “above avg”, “below avg”, “not ordered”}; separately, the cognitive dementia rating-sum of boxes (“CDR-SB”) result—which is always measured—is categorized as: {“normal”, “questionable impairment”, “mild/severe dementia”} [42]. In total, the observation space therefore consists of the 12 combinations of outcomes.
We also consider a simulated setting to better validate performance. For this we employ a diagnostic environment (DIAG) in the form of an IOHMM with certain (true) parameters $T^{\text{true}}, O^{\text{true}}, b_1^{\text{true}}$. Patients fall within diseased $(s_+)$ and healthy $(s_-)$ states, and vital-sign measurements available at every step are classified within positive $(z_+)$ and negative $(z_-)$ outcomes. For the action space, we consider the decision of continuing to monitor a patient $(a_=)$, or stopping and declaring a final diagnosis—and if so, a diseased $(a_+)$ or healthy $(a_-)$ declaration. If we assume agents have perfect knowledge of the true environment, then this setup is similar to the classic “tiger problem” for optimal stopping [49]. Lastly, we also consider a (more realistic) variant of DIAG where the agent’s behavior is instead generated by biased beliefs due to incorrect knowledge $T, O, b_1 \neq T^{\text{true}}, O^{\text{true}}, b_1^{\text{true}}$ of the environment (BIAS). Importantly, this generates a testable version of real-life settings where decision-makers’ (subjective) beliefs often fail to coincide with (objective) probabilities in the world.
Benchmark Algorithms Where appropriate, we compare INTERPOLE against the following benchmarks: imitation by behavioral cloning [5] using RNNs for partial observability (R-BC); Bayesian IRL on POMDPs [9] equipped with a learned environment model (PO-IRL); a fully-offline counterpart [10] of Bayesian IRL (Off. PO-IRL); and an adaptation of model-based imitation learning [7] to partially-observable settings, with a learned IOHMM as the model (PO-MB-IL). Algorithms requiring learned models for interaction are given IOHMMs estimated using conventional methods [50]. Further information on environments and benchmark implementations is found in Appendix C.
Interpretability First, we direct attention to the potential utility of INTERPOLE as an investigative device for auditing and quantifying individual decisions. Specifically, modeling the evolution of an agent’s beliefs provides a concrete basis for analyzing the corresponding sequence of actions taken:
• Explaining Trajectories. Figure 3 shows examples of such decision trajectories for four real ADNI patients. Each vertex of the belief simplex corresponds to one of the three stable diagnoses, and each point in the simplex corresponds to a unique belief (i.e. probability distribution). The closer the point is to a vertex (i.e. state), the higher the probability assigned to that state. For instance, if the belief is located exactly in the middle of the simplex (i.e. equidistant from all vertices), then all states are believed to be equally likely. If the belief is located exactly on a vertex (e.g. directly on top of MCI), then this corresponds to an absolutely certainty of MCI being the underlying state. Patients (a) and (b) are “typical” patients who fit well to the overall learned policy. The former is a normally-functioning patient believed to remain around the decision boundary in all visits except the first; appropriately, they are ordered an MRI during approximately half of their visits. The latter is believed to be deteriorating from MCI towards dementia, hence prescribed an MRI in all visits.
• Identifying Belated Diagnoses. In many diseases, early diagnosis is paramount [51]. INTERPOLE allows detecting patients who appear to have been diagnosed significantly later than they should have. Patient (c) was ordered an MRI in neither of their first two visits—despite the fact that the “typical” policy would have strongly recommended one. At a third visit, the MRI that was finally ordered led to near-certainty of cognitive impairment—but this could have been known 12 months earlier! In fact, among all ADNI patients in the database, 6.5% were subject to this apparent pattern of “belatedness”, where a late MRI is immediately followed by a jump to near-certain deterioration.
• Quantifying Value of Information. Patient (d) highlights how INTERPOLE can be used to quantify the value of a test in terms of its information gain. While the patient was ordered an MRI in all of their visits, it may appear (on the surface) that the third and final MRIs were redundant—since they had little apparent effect on beliefs. However, this is only true for the factual belief update that occurred according to the MRI outcome that was actually observed. Having access to an estimated model of how beliefs are updated in the form of decision dynamics, we can also compute counterfactual belief updates—that is, belief updates that could have occurred if the MRI outcome in question were different. In the particular case of patient (d), the tests were in fact highly informative, since (as it happened) the patient’s CDR-SB scores were suggestive of impairment, and (in the counterfactual) the doctor’s beliefs could have potentially leapt drastically towards MCI. On the other hand, among all MRIs ordered for ADNI patients, 19% may indeed have been unnecessary (i.e. triggering apparently insignificant belief-updates both factually and counterfactually).
Evaluating Interpretability through Clinician Surveys
To cement the argument for interpretability, we evaluated INTERPOLE by consulting nine clinicians from four different countries (United States, United Kingdom, the Netherlands, and China) for feedback. We focused on evaluating two aspects of interpretability regarding our method:
• Decision Dynamics: Whether the proposed representation of (possibly subjective) belief trajectories are preferable to raw action-observation trajectories—that is, whether decision dynamics are a transparent way of modeling how information is aggregated by decision-makers.
• Decision Boundaries: Whether the proposed representation of (possibly suboptimal) decision boundaries is a more transparent way of describing policies, compared with the representation via reward functions (the conventional approach in the policy learning literature).
For the first aspect, we presented to the participating clinicians the medical history of an example patient from ADNI represented in three ways using: only the most recent action-observation, the complete action-observation trajectory, as well as the belief trajectory as recovered by INTERPOLE. Result: All nine clinicians preferred the belief trajectories over action-observation trajectories.
For the second aspect, we showed them the policies learned from ADNI by both Off. PO-IRL and INTERPOLE, which parameterize policies in terms of reward functions and decision boundaries respectively. Result: Seven out of nine clinicians preferred the representation in terms of decision boundaries over that offered by reward functions. Further details can be found in Appendix D.
[Table residue: INTERPOLE row, 0.6 ± 0.1, 0.8 ± 0.2, 5.38 ± 1.14]
Accuracy Now, a reasonable question is whether such explainability comes at a cost: By learning an interpretable policy, do we sacrifice any accuracy? To be precise, we can ask the following questions:
• Is the belief-update process the same? For this, the appropriate metric is the discrepancy with respect to the sequence of beliefs—which we take to be $\sum_t D_{KL}(b_t \| \hat{b}_t)$ (Belief Mismatch).
• Is the belief-action mapping the same? Our metric is the discrepancy with respect to the policy distribution itself—which we take to be $\sum_t D_{KL}(\pi_b(\cdot|b_t) \| \hat{\pi}(\cdot|\hat{b}_t))$ (Policy Mismatch).
• Is the effective behavior the same? Here, the metrics are those measuring the discrepancy with respect to ground-truth actions observed (Action-Matching) for ADNI, and differences in stopping (Stopping Time Error) for DIAG.
Note that the action-matching and stopping time errors evaluate the quality of learned models in imitating per se, whereas belief mismatch and policy mismatch evaluate their quality in explaining.3
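For concreteness, here is a minimal sketch of the two explanation-quality metrics, assuming the true and estimated belief sequences have been aligned per time step; `pi_b` and `pi_hat` are assumed to be callables mapping a belief to an action distribution.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions, with smoothing for zero entries."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

def belief_mismatch(true_beliefs, est_beliefs):
    """sum_t KL(b_t || b̂_t) over one trajectory."""
    return sum(kl(b, b_hat) for b, b_hat in zip(true_beliefs, est_beliefs))

def policy_mismatch(pi_b, pi_hat, true_beliefs, est_beliefs):
    """sum_t KL(pi_b(.|b_t) || pî(.|b̂_t)) over one trajectory."""
    return sum(kl(pi_b(b), pi_hat(b_hat)) for b, b_hat in zip(true_beliefs, est_beliefs))
```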
The results are revealing, if not necessarily surprising. To begin, we observe for the ADNI setting in Table 2 that INTERPOLE performs first- or second-best across all three action-matching based metrics; where it comes second, it does so only by a small margin to R-BC (bearing in mind that R-BC is specifically optimized for nothing but action-matching). Similarly for the DIAG setting, we observe in Table 3 that INTERPOLE performs the best in terms of stopping-time error. In other words, it appears that little—if any—imitation accuracy is lost by using INTERPOLE as the model.
Perhaps more interestingly, we also see in Table 3 that the quality of internal explanations is superior—in terms of both belief mismatch and policy mismatch. In particular, even though the comparators PO-MB-IL, PO-IRL, and Off. PO-IRL are able to map decisions through beliefs, they inherit the conventional approach of attempting to estimate true environment dynamics, which is unnecessary—and possibly detrimental—if the goal is simply to find the most likely explanation of behavior. Notably, while the difference in imitation quality among the various benchmarks is not tremendous, with respect to explanation quality the gap is significant—and INTERPOLE holds a clear advantage.
Subjectivity Most significantly, we now show that INTERPOLE correctly recovers the underlying explanations, even if—or perhaps especially if—the agent is driven by subjective reasoning (i.e. with biased beliefs). This aspect sets INTERPOLE firmly apart from the alternative state-based techniques.
3Belief/policy mismatch are not applicable to ADNI since we have no access to ground-truth beliefs/policies.
Consider the BIAS environment: Here the true environment dynamics are unchanged from DIAG, but no longer coincide with the agent’s decision dynamics. Specifically, we let the behavioral policy be generated using erroneous parameters $O(z_-|a_=,s_+) < O^{\text{true}}(z_-|a_=,s_+)$; that is, the doctor now incorrectly believes the test to have a smaller false-negative rate than it does in reality—thus biasing their beliefs regarding patient states.
Now suppose we wish to recover the decision boundary from the demonstrations—that is, at what confidence threshold does a doctor commit to a healthy diagnosis? Figure 4 shows that both INTERPOLE and PO-MB-IL appear to recover the correct effective behavior: Starting from a neutral prior, doctors tend to stop and issue a “healthy” diagnosis if the first two observations return negative signals. Importantly, INTERPOLE also correctly locates the confidence target of ∼90%. On the other hand, PO-MB-IL—which first attempts to estimate the environment’s true parameters—ends up learning a policy on the basis of miscalibrated beliefs, thereby incorrectly explaining the same effective behavior with a lower confidence target of ∼70%. Finally, through similarly benchmarked metrics, Table 4 further confirms INTERPOLE’s advantage in providing the (correct) explanations.
6 DISCUSSION
Three points deserve brief comment in closing. First, while we gave prominence to several examples of how INTERPOLE may be used to audit and improve decision-making behavior, the potential applications are not limited to these use cases. Broadly, the chief proposition is that having quantitative descriptions of belief trajectories provides a concrete language that enables investigating observed actions, including outliers and variations in behavior: On that basis, different statistics and post-hoc analyses can be performed on top of INTERPOLE’s explanations (see Appendix C for examples).
Second, a natural question is whether it is reasonable to assume access to the state space. For this, allow us to reiterate a subtle distinction. There may be some “ground-truth” external state space that is arbitrarily complex or even impossible to discover, but—as explained—we are not interested in modeling this. Then, there is the internal state space that an agent uses to reason about decisions, which is what we are interested in. In this sense, it is certainly reasonable to assume access to the state space, which is often very clear from medical literature [42–47]. Since our goal is to obtain interpretable representations of decisions, it is therefore reasonable to cater precisely to these accepted state spaces that doctors can most readily reason with. Describing behavior in terms of beliefs over these (already well-understood) states is one of the main contributors to the interpretability of our method.
Finally, it is crucial to keep in mind that INTERPOLE does not claim to identify the real intentions of an agent: humans are complex, and rationality is—of course—bounded. What it does do, is to provide an interpretable explanation of how an agent is effectively behaving, which—as we have seen for diagnosis of ADNI patients—offers a yardstick by which to assess and compare trajectories and subgroups. In particular, INTERPOLE achieves this while adhering to our key criteria for healthcare settings, and without imposing assumptions of unbiasedness or optimality on behavioral policies.
ACKNOWLEDGMENTS
This work was supported by the US Office of Naval Research (ONR) and Alzheimer’s Research UK (ARUK). We thank the clinicians who participated in our survey, the reviewers for their valuable feedback, and the Alzheimer’s Disease Neuroimaging Initiative for providing the ADNI dataset.
A DETAILS OF THE ALGORITHM
A.1 FORWARD-BACKWARD PROCEDURE
We compute the necessary marginalizations of the joint distribution $\mathbb{P}(\bar{\mathcal{D}}|\mathcal{D},\hat\theta)$ using the forward-backward algorithm. Letting $x_{t:t'} = \{x_t, x_{t+1}, \ldots, x_{t'}\}$ for any time-indexed quantity $x_t$, the forward messages are defined as $\alpha_t(s) = \mathbb{P}(s_t = s, a_{1:t-1}, z_{1:t-1} \mid \hat\theta)$, which can be computed dynamically as
$$\begin{aligned}
\alpha_{t+1}(s') &= \mathbb{P}(s_{t+1} = s', a_{1:t}, z_{1:t} \mid \hat\theta) \\
&= \sum_{s \in S} \mathbb{P}(s_t = s, a_{1:t-1}, z_{1:t-1} \mid \hat\theta)\, \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, a_{1:t-1}, z_{1:t-1}, \hat\theta) \\
&= \sum_{s \in S} \alpha_t(s)\, \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s') \\
&\propto \sum_{s \in S} \alpha_t(s)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')
\end{aligned}$$
with initial case $\alpha_1(s) = \mathbb{P}(s_1 = s) = b_1(s)$. The backward messages are defined as $\beta_t(s) = \mathbb{P}(a_{t:\tau}, z_{t:\tau} \mid s_t = s, a_{1:t-1}, z_{1:t-1}, \hat\theta)$, which can also be computed dynamically as
$$\begin{aligned}
\beta_t(s) &= \mathbb{P}(a_{t:\tau}, z_{t:\tau} \mid s_t = s, a_{1:t-1}, z_{1:t-1}, \hat\theta) \\
&= \sum_{s' \in S} \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, a_{1:t-1}, z_{1:t-1}, \hat\theta)\, \mathbb{P}(a_{t+1:\tau}, z_{t+1:\tau} \mid s_{t+1} = s', a_{1:t}, z_{1:t}, \hat\theta) \\
&= \sum_{s' \in S} \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s') \\
&\propto \sum_{s' \in S} \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s')
\end{aligned}$$
with initial case $\beta_{\tau+1}(s) = \mathbb{P}(\varnothing \mid s_{\tau+1} = s, a_{1:\tau}, z_{1:\tau}, \hat\theta) = 1$.
Then, the marginal probability of being in state $s$ at time $t$ given the dataset $\mathcal{D}$ and the estimate $\hat\theta$ can be computed as
$$\gamma_t(s) = \mathbb{P}(s_t = s \mid \mathcal{D}, \hat\theta) = \mathbb{P}(s_t = s \mid a_{1:\tau}, z_{1:\tau}, \hat\theta) \propto \mathbb{P}(s_t = s, a_{1:\tau}, z_{1:\tau} \mid \hat\theta) = \alpha_t(s)\, \beta_t(s)$$
and similarly, the marginal probability of transitioning from state $s$ to state $s'$ at the end of time $t$ given the dataset $\mathcal{D}$ and the estimate $\hat\theta$ can be computed as
$$\begin{aligned}
\xi_t(s,s') &= \mathbb{P}(s_t = s, s_{t+1} = s' \mid \mathcal{D}, \hat\theta) \propto \mathbb{P}(s_t = s, s_{t+1} = s', a_{1:\tau}, z_{1:\tau} \mid \hat\theta) \\
&= \mathbb{P}(s_t = s, a_{1:t-1}, z_{1:t-1} \mid \hat\theta)\, \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, a_{1:t-1}, z_{1:t-1}, \hat\theta)\, \mathbb{P}(a_{t+1:\tau}, z_{t+1:\tau} \mid s_{t+1} = s', a_{1:t}, z_{1:t}, \hat\theta) \\
&= \alpha_t(s)\, \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s') \\
&\propto \alpha_t(s)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s').
\end{aligned}$$
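A minimal NumPy sketch of these recursions for a single trajectory follows. The array layouts T[a, s, s'], O[a, s', z], and the per-step renormalization (valid because all quantities above are only needed up to proportionality) are implementation assumptions.

```python
import numpy as np

def forward_backward(actions, observations, T, O, b1):
    """Marginals gamma_t(s) and xi_t(s, s') for one trajectory, via the proportional
    recursions above (the policy terms pi(a_t|b_t) cancel upon normalization)."""
    tau, S = len(actions), len(b1)
    alpha = np.zeros((tau + 1, S))
    beta = np.ones((tau + 1, S))
    alpha[0] = b1
    for t in range(tau):                                  # forward pass
        a, z = actions[t], observations[t]
        alpha[t + 1] = (alpha[t] @ T[a]) * O[a][:, z]
        alpha[t + 1] /= alpha[t + 1].sum()                # renormalize for stability
    for t in reversed(range(tau)):                        # backward pass
        a, z = actions[t], observations[t]
        beta[t] = T[a] @ (O[a][:, z] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)             # gamma_t(s)
    xi = np.zeros((tau, S, S))
    for t in range(tau):
        a, z = actions[t], observations[t]
        xi[t] = alpha[t][:, None] * T[a] * (O[a][:, z] * beta[t + 1])[None, :]
        xi[t] /= xi[t].sum()                              # xi_t(s, s')
    return gamma, xi
```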
A.2 GRADIENT-ASCENT PROCEDURE
Taking the gradient of the expected log-likelihood $Q(\theta;\hat\theta)$ in (5) with respect to the unknown parameters $\theta = (T, O, b_1, \eta, \{\mu_a\}_{a\in A})$ first requires computing the Jacobian matrix $\nabla_{b_t} b_{t'}$ for $1 \le t < t' \le \tau$, where $(\nabla_b b')_{ij} = \partial b'(i)/\partial b(j)$ for $i,j \in S$. This can be achieved dynamically as $\nabla_{b_t} b_{t'} = \nabla_{b_{t+1}} b_{t'}\, \nabla_{b_t} b_{t+1}$ with initial case $\nabla_{b_{t'}} b_{t'} = I$, where
$$\begin{aligned}
(\nabla_{b_t} b_{t+1})_{ij} &= \frac{\partial b_{t+1}(i)}{\partial b_t(j)} = \frac{\partial}{\partial b_t(j)} \left[ \frac{\sum_{x\in S} b_t(x)\, T(i|x,a_t)\, O(z_t|a_t,i)}{\sum_{x\in S}\sum_{x'\in S} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')} \right] \\
&= \frac{T(i|j,a_t)\, O(z_t|a_t,i)}{\sum_{x\in S}\sum_{x'\in S} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')} - \frac{\left[\sum_{x\in S} b_t(x)\, T(i|x,a_t)\, O(z_t|a_t,i)\right] \sum_{x'\in S} T(x'|j,a_t)\, O(z_t|a_t,x')}{\left(\sum_{x\in S}\sum_{x'\in S} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')\right)^2}.
\end{aligned}$$
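The same quotient rule translates directly into code. Below is a sketch under the same array-layout assumptions as before; `belief_jacobian` chains the one-step Jacobians as stated above.

```python
import numpy as np

def belief_jacobian_step(b, a, z, T, O):
    """One-step Jacobian (d b_{t+1}(i) / d b_t(j))_{ij} of the belief update."""
    num = (b @ T[a]) * O[a][:, z]                        # numerator N_i of b_{t+1}(i)
    den = num.sum()                                      # normalizing constant D
    dnum = T[a].T * O[a][:, z][:, None]                  # dN_i/db(j) = T(i|j,a) O(z|a,i)
    dden = (T[a] * O[a][:, z][None, :]).sum(axis=1)      # dD/db(j) = sum_x' T(x'|j,a) O(z|a,x')
    return dnum / den - np.outer(num, dden) / den ** 2

def belief_jacobian(beliefs, actions, observations, t, t_prime, T, O):
    """grad_{b_t} b_{t'} as a product of one-step Jacobians, with identity as initial case."""
    J = np.eye(len(beliefs[t]))
    for u in range(t, t_prime):
        J = belief_jacobian_step(beliefs[u], actions[u], observations[u], T, O) @ J
    return J
```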
A.2.1 PARTIAL DERIVATIVES
The derivative of $Q(\theta;\hat\theta)$ with respect to $T(s'|s,a)$ is
$$\begin{aligned}
\frac{\partial Q(\theta;\hat\theta)}{\partial T(s'|s,a)} &= \frac{\partial}{\partial T(s'|s,a)} \sum_{i=1}^n \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\} \sum_{x\in S}\sum_{x'\in S} \xi_t(x,x') \log T(x'|x,a) + \sum_{t=2}^{\tau} \log\pi(a_t|b_t) \right] \\
&= \sum_{i=1}^n \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\}\, \frac{\xi_t(s,s')}{T(s'|s,a)} + \sum_{t=2}^{\tau} \frac{\partial \log\pi(a_t|b_t)}{\partial T(s'|s,a)} \right] \\
&= \sum_{i=1}^n \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\}\, \frac{\xi_t(s,s')}{T(s'|s,a)} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t}\log\pi(a_t|b_t)\; \nabla_{b_{t'+1}} b_t\; \nabla_{T(s'|s,a)} b_{t'+1} \right],
\end{aligned}$$
where
$$\begin{aligned}
(\nabla_{b_t}\log\pi(a_t|b_t))_{1j} &= \frac{\partial \log\pi(a_t|b_t)}{\partial b_t(j)} = \frac{\partial}{\partial b_t(j)} \left( -\eta\|b_t-\mu_{a_t}\|^2 - \log\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2} \right) \\
&= -2\eta\,(b_t(j)-\mu_{a_t}(j)) + 2\eta \sum_{a\in A} \frac{e^{-\eta\|b_t-\mu_a\|^2}}{\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2}}\, (b_t(j)-\mu_a(j)) \\
&= -2\eta\,(b_t(j)-\mu_{a_t}(j)) + 2\eta \sum_{a\in A} \pi(a|b_t)\, (b_t(j)-\mu_a(j))
\end{aligned}$$
and
$$\begin{aligned}
(\nabla_{T(s'|s,a)} b_{t'+1})_{i1} &= \frac{\partial b_{t'+1}(i)}{\partial T(s'|s,a)} = \frac{\partial}{\partial T(s'|s,a)} \left( \frac{\sum_{x\in S} b_{t'}(x)\, T(i|x,a_{t'})\, O(z_{t'}|a_{t'},i)}{\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a_{t'})\, O(z_{t'}|a_{t'},x')} \right) \\
&= \mathbb{I}\{a_{t'} = a\} \left( \frac{\mathbb{I}\{i = s'\}\, b_{t'}(s)\, O(z_{t'}|a,s')}{\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a)\, O(z_{t'}|a,x')} - \frac{\left[\sum_{x\in S} b_{t'}(x)\, T(i|x,a)\, O(z_{t'}|a,i)\right] b_{t'}(s)\, O(z_{t'}|a,s')}{\left(\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a)\, O(z_{t'}|a,x')\right)^2} \right).
\end{aligned}$$
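The closed-form policy log-gradient above is straightforward to implement. A minimal sketch, assuming `mu` is an $|A| \times |S|$ array of mean vectors:

```python
import numpy as np

def policy(b, mu, eta):
    """pi(a|b) from Equation 2: a softmax over negative squared distances to the mean vectors."""
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())                    # shift for numerical stability
    return p / p.sum()

def grad_b_log_policy(b, a_t, mu, eta):
    """(grad_b log pi(a_t|b))_j = -2*eta*(b_j - mu_{a_t,j}) + 2*eta*sum_a pi(a|b)*(b_j - mu_{a,j})."""
    pi = policy(b, mu, eta)
    return -2 * eta * (b - mu[a_t]) + 2 * eta * (pi[:, None] * (b[None, :] - mu)).sum(axis=0)
```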
The derivative of $Q(\theta;\hat\theta)$ with respect to $O(z|a,s')$ is
$$\begin{aligned}
\frac{\partial Q(\theta;\hat\theta)}{\partial O(z|a,s')} &= \frac{\partial}{\partial O(z|a,s')} \sum_{i=1}^n \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\} \sum_{x'\in S} \gamma_{t+1}(x') \log O(z|a,x') + \sum_{t=2}^{\tau} \log\pi(a_t|b_t) \right] \\
&= \sum_{i=1}^n \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\}\, \frac{\gamma_{t+1}(s')}{O(z|a,s')} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t}\log\pi(a_t|b_t)\; \nabla_{b_{t'+1}} b_t\; \nabla_{O(z|a,s')} b_{t'+1} \right],
\end{aligned}$$
where
$$\begin{aligned}
(\nabla_{O(z|a,s')} b_{t'+1})_{i1} &= \frac{\partial b_{t'+1}(i)}{\partial O(z|a,s')} = \frac{\partial}{\partial O(z|a,s')} \left( \frac{\sum_{x\in S} b_{t'}(x)\, T(i|x,a_{t'})\, O(z_{t'}|a_{t'},i)}{\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a_{t'})\, O(z_{t'}|a_{t'},x')} \right) \\
&= \mathbb{I}\{a_{t'} = a, z_{t'} = z\} \left( \frac{\mathbb{I}\{i = s'\} \sum_{x\in S} b_{t'}(x)\, T(s'|x,a)}{\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a)\, O(z|a,x')} - \frac{\left[\sum_{x\in S} b_{t'}(x)\, T(i|x,a)\, O(z|a,i)\right] \sum_{x\in S} b_{t'}(x)\, T(s'|x,a)}{\left(\sum_{x\in S}\sum_{x'\in S} b_{t'}(x)\, T(x'|x,a)\, O(z|a,x')\right)^2} \right).
\end{aligned}$$
The derivative of $Q(\theta;\hat\theta)$ with respect to $b_1(s)$ is
$$\begin{aligned}
\frac{\partial Q(\theta;\hat\theta)}{\partial b_1(s)} &= \frac{\partial}{\partial b_1(s)} \sum_{i=1}^n \left[ \sum_{x\in S} \gamma_1(x) \log b_1(x) + \sum_{t=1}^{\tau} \log\pi(a_t|b_t) \right] \\
&= \sum_{i=1}^n \left[ \frac{\gamma_1(s)}{b_1(s)} + \sum_{t=1}^{\tau} \nabla_{b_t}\log\pi(a_t|b_t)\; \nabla_{b_1(s)} b_t \right],
\end{aligned}$$
where $(\nabla_{b_1(s)} b_t)_{i1} = (\nabla_{b_1} b_t)_{is}$.
The derivative of $Q(\theta;\hat\theta)$ with respect to $\eta$ is
$$\begin{aligned}
\frac{\partial Q(\theta;\hat\theta)}{\partial \eta} &= \frac{\partial}{\partial\eta} \sum_{i=1}^n \sum_{t=1}^{\tau} \log\pi(a_t|b_t) = \sum_{i=1}^n \sum_{t=1}^{\tau} \frac{\partial}{\partial\eta} \left( -\eta\|b_t-\mu_{a_t}\|^2 - \log\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2} \right) \\
&= \sum_{i=1}^n \sum_{t=1}^{\tau} \left( -\|b_t-\mu_{a_t}\|^2 + \sum_{a\in A} \frac{e^{-\eta\|b_t-\mu_a\|^2}}{\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2}}\, \|b_t-\mu_a\|^2 \right) \\
&= \sum_{i=1}^n \sum_{t=1}^{\tau} \left( -\|b_t-\mu_{a_t}\|^2 + \sum_{a\in A} \pi(a|b_t)\, \|b_t-\mu_a\|^2 \right).
\end{aligned}$$
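This expression is simple to evaluate once the beliefs of a trajectory have been computed. A minimal sketch:

```python
import numpy as np

def policy(b, mu, eta):                                   # as in the earlier sketch
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def dQ_deta(beliefs, actions, mu, eta):
    """dQ/d(eta) for one trajectory: sum_t [ -||b_t - mu_{a_t}||^2 + sum_a pi(a|b_t)||b_t - mu_a||^2 ]."""
    total = 0.0
    for b, a_t in zip(beliefs, actions):
        d2 = ((b[None, :] - mu) ** 2).sum(axis=1)         # squared distances to all mean vectors
        total += -d2[a_t] + policy(b, mu, eta) @ d2
    return total
```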
Finally, the derivative of $Q(\theta;\hat\theta)$ with respect to $\mu_a(s)$ is
$$\begin{aligned}
\frac{\partial Q(\theta;\hat\theta)}{\partial \mu_a(s)} &= \frac{\partial}{\partial\mu_a(s)} \sum_{i=1}^n \sum_{t=1}^{\tau} \log\pi(a_t|b_t) = \sum_{i=1}^n \sum_{t=1}^{\tau} \frac{\partial}{\partial\mu_a(s)} \left( -\eta\|b_t-\mu_{a_t}\|^2 - \log\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2} \right) \\
&= \sum_{i=1}^n \sum_{t=1}^{\tau} \left( 2\eta\,\mathbb{I}\{a_t = a\}\,(b_t(s)-\mu_a(s)) - 2\eta\, \frac{e^{-\eta\|b_t-\mu_a\|^2}}{\sum_{a'\in A} e^{-\eta\|b_t-\mu_{a'}\|^2}}\, (b_t(s)-\mu_a(s)) \right) \\
&= \sum_{i=1}^n \sum_{t=1}^{\tau} \left( 2\eta\,\mathbb{I}\{a_t = a\}\,(b_t(s)-\mu_a(s)) - 2\eta\,\pi(a|b_t)\,(b_t(s)-\mu_a(s)) \right) \\
&= \sum_{i=1}^n \sum_{t=1}^{\tau} 2\eta\,(\mathbb{I}\{a_t = a\} - \pi(a|b_t))\,(b_t(s)-\mu_a(s)).
\end{aligned}$$
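A sketch of this gradient for all $(a, s)$ pairs at once:

```python
import numpy as np

def policy(b, mu, eta):                                   # as in the earlier sketch
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def dQ_dmu(beliefs, actions, mu, eta):
    """dQ/d(mu_a(s)) = sum_t 2*eta*(I{a_t=a} - pi(a|b_t))*(b_t(s) - mu_a(s)), stacked over (a, s)."""
    grad = np.zeros_like(mu)
    for b, a_t in zip(beliefs, actions):
        residual = np.zeros(len(mu))
        residual[a_t] = 1.0                               # the indicator I{a_t = a}
        residual -= policy(b, mu, eta)
        grad += 2 * eta * residual[:, None] * (b[None, :] - mu)
    return grad
```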
B PROOFS OF PROPOSITIONS
B.1 PROOF OF PROPOSITION 1
First, denote with $q_R^* \in \mathbb{R}^{\Delta(S)\times A}$ the optimal (belief-state) q-value function with respect to the underlying (state-space) reward function $R \in \mathbb{R}^{S\times A}$, and denote with $v_R^* \in \mathbb{R}^{\Delta(S)}$ the corresponding optimal value function $v_R^*(b) = \operatorname{softmax}_{a'\in A} q_R^*(b,a')$. Now, fix some component $i$ of the parameters $\theta$; we wish to compute the derivative of $\log\pi(a|b)$ with respect to $\theta_i$:
$$\begin{aligned}
\frac{\partial}{\partial\theta_i}\log\pi(a|b) &= \frac{\partial}{\partial\theta_i}\left( q_R^*(b,a) - v_R^*(b) \right) = \frac{\partial}{\partial\theta_i}\left( q_R^*(b,a) - \log\sum_{a'\in A} e^{q_R^*(b,a')} \right) \\
&= \frac{\partial}{\partial\theta_i} q_R^*(b,a) - \sum_{a'\in A} \left( \frac{e^{q_R^*(b,a')}}{\sum_{a''\in A} e^{q_R^*(b,a'')}} \cdot \frac{\partial}{\partial\theta_i} q_R^*(b,a') \right) \\
&= \frac{\partial}{\partial\theta_i} q_R^*(b,a) - \sum_{a'\in A} \pi(a'|b)\, \frac{\partial}{\partial\theta_i} q_R^*(b,a') = \frac{\partial}{\partial\theta_i} q_R^*(b,a) - \mathbb{E}_{a'\sim\pi(\cdot|b)}\left[ \frac{\partial}{\partial\theta_i} q_R^*(b,a') \right]
\end{aligned}$$
where we make explicit here the dependence on $R$, but note that it is itself a parameter; that is, $R = \theta_j$ for some $j$. We see that this in turn requires computing the partial derivative $\partial q_R^*(b,a)/\partial\theta_i$. Let $\gamma$ be some appropriate discount rate, and denote with $\rho_R \in \mathbb{R}^{\Delta(S)\times A}$ the effective (belief-state) reward $\rho_R(b,a) \doteq \sum_{s\in S} b(s) R(s,a)$ corresponding to $R$. Further, let
$$\mathbb{P}(b'|b,a) = \sum_{z\in Z} \mathbb{P}(z|b,a)\, \mathbb{P}(b'|b,a,z) = \sum_{z\in Z} \left( \sum_{s\in S}\sum_{s'\in S} b(s)\, T(s'|s,a)\, O(z|a,s') \right) \delta\!\left( b' - \frac{\sum_{s\in S} b(s)\, T(\cdot|s,a)\, O(z|a,\cdot)}{\sum_{s\in S}\sum_{s'\in S} b(s)\, T(s'|s,a)\, O(z|a,s')} \right)$$
denote the (belief-state) transition probabilities induced by $T$ and $O$, where $\delta$ is the Dirac delta function such that $\delta(b')$ integrates to one if and only if $b' = 0$ is included in the integration region. Then the partial $\partial q_R^*(b,a)/\partial\theta_i$ is given as follows:
$$\begin{aligned}
\frac{\partial}{\partial\theta_i} q_R^*(b,a) &= \frac{\partial}{\partial\theta_i} \left( \rho_R(b,a) + \gamma \int_{b'\in\Delta(S)} \mathbb{P}(b'|b,a)\, v_R^*(b')\, db' \right) \\
&= \underbrace{\frac{\partial}{\partial\theta_i} \rho_R(b,a) + \gamma \int_{b'\in\Delta(S)} v_R^*(b')\, \frac{\partial}{\partial\theta_i} \mathbb{P}(b'|b,a)\, db'}_{\rho_{R,i}(b,a)} + \gamma \int_{b'\in\Delta(S)} \mathbb{P}(b'|b,a)\, \mathbb{E}_{a'\sim\pi(\cdot|b')}\left[ \frac{\partial}{\partial\theta_i} q_R^*(b',a') \right] db'
\end{aligned}$$
from which we observe that $\partial q_R^*(b,a)/\partial\theta_i$ is a fixed point of a certain Bellman-like operator. Specifically, fix any function $f \in \mathbb{R}^{\Delta(S)\times A}$; then $\partial q_R^*(b,a)/\partial\theta_i$ is the fixed point of the operator $\mathcal{T}^\pi_{R,i} : \mathbb{R}^{\Delta(S)\times A} \to \mathbb{R}^{\Delta(S)\times A}$ defined as follows:
$$(\mathcal{T}^\pi_{R,i} f)(b,a) = \rho_{R,i}(b,a) + \gamma \int_{b'\in\Delta(S)} \mathbb{P}(b'|b,a) \sum_{a'} \pi(a'|b')\, f(b',a')\, db'$$
which takes the form of a “generalized” Bellman operator on q-functions for POMDPs, where for brevity here we have written $\rho_{R,i}(b,a)$ to denote the expression $\frac{\partial}{\partial\theta_i}\rho_R(b,a) + \gamma\int_{b'\in\Delta(S)} v_R^*(b')\, \frac{\partial}{\partial\theta_i}\mathbb{P}(b'|b,a)\, db'$. Mathematically, this means that a recursive procedure can in theory be defined—cf. “$\nabla q$-iteration”, analogous to q-iteration; see e.g. [52]—that may converge on the gradient under appropriate conditions. Computationally, however, this also means that taking a single gradient is at least as hard as solving POMDPs in general.
Further, note that while typical POMDP solvers operate by taking advantage of the convexity property of $\rho_R(b,a)$—see e.g. [53]—here there is no such property to make use of: In general, it is not the case that $\rho_{R,i}(b,a)$ is convex. To see this, consider the following counterexample: Let $S \doteq \{s_-, s_+\}$, $A \doteq \{a_=\}$, $Z \doteq \{z_-, z_+\}$, $T(s_-|s_-,a_=) = T(s_+|s_+,a_=) = p = 1$, $O(z_-|a_=,s_-) = O(z_+|a_=,s_+) = 1/4$, $b_1(s_+) = 1/2$, $R(s_-,a_=) = 0$, $R(s_+,a_=) = 1/2$, and $\gamma = 1/2$. For simplicity, we will simply write $b$ instead of $b(s_+)$. Note that:
$$\begin{aligned}
q_R^*(b,a_=) &= b \sum_{t=0}^{\infty} \gamma^t R(s_+,a_=) + (1-b) \sum_{t=0}^{\infty} \gamma^t R(s_-,a_=) = b \\
v_R^*(b) &= \log\sum_{a\in\{a_=\}} e^{q_R^*(b,a)} = \log e^{q_R^*(b,a_=)} = q_R^*(b,a_=) = b \\
\mathbb{P}(z_+|b,a_=) &= \tfrac{1}{4}\,(bp + (1-b)(1-p)) + \tfrac{3}{4}\,(b(1-p) + (1-b)p) = \tfrac{1}{4}\, b + \tfrac{3}{4}\,(1-b) \\
\mathbb{P}(z_-|b,a_=) &= \tfrac{1}{4}\,(b(1-p) + (1-b)p) + \tfrac{3}{4}\,(bp + (1-b)(1-p)) = \tfrac{1}{4}\,(1-b) + \tfrac{3}{4}\, b \\
b'|b,a_=,z_+ &= \frac{\mathbb{P}(s' = s_+, z_+|b,a_=)}{\mathbb{P}(z_+|b,a_=)} = \frac{\tfrac{1}{4}\,(bp + (1-b)(1-p))}{\mathbb{P}(z_+|b,a_=)} = \frac{\tfrac{1}{4}\, b}{\tfrac{1}{4}\, b + \tfrac{3}{4}\,(1-b)} \\
b'|b,a_=,z_- &= \frac{\mathbb{P}(s' = s_+, z_-|b,a_=)}{\mathbb{P}(z_-|b,a_=)} = \frac{\tfrac{3}{4}\,(bp + (1-b)(1-p))}{\mathbb{P}(z_-|b,a_=)} = \frac{\tfrac{3}{4}\, b}{\tfrac{1}{4}\,(1-b) + \tfrac{3}{4}\, b}
\end{aligned}$$
(where the final equalities are evaluated at $p = 1$), and
$$\mathbb{P}(b'|b,a_=) = \begin{cases} \mathbb{P}(z_+|b,a_=) & \text{if } b' = b'|b,a_=,z_+ \\ \mathbb{P}(z_-|b,a_=) & \text{if } b' = b'|b,a_=,z_- \\ 0 & \text{otherwise.} \end{cases}$$
Now, let the elements of $\theta$ be ordered such that $p$ is the $i$-th element, and consider $\rho_{R,i}(b,a_=)$, evaluated at $p = 1$:
$$\begin{aligned}
\rho_{R,i}(b,a_=) &\doteq \frac{\partial}{\partial p}\rho_R(b,a_=) + \gamma \int_{b'\in\Delta(S)} v_R^*(b')\, \frac{\partial}{\partial p}\mathbb{P}(b'|b,a_=)\, db' \\
&= \frac{1}{2}\left( v_R^*(b'|b,a_=,z_+)\, \frac{\partial}{\partial p}\mathbb{P}(z_+|b,a_=) + v_R^*(b'|b,a_=,z_-)\, \frac{\partial}{\partial p}\mathbb{P}(z_-|b,a_=) \right) \\
&= \frac{1}{2}\left( \frac{\tfrac{1}{4}\, b}{\tfrac{1}{4}\, b + \tfrac{3}{4}\,(1-b)} \left( \tfrac{1}{4}\, b - \tfrac{1}{4}\,(1-b) - \tfrac{3}{4}\, b + \tfrac{3}{4}\,(1-b) \right) + \frac{\tfrac{3}{4}\, b}{\tfrac{1}{4}\,(1-b) + \tfrac{3}{4}\, b} \left( -\tfrac{1}{4}\, b + \tfrac{1}{4}\,(1-b) + \tfrac{3}{4}\, b - \tfrac{3}{4}\,(1-b) \right) \right)
\end{aligned}$$
Clearly $\rho_{R,i}(b,a_=)$ cannot be convex, since $\rho_{R,i}(\tfrac{1}{2},a_=) = 0$ and $\rho_{R,i}(1,a_=) = 0$ but $\rho_{R,i}(\tfrac{3}{4},a_=) > 0$.
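The three claimed values are easy to verify numerically. A small sketch using exact rational arithmetic (evaluating the final expression above, with $\partial\rho_R/\partial p = 0$ since $\rho_R$ does not depend on $p$):

```python
from fractions import Fraction as F

def rho_i(b):
    """rho_{R,i}(b, a_=) from the counterexample, evaluated at p = 1 (gamma = 1/2)."""
    b = F(b)
    v_plus = (F(1, 4) * b) / (F(1, 4) * b + F(3, 4) * (1 - b))    # v*(b'|z+) = b'|z+
    v_minus = (F(3, 4) * b) / (F(1, 4) * (1 - b) + F(3, 4) * b)   # v*(b'|z-) = b'|z-
    dP_plus = F(1, 4) * b - F(1, 4) * (1 - b) - F(3, 4) * b + F(3, 4) * (1 - b)  # d/dp P(z+|b)
    dP_minus = -dP_plus                                           # d/dp P(z-|b)
    return F(1, 2) * (v_plus * dP_plus + v_minus * dP_minus)

# Prints 0, 1/20, 0: zero at both b = 1/2 and b = 1 yet positive at b = 3/4,
# so rho_{R,i} cannot be convex.
print(rho_i(F(1, 2)), rho_i(F(3, 4)), rho_i(F(1)))
```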
B.2 PROOF OF PROPOSITION 2
In contrast, unlike the indirect q-value parameterization above (which by itself requires approximate solutions to optimization problems), the mean-vector parameterization of INTERPOLE maps beliefs
directly to distributions over actions. Now, the derivatives of log π(a|b) are given as closed-form expressions in Appendices A.1 and A.2.
In particular, note that each bt is computed through a feed-forward structure, and therefore can easily be differentiated with respect to the unknown parameters θ through backpropagation through time: Each time step leading up to an action corresponds to a “hidden layer” in a neural network, and the initial belief corresponds to the “features” that are fed into the network; the transition and observation functions correspond to the weights between layers, the beliefs at each time step correspond to the activations between layers, the actions themselves correspond to class labels, and the action likelihood corresponds to the loss function (see Appendices A.1 and A.2).
Finally, note that computing all of the forward-backward messages $\alpha_t$ and $\beta_t$ in Appendix A.1 has complexity $O(n\tau S^2)$, computing all of the Jacobian matrices $\nabla_{b_t} b_{t'}$ in Appendix A.2 has complexity $O(n\tau^2 S^3)$, and computing all of the partial derivatives given in Appendix A.2 has complexity at most $O(n\tau^2 S^2 AZ)$. Hence, fully differentiating the expected log-likelihood $Q(\theta;\hat\theta)$ with respect to the unknown parameters $\theta$ has an overall (polynomial) complexity of $O(n\tau^2 S^2 \max\{S, AZ\})$.
C EXPERIMENT PARTICULARS
C.1 DETAILS OF DECISION ENVIRONMENTS
ADNI We have filtered out visits without a CDR-SB measurement, which is almost always taken, and visits that do not occur immediately after the six-month period following the previous visit but instead occur after 12 months or later. This filtering leaves 1,626 patients with typically three consecutive visits each. For MRI outcomes, average is considered to be within half a standard deviation of the population mean. Since there are only two actions in this scenario, we have set η = 1 and relied on the distance between the two means to adjust for the stochasticity of the estimated policy—closer means being somewhat equivalent to a smaller η.
DIAG We set $T^{\text{true}}(s_-|s_-,\cdot) = T^{\text{true}}(s_+|s_+,\cdot) = 1$, meaning patients do not heal or contract the disease as the diagnosis progresses, $O^{\text{true}}(z_-|a_=,s_+) = O^{\text{true}}(z_+|a_=,s_-) = 0.4$, meaning the test has false-negative and false-positive rates of 40%, and $b_1^{\text{true}}(s_+) = 0.5$. Moreover, the behavioral policy is given by $T = T^{\text{true}}$, $O = O^{\text{true}}$, $b_1 = b_1^{\text{true}}$, $\eta = 10$, $\mu_{a_=}(s_+) = 0.5$, and $\mu_{a_-}(s_-) = \mu_{a_+}(s_+) = 1.3$. Intuitively, doctors continue monitoring the patient until they are 90% confident in declaring a final diagnosis. In this scenario, $T$ and $\eta$ are assumed to be known. The behavior dataset is generated as 100 demonstration trajectories.
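A minimal sketch of this generative process follows. The integer encodings (states $s_-/s_+$ as 0/1, observations $z_-/z_+$ as 0/1, actions $a_-/a_=/a_+$ as 0/1/2) and the two-dimensional form of the mean vectors (each sums to one, so $\mu_{a_-} = (1.3, -0.3)$ over $(s_-, s_+)$, etc.) are implementation assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

O = np.array([[0.6, 0.4],     # O(z|a=, s-): false-positive rate 0.4
              [0.4, 0.6]])    # O(z|a=, s+): false-negative rate 0.4
mu = np.array([[1.3, -0.3],   # mu_{a-}
               [0.5, 0.5],    # mu_{a=}
               [-0.3, 1.3]])  # mu_{a+}
eta = 10.0

def pi(b_plus):
    b = np.array([1.0 - b_plus, b_plus])
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def simulate_trajectory():
    s = rng.integers(2)        # b1(s+) = 0.5; the state is static (T = identity)
    b_plus, traj = 0.5, []
    while True:
        a = rng.choice(3, p=pi(b_plus))
        if a != 1:             # a- or a+: stop and declare a final diagnosis
            return traj + [(a, None)]
        z = rng.choice(2, p=O[s])
        traj.append((a, z))
        num = b_plus * O[1, z]                        # belief update under static dynamics
        b_plus = num / (num + (1.0 - b_plus) * O[0, z])

demonstrations = [simulate_trajectory() for _ in range(100)]
```

Note that with these mean vectors the boundary between $\mu_{a_=}$ and $\mu_{a_+}$ falls exactly at $b(s_+) = 0.9$ (and symmetrically at $b(s_+) = 0.1$ for $\mu_{a_-}$), which is the 90% confidence target described above.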
BIAS We set all parameters exactly the same way we did in DIAG with one important exception: now $O(z_-|a_=,s_+) = 0.2$ while it is still the case that $O^{\text{true}}(z_-|a_=,s_+) = 0.4$, meaning $O \neq O^{\text{true}}$ anymore. In this scenario, $b_1$ is also assumed to be known (in addition to $T$ and $\eta$) to avoid any invariances between $b_1$ and $O$ that we have encountered during training. The behavioral dataset is generated as 1000 demonstration trajectories.
C.2 DETAILS OF BENCHMARK ALGORITHMS
R-BC We train an RNN whose inputs are the observed histories $h_t$ and whose outputs are the predicted probabilities $\hat\pi(a|h_t)$ of taking action $a$ given the observed history $h_t$. The network consists of an LSTM unit of size 64 and a fully-connected hidden layer of size 64. We minimize the cross-entropy loss $L = -\sum_{i=1}^n \sum_{t=1}^{\tau} \sum_{a\in A} \mathbb{I}\{a_t = a\} \log\hat\pi(a|h_t)$ using the Adam optimizer with learning rate 0.001 until convergence, that is, when the cross-entropy loss does not improve for 100 consecutive iterations.
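A sketch of this baseline in PyTorch is given below; the input encoding of each (action, observation) step and the dimensions other than those stated above are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RBC(nn.Module):
    """R-BC baseline: an LSTM over observed histories followed by a fully-connected
    hidden layer, producing logits for pî(a | h_t) at every time step."""
    def __init__(self, input_dim, n_actions):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, 64, batch_first=True)
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, n_actions))

    def forward(self, histories):           # histories: (batch, time, input_dim)
        h, _ = self.lstm(histories)
        return self.head(h)

model = RBC(input_dim=16, n_actions=2)      # input_dim here is illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
loss_fn = nn.CrossEntropyLoss()

def train_step(histories, actions):         # actions: (batch, time) integer labels
    logits = model(histories)
    loss = loss_fn(logits.reshape(-1, logits.shape[-1]), actions.reshape(-1))
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```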
PO-IRL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. The reward parameter $R$ is initialized as $\hat R_0(s,a) = \varepsilon_{s,a}$ where $\varepsilon_{s,a} \sim \mathcal{N}(0, 0.001^2)$. Then, it is estimated via Markov chain Monte Carlo (MCMC) sampling, during which new candidate samples are generated by adding Gaussian noise with standard deviation 0.001 to the last sample. A final estimate is formed by averaging every tenth sample among the second set of 500 samples, ignoring the first 500 samples. In order to compute optimal q-values, we have used an off-the-shelf POMDP solver available at https://www.pomdp.org/code/index.html.
Off. PO-IRL All parameters are initialized exactly the same way as in PO-IRL. Then, both the IOHMM parameters T , O, and b1, and the reward parameter R are estimated jointly via MCMC sampling. When generating new candidate samples, with equal probabilities, we have either sampled new T , O, and b1 from IOHMM posterior (without changing R) or obtained a new R the same way we did in PO-IRL (without changing T , O, and b1). A final estimate is formed the same way as in PO-IRL.
PO-MB-IL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. Given the IOHMM parameters, we parameterized policies the same way we did in INTERPOLE, that is, as described in (2). The policy parameters $\{\mu_a\}_{a\in A}$ are initialized as $\hat\mu_a^0(s) = (1/|S| + \varepsilon_{a,s}) / \sum_{s'\in S}(1/|S| + \varepsilon_{a,s'})$ where $\varepsilon_{a,s'} \sim \mathcal{N}(0, 0.001^2)$. Then, they are estimated according solely to the action likelihoods in (4) using the EM algorithm. The expected log-posterior is maximized using the Adam optimizer with learning rate 0.001 until convergence, that is, when the expected log-posterior does not improve for 100 consecutive iterations.
INTERPOLE All parameters are initialized exactly the same way as in PO-MB-IL. Then, the IOHMM parameters T , O, and b1, and the policy parameters {µa}a∈A are estimated jointly according to both the action likelihoods and the observation likelihoods in (4). The expected log-posterior is again maximized using Adam optimizer with learning rate 0.001 until convergence.
C.3 FURTHER EXAMPLE: POST-HOC ANALYSES
Policy representations learned by INTERPOLE provide users with means to derive concrete criteria that describe observed behavior in objective terms. These criteria, in turn, enable quantitative analyses of the behavior using conventional statistical methods. For ADNI, we have considered two such criteria: belatedness of individual diagnoses and informativeness of individual tests. Both of these criteria are relevant to the discussion of early diagnosis, which is paramount for Alzheimer’s disease [51], as we have already mentioned in the illustrative examples.
Formally, we consider the final diagnosis of a patient to be belated if (i) the patient was not ordered an MRI in one of their visits despite the fact that an MRI being ordered was the most likely outcome according to the policy estimated by INTERPOLE, and (ii) the patient was ordered an MRI in a later visit that led to a near-certain diagnosis with at least 90% confidence according to the underlying beliefs estimated by INTERPOLE. We consider an MRI to be uninformative if it neither (factually) caused nor could have (counterfactually) caused a significant change in the underlying belief-state of the patient, where a change counts as insignificant if it is at least half a standard deviation below the mean factual change in beliefs estimated by INTERPOLE.
Having defined belatedness and informativeness, one can investigate the frequency of belated diagnoses and uninformative MRIs in different cohorts of patients to see how practice varies from one cohort to another. In Table 5, we do so for six cohorts: all of the patients, patients who are over 75 years old, patients with the apoE4 risk factor for dementia, patients with signs of MCI or dementia since their very first visit, female patients, and male patients. Note that increasing age, the apoE4 allele, and female gender are known to be associated with increased risk of Alzheimer’s disease [54–57]. For instance, we see that uninformative MRIs are much more prevalent among patients with signs of MCI or dementia since their first visit. This could potentially be because these patients are monitored much more closely than usual given their condition.
Alternatively, one can divide patients into cohorts based on whether they have a belated diagnoses or an uninformative MRI to see which features these criteria correlate with more. We do so in Table 6. For instance, we see that a considerable percentage of belated diagnoses are seen among male patients.
C.4 FURTHER EXAMPLE: DECISION TREES
Clinical practice guidelines are often given in the form of decision trees, which usually have vague elements that require the judgement of the practitioner [58, 59]. For example, the guideline could ask the practitioner to quantify risks, side effects, or improvements in subjective terms such as being significant, serious, or potential. Using direct policy learning, how vague elements like these are commonly resolved in practice can be learned in objective terms.
Formulating policies in terms of IOHMMs and decision boundaries is expressive enough to model decision trees. An IOHMM with deterministic observations, that is O(z|a, s′) = 1 for some z ∈ Z and for all a ∈ A, s ∈ S, essentially describes a finite-state machine, inputs of which are equivalent to the observations. Similarly, a deterministic decision tree can be defined as a finite-state machine with no looping sequence of transitions. The case where the observations are probabilistic rather than deterministic correspond to the case where the decision tree is traversed in a probabilistic way so that each path down the tree has a probability associated with it at each step of the traversal.
As a concrete example of modeling decision trees in terms of IOHMMs, consider the scenario of diagnosing a disease with two sub-types: Disease-A and Disease-B. Figure 5a depicts the policy of the doctors in the form of a decision tree. Each newly-arriving patient is first tested for the disease in a general sense without any distinction between the two sub-types it has. The patient is then tested
for a specific sub-type of the disease only if the doctors deem there is a significant risk that the patient is diseased. Note that which exact level of confidence constitutes as a significant risk is left vague in the decision tree. By modeling this scenario using our framework, we can learn: (i) how the risk is determined based on initial test results and (ii) what amount of risk is considered significant enough to require a subsequent test for the sub-type.
Let S = {INI, HLT, DIS, DSA, DSB}, where INI denotes that the patient has newly arrived, HLT denotes that the patient is healthy, DIS denotes that the patient is diseased, DSA denotes that the patient has Disease-A, and DSB denotes that the patient has Disease-B. Figure 5b depicts the state space S with all possible transitions. Note that the initial belief b1 is such that b1(INI) = 1. Let A = {TST-DIS, TST-TYP, STP-HLT, STP-DSA, STP-DSB}, where TST-DIS denotes testing for the disease, TST-TYP denotes testing for the sub-type of the disease, and the remaining actions denote stopping and diagnosing the patient with one of the terminal states, namely states HLT, DSA, and DSB.
After taking action a1 = TST-DIS and observing some initial test result z1 ∈ Z, the risk of disease, which is the probability that the patient is diseased, can be calculated with a simple belief update:
$$b_2(\text{DIS}) \propto \sum_{s\in S} b_1(s)\, T(\text{DIS}|s,\text{TST-DIS})\, O(z_1|\text{TST-DIS},\text{DIS}) = T(\text{DIS}|\text{INI},\text{TST-DIS})\, O(z_1|\text{TST-DIS},\text{DIS})\,.$$
Moreover, we can say that the doctors are more likely to test for the sub-type of the disease as opposed to stopping and diagnosing the patient as healthy, that is $\pi_b(\text{TST-TYP}|b_2) > \pi_b(\text{STP-HLT}|b_2)$, when
$$b_2(\text{DIS}) > \frac{\mu_{\text{TST-TYP}}(\text{DIS}) + \mu_{\text{STP-HLT}}(\text{DIS})}{2}$$
assuming $\mu_{\text{TST-TYP}}(\text{DIS}) > \mu_{\text{STP-HLT}}(\text{DIS})$. Note that there are only two possible actions at the second time step: actions TST-TYP and STP-HLT.
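Here is a minimal sketch of resolving the "significant risk" threshold from learned parameters; the state/action encodings and the nested-list layout of the estimated dynamics are illustrative assumptions.

```python
def disease_risk(z1, T_hat, O_hat, a_tst_dis=0, s_ini=0, s_dis=2):
    """b_2(DIS) after action TST-DIS with initial test result z1, given b_1(INI) = 1.
    T_hat[a][s][s2] and O_hat[a][s2][z] are the estimated decision dynamics."""
    states = range(len(T_hat[a_tst_dis][s_ini]))
    num = T_hat[a_tst_dis][s_ini][s_dis] * O_hat[a_tst_dis][s_dis][z1]
    den = sum(T_hat[a_tst_dis][s_ini][s2] * O_hat[a_tst_dis][s2][z1] for s2 in states)
    return num / den

def subtype_test_more_likely(b2_dis, mu_tst_typ_dis, mu_stp_hlt_dis):
    """The learned resolution of the vague guideline: testing for the sub-type is more
    likely than declaring the patient healthy when the risk exceeds the midpoint of the
    two relevant mean-vector coordinates."""
    return b2_dis > (mu_tst_typ_dis + mu_stp_hlt_dis) / 2
```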
D DETAILS OF THE CLINICIAN SURVEYS
Each participant was provided with a short presentation explaining (1) the ADNI dataset and the decision-making problem we consider, (2) what rewards and reward functions are, (3) what beliefs and belief simplices are, and (4) how policies can be represented in terms of reward functions as well as decision boundaries. Then, they were asked two multiple-choice questions, one that is strictly about representing histories, and one that is strictly about representing policies. Importantly, the survey was conducted blindly—i.e. they were given no context whatsoever as pertains to this paper and our proposed method. The question slides can be found in Figures 6 and 7. Essentially, each question first states a hypothesis and shows two/three representations relevant to the hypothesis stated. Then, the participant is asked which of the representations shown most readily expresses the hypothesis. Here are the full responses that we have received, which include some additional feedback:
• Clinician 1 Question 1: C > B > A Question 2: B Additional Feedback: The triangle was initially more confusing than not, but the first example (100% uncertainty) was helpful. It isn’t clear how the dots in the triangle are computed. Are these probabilities based on statistics? Diagram is always better than no diagram.
• Clinician 2 Question 1: C > B > A Question 2: B Additional Feedback: I always prefer pictures to tables, they are much easier to understand.
• Clinician 3 Question 1: C > B > A Question 2: B Additional Feedback: Of course the triangle is more concise and easier to look at. But how is the decision boundary obtained? Does the decision boundary always have to be parallel to one of the sides of the triangle?
• Clinician 4 Question 1: C > B > A Question 2: B Additional Feedback: [Regarding Question 1,] representation A and B do not show any interpretation of the diagnostic test results, whereas representation C does. I think doctors are most familiar with representation B, as it more closely resembles the EHR. Although representation C is visually pleasing, I’m not sure how the scale of the sides of the triangle should be interpreted. [Regarding Question 2,] again I like the triangle, but it’s hard to interpret what the scale of the sides of the triangle mean. I think option A is again what doctors are more familiar with.
• Clinician 5 Question 1: C > B > A Question 2: A
• Clinician 6 Question 1: C > B > A Question 2: A
• Clinician 7 Question 1: C Question 2: B Additional Feedback: I thought I’d share with you my thoughts on the medical aspects in your scenario first (although I realise you didn’t ask me for them). [...] The Cochrane review concludes that MRI provides low sensitivity and specificity and does not qualify it as an add on test for the early diagnosis due to dementia (Lombardi G et. al. Cochrane database 2020). The reason for MRI imaging is (according to the international guidelines) to exclude non-degenerative or surgical causes of cognitive impairment. [...] In your example the condition became apparent when the CDR-SB score at Month 24 hit 3.0 (supported by the sequence of measurements over time showing worsening CDR-SB score). I imagine the MRI was triggered by slight worsening in the CDR-SB score (to exclude an alternative diagnosis). To answer your specific questions: Q1. The representation C describes your (false) hypothesis that it was the MRI that made the diagnosis of MCI more likely/apparent the best—I really like the triangles. Q2. I really like the decision boundary.
• Clinician 8 Question 1: C > B > A Question 2: B
• Clinician 9 Question 1: C Question 2: B Additional Feedback: Q1. Representation C gives the clearest illustration of the diagnostic change following MRI. However, the representation of beliefs on a continuous spectrum around discrete cognitive states could be potentially confusing given that cognitive function is itself a continuum (and ‘MCI’, ‘Dementia’ and ‘NL’ are stations on a spectrum rather than discrete states). Also, while representation C is the clearest illustration, it is the representation that conveys the least actual data and it isn’t clear from the visualisation exactly what each shift in 2D space represents. Also, the triangulation in ‘C’ draws a direct connection between NL and Dementia, implying that this is a potential alternative route for disease progression, although this is more intuitively considered as a linear progression from NL to MCI to Dementia. Q2. For me, the decision boundary representation best expresses the concept of the likelihood of ordering an MRI with the same caveats described above. Option B does best convey the likelihood of ordering an MRI, but doesn’t convey the information value provided by that investigation. However, my understanding is that this is not what you are aiming to convey here.

Review Questions

1. What are the strengths and weaknesses of the proposed approach for understanding decision-making behavior?
2. How does the method handle partial observability and offline data?
3. What kind of data was used to evaluate the method's performance, and how did it perform?
4. Are there any concerns regarding the interpretability of the approach, and if so, how were they addressed?
5. Could you provide more detail about the counterfactual updates mentioned in Section 5?
6. Can you clarify the meaning of the visualization in Figure 3, specifically, the location of the points in the belief simplex?
7. How was the effectiveness of the approach evaluated, aside from the visualizations?
8. Were there any limitations or challenges encountered while developing and testing the method?

Review
Summary: This work proposes an approach for understanding and explaining decision-making behavior. The authors aim to make the method 1) transparent, 2) able to handle partial observability, and 3) work with offline data. To do this, they develop INTERPOLE, which uses Bayesian techniques to estimate decision dynamics as well as decision boundaries. Results on simulated and real-world domains show that their method explains the decisions in behavior data while still maintaining accuracy and focuses on explaining decision dynamics rather than the “true” dynamics of the world.
Strengths:
This work tackles an interesting problem. The proposed setting that considers interpretability + partial observability + offline data is important and reflective of many real-world problems so an approach that works in this setting is very meaningful.
The paper is well-written and clear. The authors do a good job clearly stating the motivation for the work and differences from prior work.
The authors consider both simulated and real-world data. The application to healthcare is useful and interesting. Overall, the evaluation helps support the claims, except for the interpretability component described below.
Weaknesses:
My biggest concern with the work is that it argues for interpretability but the best way to evaluate interpretability is through human evaluation. I appreciate that the authors include the visualizations and talk through what each point means, but it wasn’t 100% clear whether it was truly interpretable. The best way to judge this would be to compare non-interpretable and interpretable techniques and have humans blindly rate which made more sense.
Other comments:
In Section 3 under Learning Objective, is it reasonable to assume access to the state space?
In Section 5, you talk about counterfactual updates. Can you describe this in more detail? This wasn’t super clear to me.
Adding to the weakness point above, the visualization in Figure 3 does not fully make sense. I understand the points and the arrows, but it’s not very clear whether the point’s exact location in the belief simplex is meaningful (e.g., in the middle vs on the line between MCI and dementia).
It looks like the action matching metric is used to evaluate the ability to imitate and the belief/policy mismatch is used for evaluating explainability. But belief/policy mismatch cannot be used for ADNI, so for this domain, there’s no proper evaluation of explainability other than the visualizations. I think this is a key piece that needs to be evaluated and shown.
Recommendation: Overall, the work poses an interesting problem and solution, but the key motivation is to develop an interpretable approach, which I don’t think is sufficiently and correctly evaluated. Thus, I’m on the fence and would like to hear from the authors about this point.
=========================
Response after author rebuttal: The authors answered my concern about evaluating the interpretability of the approach. They evaluated the method with a few clinicians, and I'm glad to see that they preferred the authors' method.
Adding these results + clarifying the points I included will definitely make the paper stronger. I increase my score as a result and recommend acceptance. |
ICLR
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning
Abstract
Understanding human behavior from observed data is critical for transparency and accountability in decision-making. Consider real-world settings such as healthcare, in which modeling a decision-maker’s policy is challenging—with no access to underlying states, no knowledge of environment dynamics, and no allowance for live experimentation. We desire learning a data-driven representation of decision-making behavior that (1) inheres transparency by design, (2) accommodates partial observability, and (3) operates completely offline. To satisfy these key criteria, we propose a novel model-based Bayesian method for interpretable policy learning (“INTERPOLE”) that jointly estimates an agent’s (possibly biased) belief-update process together with their (possibly suboptimal) belief-action mapping. Through experiments on both simulated and real-world data for the problem of Alzheimer’s disease diagnosis, we illustrate the potential of our approach as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
1 INTRODUCTION
A principal challenge in modeling human behavior is in obtaining a transparent understanding of decision-making. In medical diagnosis, for instance, there is often significant regional and institutional variation in clinical practice [1], much of it the leading cause of rising healthcare costs [2]. The ability to quantify different decision processes is the first step towards a more systematic understanding of medical practice. Purely by observing demonstrated behavior, our principal objective is to answer the question: Under any given state of affairs, what actions are (more/less) likely to be taken, and why?
We address this challenge by setting our sights on three key criteria. First, we desire a method that is transparent by design. Specifically, a transparent description of behavior should locate the factors that contribute to individual decisions, in a language readily understood by domain experts [3, 4]. This will be clearer per our subsequent formalism, but we can already note some contrasts: Classical imitation learning—popularly by reduction to supervised classification—does not fit the bill, since black-box hidden states of RNNs are rarely amenable to meaningful interpretation. Similarly, apprenticeship learning algorithms—popularly through inverse reinforcement learning—do not satisfy either, since the high-level nature of reward mappings is not informative as to individual actions observed in the data. Rather than focusing purely on replicating actions (imitation learning) or on matching expert performance (apprenticeship learning), our chief pursuit lies in understanding demonstrated behavior.
Second, real-world environments such as healthcare are often partially observable in nature. This requires modeling the accumulation of information from entire sequences of past observations—an endeavor that is prima facie at odds with the goal of transparency. For instance, in a fully-observable setting, (model-free) behavioral cloning is arguably ‘transparent’ in providing simple mappings of states to actions; however, coping with partial observability using any form of recurrent function
approximation immediately lands in black-box territory. Likewise, while (model-based) methods have been developed for robotic control, their transparency crucially hinges on fully-observable kinematics.
Finally, in realistic settings it is often impossible to experiment online—especially in high-stakes environments with real products and patients. The vast majority of recent work in (inverse) reinforcement learning has focused on games, simulations, and gym environments where access to live interaction is unrestricted. By contrast, in healthcare settings the environment dynamics are neither known a priori, nor estimable by repeated exploration. We want a data-driven representation of behavior that is learnable in a completely offline fashion, yet does not rely on knowing/modeling any true dynamics.
Contributions Our contributions are three-fold. First, we propose a model for interpretable policy learning (“INTERPOLE”)—where sequential observations are aggregated through a decision agent’s decision dynamics (viz. subjective belief-update process), and sequential actions are determined by the agent’s decision boundaries (viz. probabilistic belief-action mapping). Second, we suggest a Bayesian learning algorithm for estimating the model, simultaneously satisfying the key criteria of transparency, partial observability, and offline learning. Third, through experiments on both simulated and real-world data for Alzheimer’s disease diagnosis, we illustrate the potential of our method as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
2 RELATED WORK
We seek to learn an interpretable parameterization of observed behavior to understand an agent’s actions. Fundamentally, this contrasts with imitation learning (which seeks to best replicate demonstrated policies) and apprenticeship learning (which seeks to match some notion of performance).
Imitation Learning In fully-observable settings, behavior cloning (BC) readily reduces the imitation problem to one of supervised classification [5, 11–13]; i.e. actions are simply regressed on observations. While this can be extended to account for partial observability by parameterizing policies via recurrent function approximation [14], it immediately gives up on ease of interpretability per the black-box nature of RNN hidden states. A plethora of model-free techniques have recently been developed, which account for information in the rollout dynamics of the environment during policy learning (see e.g. [15–20])—most famously, generative adversarial imitation learning (GAIL) based on state-distribution matching [6, 21]. However, such methods require repeated online rollouts of intermediate policies during training, and also face the same black-box problem as BC in partially observable settings. Clearly in model-free imitation, it is difficult to admit both transparency and partial observability.
Specifically with an eye on explainability, Info-GAIL [22, 23] proposes an orthogonal notion of “interpretability” that hinges on clustering similar demonstrations to explain variations in behavior. However, as with GAIL it suffers from the need for live interaction for learning. Finally, several model-based techniques for imitation learning (MB-IL) have been studied in the domain of robotics. [24] consider kinematic models designed for robot dynamics, while [25] and [7] consider (non-)linear autoregressive exogenous models. However, such approaches invariably operate in fully-observable settings, and are restricted models hand-crafted for specific robotic applications under consideration.
Apprenticeship Learning In subtle distinction to imitation learning, methods in apprenticeship learning assume the observed behavior is optimal with respect to some underlying reward function. Apprenticeship thus proceeds indirectly—often through inverse reinforcement learning (IRL) in order to infer a reward function, which (with appropriate optimization) generates learned behavior that matches the performance of the original—as measured by the rewards (see e.g. [8, 26–29]). These approaches have been variously extended to cope with partial observability (PO-IRL) [9, 30], to offline settings through off-policy evaluation [31–33], as well as to learned environment models [10].
However, a shortcoming of such methods is the requirement that the demonstrated policy in fact be optimal with respect to a true reward function that lies within an (often limited) hypothesis class under consideration—or is otherwise black-box in nature. Further, learning the true environment dynamics [10] corresponds to the requirement that policies be restricted to the class of functions that map from unbiased beliefs (cf. exact inference) into actions. Notably though, [34] considers both a form of suboptimality caused by time-inconsistent agents as well as biased beliefs. However, perhaps most importantly, due to the indirect, task-level nature of reward functions, inverse reinforcement learning is essentially opposed to our central goal of transparency—that is, in providing direct, action-level descriptions of behavior. In Section 5, we provide empirical evidence of this notion of interpretability.
Towards INTERPOLE In contrast, we avoid making any assumptions as to either unbiasedness of beliefs or optimality of policies. After all, the former requires estimating (externally) “true” environment dynamics, and the latter requires specifying (objectively) “true” classes of reward functions— neither of which are necessary per our goal of transparently describing individual actions. Instead, INTERPOLE simply seeks the most plausible explanation in terms of (internal) decision dynamics and (subjective) decision boundaries. To the best of our knowledge, our work is the first to tackle all three key criteria—while making no assumptions on the generative process behind behaviors. Table 1 contextualizes our work, showing typical incarnations of related approaches and their graphical models.
Before continuing, we note that the separation between the internal dynamics of an agent and the external dynamics of the environment has been considered in several other works, though often for entirely different problem formulations. Most notably, [35] tackles the same policy learning problem as we do in online, fully-observable environments but for agent’s with internal states that cannot be observed. They propose agent Markov models (AMMs) to model such environment-agent interactions. For problems other than policy learning, [36–38] also consider the subproblem of inferring an agent’s internal dynamics; however, none of these works satisfy all three key criteria simultaneously as we do.
3 INTERPRETABLE POLICY LEARNING
We first introduce INTERPOLE’s model of behavior, formalizing notions of decision dynamics and decision boundaries. In the next section, we suggest a Bayesian algorithm for model-learning from data.
Problem Setup Consider a partially-observable decision-making environment in discrete time. At each step $t$, the agent takes action $a_t \in A$ and observes outcome $z_t \in Z$.1 We have at our disposal an observed dataset of demonstrations $\mathcal{D} = \{(a_1^i, z_1^i, \ldots, a_{\tau_i}^i, z_{\tau_i}^i)\}_{i=1}^n$ by an agent, $\tau_i$ being the length of the $i$-th trajectory (we shall omit indices $i$ unless required). Denote by $h_t \doteq (a_1, z_1, \ldots, a_{t-1}, z_{t-1})$ the observed history at the beginning of step $t$, where $h_1 \doteq \varnothing$. Analogously, let $H_t \doteq (A \times Z)^{t-1}$ indicate the set of all possible histories at the start of step $t$, where $H_1 \doteq \{\varnothing\}$, and let $H \doteq \cup_{t=1}^{\infty} H_t$.

1While we take it here that $Z$ is finite, our method can easily be generalized to allow continuous observations.
A proper policy π is a mapping π ∈ ∆(A)H from observed histories to action distributions, where π(a|h) is the probability of taking action a given h. We assume that D is generated by an agent acting according to some behavioral policy πb. The problem we wish to tackle, then, is precisely how to obtain an interpretable parameterization of πb. We proceed in two steps: First, we describe a parsimonious belief-update process for accumulating histories—which we term decision dynamics. Then, we take beliefs to actions via a probabilistic mapping—which gives rise to decision boundaries.
Decision Dynamics We model belief-updates by way of an input-output hidden Markov model (IOHMM) identified by the tuple $(S, A, Z, T, O, b_1)$, with $S$ being the finite set of underlying states. $T \in \Delta(S)^{S\times A}$ denotes the transition function such that $T(s_{t+1}|s_t,a_t)$ gives the probability of transitioning into state $s_{t+1}$ upon action $a_t$ in state $s_t$, and $O \in \Delta(Z)^{A\times S}$ denotes the observation function such that $O(z_t|a_t,s_{t+1})$ gives the probability of observing $z_t$ after taking action $a_t$ and transitioning into state $s_{t+1}$. Finally, let beliefs $b_t \in \Delta(S)$ indicate the probability $b_t(s)$ that the environment exists in any state $s \in S$ at time $t$, and let $b_1$ give the initial state distribution. Note that—unlike in existing uses of the IOHMM formalism—these “probabilities” are for representing the thought process of the human, and may freely diverge from the actual mechanics of the world. To aggregate observed histories as beliefs, we identify $b_t(s)$ with $\mathbb{P}(s_t = s|h_t)$—an interpretation that leads to the recursive belief-update process (where in our problem, quantities $T, O, b_1$ are unknown):
$$b_{t+1}(s') \propto \sum_{s\in S} b_t(s)\, T(s'|s,a_t)\, O(z_t|a_t,s') \tag{1}$$
A key distinction bears emphasis: We do not require that this latter set of quantities correspond to (external) environment dynamics—and we do not obligate ourselves to recover any such notion of “true” parameters. To do so would imply the assumption that the agent in fact performs exactly unbiased inference on a perfectly known model of the environment, which is restrictive. It is also unnecessary, since our mandate is simply to model the (internal) mechanics of decision-making— which could well be generated from possibly biased beliefs or imperfectly known models of the world. In other words, our objective (see Equation 3) of simultaneously determining the most likely beliefs (cf. decision dynamics) and policies (cf. decision boundaries) is fundamentally more parsimonious.
Decision Boundaries Given decision dynamics, a policy is then equivalently a map π ∈ ∆(A)∆(S). Now, what is an interpretable parameterization? Consider the three-state example in Figure 1. We argue that a probabilistic parameterization that directly induces “decision regions” (cf. panel 1b) over the belief simplex is uniquely interpretable. For instance, strong beliefs that a patient has underlying mild cognitive impairment may map to the region where a specific follow-up test is promptly prescribed; this parameterization allows clearly locating such regions—as well as their boundaries.
Precisely, we parameterize policies in terms of |A|-many “mean” vectors that correspond to actions:
$$\pi(a|b) = \frac{e^{-\eta\|b-\mu_a\|^2}}{\sum_{a'\in A} e^{-\eta\|b-\mu_{a'}\|^2}}\,, \qquad \sum_{s\in S} \mu_a(s) = 1 \tag{2}$$
where $\eta \ge 0$ is the inverse temperature, $\|\cdot\|$ the $\ell_2$-norm, and $\mu_a \in \mathbb{R}^{|S|}$ the mean vector corresponding to action $a \in A$. Intuitively, mean vectors induce decision boundaries (and decision regions) over the belief space $\Delta(S)$: At any time, the action whose corresponding mean is closest to the current belief is most likely to be chosen. In particular, lines that are equidistant to the means of any pair of actions form decision boundaries between them. The inverse temperature controls the transitions between such boundaries: A larger $\eta$ captures more deterministic behavior (i.e. more “abrupt” transitions), whereas a smaller $\eta$ captures more stochastic behavior (i.e. “smoother” transitions). Note that the case of $\eta = 0$ recovers policies that are uniformly random, and $\eta \to \infty$ recovers argmax policies. A second distinction is due: The exponentiated form of Equation 2 should not be confused with typical Boltzmann [27] or MaxEnt [39] policies common in RL: These are indirect parameterizations via optimal/soft q-values, which themselves require approximate solutions to optimization problems; as we shall see in our experiments, the quality of learned policies suffers as a result. Further, using q-values would imply the assumption that the agent in fact behaves optimally w.r.t. an (objectively) “true” class of reward functions—e.g. linear—which is restrictive. It is also unnecessary, as our mandate is simply to capture their (subjective) tendencies toward different actions—which are generated from possibly suboptimal policies. In contrast, by directly partitioning the belief simplex into probabilistic “decision regions”, INTERPOLE’s mean-vector representation can be immediately explained and understood.
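To make the geometry concrete, here is a small sketch of how Equation 2 carves a three-state belief simplex into decision regions (cf. Figure 1); the mean vectors and temperature below are illustrative values, not learned ones.

```python
import numpy as np

mu = np.array([[0.7, 0.2, 0.1],    # illustrative mean vectors for three actions
               [0.1, 0.7, 0.2],
               [0.2, 0.1, 0.7]])
eta = 5.0

def pi(b):
    """Equation 2: pi(a|b) is a softmax over negative squared distances to the mean vectors."""
    logits = -eta * ((b[None, :] - mu) ** 2).sum(axis=1)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def decision_region(b):
    """The most likely action at belief b; beliefs equidistant to two means lie on a boundary."""
    return int(np.argmin(((b[None, :] - mu) ** 2).sum(axis=1)))

b = np.array([0.2, 0.5, 0.3])      # an example belief over three states
print(pi(b), decision_region(b))   # a higher eta makes pi(.|b) more deterministic
```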
Learning Objective In a nutshell, our objective is to identify the most likely parameterizations T , O, b1 for decision dynamics as well as η, {µa}a∈A for decision boundaries, given the observed data:
$$\text{Given: } \mathcal{D}, S, A, Z \qquad \text{Determine: } T, O, b_1, \eta, \{\mu_a\}_{a\in A} \tag{3}$$
Next, we propose a Bayesian algorithm that finds the maximum a posteriori (MAP) estimate of these quantities. Figure 2 illustrates the problem setup.
4 BAYESIAN INTERPRETABLE POLICY LEARNING
Denote with $\theta \doteq (T, O, b_1, \eta, \{\mu_a\}_{a\in A})$ the set of parameters to be determined, and let $\theta$ be drawn from some prior $\mathbb{P}(\theta)$. In addition, denote with $\bar{\mathcal{D}} = \{(s_1^i, \ldots, s_{\tau_i+1}^i)\}_{i=1}^n$ the set of underlying (unobserved) state trajectories, such that $\mathcal{D} \cup \bar{\mathcal{D}}$ gives the complete (fully-observed) dataset. Then the complete likelihood—of the unknown parameters $\theta$ with respect to $\mathcal{D} \cup \bar{\mathcal{D}}$—is given by the following:
$$\mathbb{P}(\mathcal{D},\bar{\mathcal{D}}|\theta) = \underbrace{\prod_{i=1}^n \prod_{t=1}^{\tau} \pi(a_t|b_t[T,O,b_1,h_{t-1}])}_{\text{action likelihoods}} \times \underbrace{\prod_{i=1}^n b_1(s_1) \prod_{t=1}^{\tau} T(s_{t+1}|s_t,a_t)\, O(z_t|a_t,s_{t+1})}_{\text{observation likelihoods}} \tag{4}$$
where π(·|·) is described by η and {µa}a∈A, and each bt[·] is a function of T , O, b1, and ht−1 (Equation 1). Since we do not have access to D̄, we propose an expectation-maximization (EM)-like algorithm for maximizing the posterior P(θ|D) = P(D|θ)P(θ)/ ∫ P(D|θ)dP(θ) over the parameters:
Algorithm 1 Bayesian INTERPOLE
1: Parameters: learning rate $w \in \mathbb{R}^+$
2: Input: dataset $\mathcal{D} = \{h^i_{\tau_i+1}\}_{i=1}^n$, prior $\mathbb{P}(\theta)$
3: Sample $\hat\theta_0$ from $\mathbb{P}(\theta)$
4: for $k = 1, 2, \ldots$ do
5:  Compute $\mathbb{P}(\bar{\mathcal{D}}|\mathcal{D},\hat\theta_{k-1})$ ▷ Appendix A.1
6:  Compute $\nabla_\theta Q(\theta;\hat\theta_{k-1})$ at $\hat\theta_{k-1}$ ▷ Appendix A.2
7:  $\hat\theta_k \leftarrow \hat\theta_{k-1} + w\,[\nabla_\theta Q(\theta;\hat\theta_{k-1}) + \nabla_\theta \log\mathbb{P}(\theta)]_{\theta=\hat\theta_{k-1}}$
8: while (6)
9: $\hat\theta \leftarrow \hat\theta_{k-1}$
10: Output: MAP estimate $\hat\theta \doteq (\hat T, \hat O, \hat b_1, \hat\eta, \{\hat\mu_a\}_{a\in A})$
Bayesian Learning Given an initial estimate θ̂0, we iteratively improve the estimate by performing the following steps at each iteration k:
• “E-step”: Compute the expected log-likelihood of the model parameters θ given the previous parameter estimate θ̂k−1, as follows:
$$Q(\theta;\hat\theta_{k-1}) \doteq \mathbb{E}_{\bar{\mathcal{D}}|\mathcal{D},\hat\theta_{k-1}}[\log\mathbb{P}(\mathcal{D},\bar{\mathcal{D}}|\theta)] = \sum_{\bar{\mathcal{D}}} \log\mathbb{P}(\mathcal{D},\bar{\mathcal{D}}|\theta)\, \mathbb{P}(\bar{\mathcal{D}}|\mathcal{D},\hat\theta_{k-1}) \tag{5}$$
where we compute the necessary marginalizations of joint distribution P(D̄|D, θ̂k−1) by way of a forward-backward procedure (detailed procedure given in Appendix A.1).
• “M-step”: Compute a new estimate θ̂k that improves the expected log-posterior—that is, such that:
$$Q(\hat\theta_k;\hat\theta_{k-1}) + \log\mathbb{P}(\hat\theta_k) > Q(\hat\theta_{k-1};\hat\theta_{k-1}) + \log\mathbb{P}(\hat\theta_{k-1}) \tag{6}$$
subject to appropriate non-negativity and normalization constraints on the parameters, which can be achieved via gradient-based methods (detailed procedure given in Appendix A.2). We stop when it is no longer possible to obtain a new estimate that further improves the expected log-posterior—that is, when the “M-step” can no longer be performed. Algorithm 1 summarizes this learning procedure.
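Abstracting away the closed-form gradients derived in Appendix A, the learning loop itself is short. The following is a minimal skeleton under stated assumptions: `theta` is a flat parameter vector (e.g. a NumPy array), and all four callables are supplied by the user, with `grad_log_post` returning $\nabla_\theta Q(\theta;\hat\theta) + \nabla_\theta\log\mathbb{P}(\theta)$ via the forward-backward marginals, `log_post` evaluating the expected log-posterior, and `project` enforcing the non-negativity and normalization constraints.

```python
def interpole_em(theta0, grad_log_post, log_post, project, w=1e-3, max_iter=10_000):
    """Skeleton of Algorithm 1: gradient ascent on the expected log-posterior, stopping
    once a step no longer improves it (the condition of Equation 6)."""
    theta = theta0
    for _ in range(max_iter):
        candidate = project(theta + w * grad_log_post(theta))
        if log_post(candidate, theta) <= log_post(theta, theta):
            break                    # the "M-step" can no longer be performed
        theta = candidate
    return theta
```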
Explaining by Imitating Recall our original mandate—to give the most plausible explanation for behavior. Two questions can be asked about our proposal to “explain by imitating”—to which we now have precise answers: One concerns explainability, and the other concerns directness of explanations.
First, what constitutes the “most plausible explanation” of behavior? Now, INTERPOLE identifies this as the most likely parameterization of that behavior using a state-based model for beliefs and policies—but otherwise with no further assumptions. In particular, we are only positing that modeling beliefs over states helps provide an interpretable description of how an agent reasons (which we do have ample evidence for2)—but we are not assuming that the environment itself takes the form of a state-based model (which is an entirely different claim). Mathematically, the complete likelihood (Equation 4) highlights the difference between decision dynamics (which help explain the agent’s behavior) and “true” environment dynamics (which we do not care about). The latter are independent of the agent, and learning them would have involved just the observation likelihoods alone. In contrast, by jointly estimating T,O, b1 with η, {µa}a∈A according to both the observation- and action-likelihoods, we are learning the decision dynamics—which in general need not coincide with the environment, but which offer the most plausible explanation of how the agent effectively reasons.
The second question is about directness: Given the popularity of the IRL paradigm, could we have simply used an (indirect) reward parameterization, instead of our (direct) mean-vector parameterization? As it turns out, in addition to the “immediate” interpretability of direct, action-level representations, it comes with an extra perk w.r.t. computability: While it is (mathematically) possible to formulate a similar learning problem swapping out $\mu$ for rewards, in practice it is (computationally) intractable to perform in our setting. The precise difficulty lies in differentiating through the quantities $\pi(a_t|b_t)$—which in turn depend on beliefs and dynamics—in the action-likelihoods (proofs located in Appendix B):

2In healthcare, diseases are often modeled in terms of states, and beliefs over disease states are eminently transparent factors that medical practitioners (i.e. domain experts) readily comprehend and reason about [40, 41].
Proposition 1 (Differentiability with q-Parameterizations) Consider softmax policies parameterized by q-values from a reward function, such that $\pi(a|b) = e^{q^*(b,a)} / \sum_{a'} e^{q^*(b,a')}$ in lieu of Equation 2. Then differentiating through $\log \pi(a_t|b_t)$ terms with respect to unknown parameters $\theta$ is intractable. In contrast, INTERPOLE avoids ever needing to solve any “forward” problem at all (and therefore does not require resorting to costly—and approximate—sampling-based workarounds) for learning:
Proposition 2 (Differentiability with µ-Parameterizations) Consider the mean-vector policy parameterization proposed in Equation 2. Differentiation through the log π(at|bt) terms with respect to the unknown parameters θ is easily and automatically performed using backpropagation through time.
5 ILLUSTRATIVE EXAMPLES
Three aspects of INTERPOLE deserve empirical demonstration, and we shall highlight them in turn:
• Interpretability: First, we illustrate the usefulness of our method in providing transparent explanations of behavior. This is our primary objective here—of explaining by imitating.
• Accuracy: Second, we demonstrate that the faithfulness of learned policies is not given up for transparency. This shows that accuracy and interpretability are not necessarily opposed.
• Subjectivity: Third, we show INTERPOLE correctly recovers underlying explanations for behavior—even if the agent is biased. This sets us apart from other state-based algorithms.
In order to do so, we show archetypical examples to exercise our framework, using both simulated and real-world experiments in the context of disease diagnosis. State-based reasoning is prevalent in research and practice: three states in progressive clinical dementia [42, 43], preterminal cancer screening [44, 45], or even—as recently shown—for cystic fibrosis [46] and pulmonary disease [47].
Decision Environments For our real-world setting, we consider the diagnostic patterns for 1,737 patients during sequences of 6-monthly visits in the Alzheimer’s Disease Neuroimaging Initiative [48] database (ADNI). The state space consists of normal functioning (“NL”), mild cognitive impairment (“MCI”), and dementia. For the action space, we consider the decision problem of ordering vs. not ordering an MRI test, which—while often informative of Alzheimer’s—is financially costly. MRI outcomes are categorized according to hippocampal volume: {“avg”, “above avg”, “below avg”, “not ordered”}; separately, the clinical dementia rating-sum of boxes (“CDR-SB”) result—which is always measured—is categorized as: {“normal”, “questionable impairment”, “mild/severe dementia”} [42]. In total, the observation space therefore consists of the 12 combinations of outcomes. We also consider a simulated setting to better validate performance. For this we employ a diagnostic environment (DIAG) in the form of an IOHMM with certain (true) parameters $T^{\mathrm{true}}, O^{\mathrm{true}}, b_1^{\mathrm{true}}$. Patients fall within diseased ($s_+$) and healthy ($s_-$) states, and vital-sign measurements available at every step are classified within positive ($z_+$) and negative ($z_-$) outcomes. For the action space, we consider the decision of continuing to monitor a patient ($a_=$), or stopping and declaring a final diagnosis—and if so, a diseased ($a_+$) or healthy ($a_-$) declaration. If we assume agents have perfect knowledge of the true environment, then this setup is similar to the classic “tiger problem” for optimal stopping [49]. Lastly, we also consider a (more realistic) variant of DIAG where the agent’s behavior is instead generated by biased beliefs due to incorrect knowledge $T, O, b_1 \neq T^{\mathrm{true}}, O^{\mathrm{true}}, b_1^{\mathrm{true}}$ of the environment (BIAS). Importantly, this generates a testable version of real-life settings where decision-makers’ (subjective) beliefs often fail to coincide with (objective) probabilities in the world.
Benchmark Algorithms Where appropriate, we compare INTERPOLE against the following benchmarks: imitation by behavioral cloning [5] using RNNs for partial observability (R-BC); Bayesian IRL on POMDPs [9] equipped with a learned environment model (PO-IRL); a fully-offline counterpart [10] of Bayesian IRL (Off. PO-IRL); and an adaptation of model-based imitation learning [7] to partially-observable settings, with a learned IOHMM as the model (PO-MB-IL). Algorithms requiring learned models for interaction are given IOHMMs estimated using conventional methods [50]. Further information on environments and benchmark implementations is found in Appendix C.
Interpretability First, we direct attention to the potential utility of INTERPOLE as an investigative device for auditing and quantifying individual decisions. Specifically, modeling the evolution of an agent’s beliefs provides a concrete basis for analyzing the corresponding sequence of actions taken:
• Explaining Trajectories. Figure 3 shows examples of such decision trajectories for four real ADNI patients. Each vertex of the belief simplex corresponds to one of the three stable diagnoses, and each point in the simplex corresponds to a unique belief (i.e. probability distribution). The closer the point is to a vertex (i.e. state), the higher the probability assigned to that state. For instance, if the belief is located exactly in the middle of the simplex (i.e. equidistant from all vertices), then all states are believed to be equally likely. If the belief is located exactly on a vertex (e.g. directly on top of MCI), then this corresponds to absolute certainty of MCI being the underlying state. Patients (a) and (b) are “typical” patients who fit well to the overall learned policy. The former is a normally-functioning patient believed to remain around the decision boundary in all visits except the first; appropriately, they are ordered an MRI during approximately half of their visits. The latter is believed to be deteriorating from MCI towards dementia, hence prescribed an MRI in all visits.
• Identifying Belated Diagnoses. In many diseases, early diagnosis is paramount [51]. INTERPOLE allows detecting patients who appear to have been diagnosed significantly later than they should have. Patient (c) was ordered an MRI in neither of their first two visits—despite the fact that the “typical” policy would have strongly recommended one. At a third visit, the MRI that was finally ordered led to near-certainty of cognitive impairment—but this could have been known 12 months earlier! In fact, among all ADNI patients in the database, 6.5% were subject to this apparent pattern of “belatedness”, where a late MRI is immediately followed by a jump to near-certain deterioration.
• Quantifying Value of Information. Patient (d) highlights how INTERPOLE can be used to quantify the value of a test in terms of its information gain. While the patient was ordered an MRI in all of their visits, it may appear (on the surface) that the third and final MRIs were redundant—since they had little apparent effect on beliefs. However, this is only true for the factual belief update that occurred according to the MRI outcome that was actually observed. Having access to an estimated model of how beliefs are updated in the form of decision dynamics, we can also compute counterfactual belief updates—that is, belief updates that could have occurred if the MRI outcome in question were to be different. In the particular case of patient (d), the tests were in fact highly informative, since (as it happened) the patient’s CDR-SB scores were suggestive of impairment, and (in the counterfactual) the doctor’s beliefs could have potentially leapt drastically towards MCI. On the other hand, among all MRIs ordered for ADNI patients, 19% may indeed have been unnecessary (i.e. triggering apparently insignificant belief-updates both factually as well as counterfactually).
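As a concrete illustration, the following is a minimal sketch of comparing factual and counterfactual belief updates once T̂ and Ô are estimated. The 3-state example, the randomly drawn parameters, and the notion of "gain" as an L1 belief shift are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    # Recursive update of Equation 1: b'(s') ∝ Σ_s b(s) T(s'|s,a) O(z|a,s')
    b_next = (b @ T[:, a, :]) * O[a, :, z]
    return b_next / b_next.sum()

def information_gain(b, a, T, O, n_outcomes):
    """L1 shift in beliefs for each possible outcome z of test a.

    The factual gain is the entry for the observed z; the remaining
    entries are the counterfactual gains had z been different.
    """
    return np.array([np.abs(belief_update(b, a, z, T, O) - b).sum()
                     for z in range(n_outcomes)])

# Hypothetical 3-state example (NL, MCI, Dementia), one "order MRI" action:
S, A, Z = 3, 1, 4
rng = np.random.default_rng(0)
T = rng.dirichlet(np.ones(S), size=(S, A))          # T[s, a, s']
O = rng.dirichlet(np.ones(Z), size=(A, S))          # O[a, s', z]
b = np.array([0.5, 0.4, 0.1])

gains = information_gain(b, a=0, T=T, O=O, n_outcomes=Z)
observed_z = 2
print("factual gain:", gains[observed_z])
print("max counterfactual gain:", gains.max())
# An MRI might be deemed uninformative if both the factual and the
# counterfactual gains fall below some significance threshold.
```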
Evaluating Interpretability through Clinician Surveys
To cement the argument for interpretability, we evaluated INTERPOLE by consulting nine clinicians from four different countries (United States, United Kingdom, the Netherlands, and China) for feedback. We focused on evaluating two aspects of interpretability regarding our method:
• Decision Dynamics: Whether the proposed representation of (possibly subjective) belief trajectories are preferable to raw action-observation trajectories—that is, whether decision dynamics are a transparent way of modeling how information is aggregated by decision-makers.
• Decision Boundaries: Whether the proposed representation of (possibly suboptimal) decision boundaries is a more transparent way of describing policies, compared with the representation of reward functions (which is the conventional approach in the policy learning literature).
For the first aspect, we presented to the participating clinicians the medical history of an example patient from ADNI represented in three ways: using only the most recent action-observation pair, using the complete action-observation trajectory, and using the belief trajectory as recovered by INTERPOLE. Result: All nine clinicians preferred the belief trajectories over action-observation trajectories.
For the second aspect, we showed them the policies learned from ADNI by both Off. PO-IRL and INTERPOLE, which parameterize policies in terms of reward functions and decision boundaries respectively. Result: Seven out of nine clinicians preferred the representation in terms of decision boundaries over that offered by reward functions. Further details can be found in Appendix D.
[Table row (INTERPOLE): 0.6 ± 0.1, 0.8 ± 0.2, 5.38 ± 1.14]
Accuracy Now, a reasonable question is whether such explainability comes at a cost: By learning an interpretable policy, do we sacrifice any accuracy? To be precise, we can ask the following questions:
• Is the belief-update process the same? For this, the appropriate metric is the discrepancy with respect to the sequence of beliefs—which we take to be $\sum_t D_{\mathrm{KL}}(b_t \,\|\, \hat{b}_t)$ (Belief Mismatch).
• Is the belief-action mapping the same? Our metric is the discrepancy with respect to the policy distribution itself—which we take to be $\sum_t D_{\mathrm{KL}}(\pi_b(\cdot|b_t) \,\|\, \hat\pi(\cdot|\hat{b}_t))$ (Policy Mismatch).
• Is the effective behavior the same? Here, the metrics are those measuring the discrepancy with respect to ground-truth actions observed (Action-Matching) for ADNI, and differences in stopping (Stopping Time Error) for DIAG.
Note that the action-matching and stopping time errors evaluate the quality of learned models in imitating per se, whereas belief mismatch and policy mismatch evaluate their quality in explaining.3
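A minimal sketch of the two explanation-quality metrics; the trajectories of true and estimated beliefs/policies are assumed to be given as arrays, and the small constant added for numerical stability is an illustrative choice.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Discrete KL divergence D_KL(p || q), with clipping for stability.
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def belief_mismatch(b_true, b_hat):
    # sum_t D_KL(b_t || b̂_t), with rows indexed by time.
    return sum(kl(bt, bh) for bt, bh in zip(b_true, b_hat))

def policy_mismatch(pi_true, pi_hat, b_true, b_hat):
    # sum_t D_KL(pi_b(.|b_t) || pî(.|b̂_t)); pi_* map a belief to an
    # action distribution (e.g. Equation 2 with estimated mean vectors).
    return sum(kl(pi_true(bt), pi_hat(bh))
               for bt, bh in zip(b_true, b_hat))
```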
The results are revealing, if not necessarily surprising. To begin, we observe for the ADNI setting in Table 2 that INTERPOLE performs first- or second-best across all three action-matching based metrics; where it comes second, it does so only by a small margin to R-BC (bearing in mind that R-BC is specifically optimized for nothing but action-matching). Similarly for the DIAG setting, we observe in Table 3 that INTERPOLE performs the best in terms of stopping-time error. In other words, it appears that little—if any—imitation accuracy is lost by using INTERPOLE as the model.
Perhaps more interestingly, we also see in Table 3 that the quality of internal explanations is superior—in terms of both belief mismatch and policy mismatch. In particular, even though the comparators PO-MB-IL, PO-IRL, and Off. PO-IRL are able to map decisions through beliefs, they inherit the conventional approach of attempting to estimate true environment dynamics, which is unnecessary—and possibly detrimental—if the goal is simply to find the most likely explanation of behavior. Notably, while the difference in imitation quality among the various benchmarks is not tremendous, with respect to explanation quality the gap is significant—where INTERPOLE has a clear advantage.
Subjectivity Most significantly, we now show that INTERPOLE correctly recovers the underlying explanations, even if—or perhaps especially if—the agent is driven by subjective reasoning (i.e. with biased beliefs). This aspect sets INTERPOLE firmly apart from the alternative state-based techniques.
3Belief/policy mismatch are not applicable to ADNI since we have no access to ground-truth beliefs/policies.
[Table row (INTERPOLE): 3.8 ± 0.34, 1.8 ± 0.6, 2.14 ± 0.51]
Consider the BIAS environment: Here the true environment dynamics are unchanged from DIAG, but no longer coincide with the agent’s decision dynamics. Specifically, we let the behavioral policy be generated using erroneous parameters $O(z_-|a_=, s_+) < O^{\mathrm{true}}(z_-|a_=, s_+)$; that is, the doctor now incorrectly believes the test to have a smaller false-negative rate than it does in reality—thus biasing their beliefs regarding patient states.
Now suppose we wish to recover the decision boundary from the demonstrations—that is, at what confidence threshold does a doctor commit to a healthy diagnosis? Figure 4 shows that both INTERPOLE and PO-MB-IL appear to recover the correct effective behavior: Starting from a neutral prior, doctors tend to stop and issue a “healthy” diagnosis if the first two observations return negative signals. Importantly, INTERPOLE also correctly locates the confidence target of ∼90%. On the other hand, PO-MB-IL—which first attempts to estimate the environment’s true parameters—ends up learning a policy on the basis of miscalibrated beliefs, thereby incorrectly explaining the same effective behavior with a lower confidence target of ∼70%. Finally, through similarly benchmarked metrics, Table 4 further confirms INTERPOLE’s advantage in providing the (correct) explanations.
6 DISCUSSION
Three points deserve brief comment in closing. First, while we gave prominence to several examples of how INTERPOLE may be used to audit and improve decision-making behavior, the potential applications are not limited to these use cases. Broadly, the chief proposition is that having quantitative descriptions of belief trajectories provides a concrete language that enables investigating observed actions, including outliers and variations in behavior: On that basis, different statistics and post-hoc analyses can be performed on top of INTERPOLE’s explanations (see Appendix C for examples).
Second, one may reasonably ask whether it is justified to assume access to the state space. For this, allow us to reiterate a subtle distinction. There may be some “ground-truth” external state space that is arbitrarily complex or even impossible to discover, but—as explained—we are not interested in modeling this. Then, there is the internal state space that an agent uses to reason about decisions, which is what we are interested in. In this sense, it is certainly reasonable to assume access to the state space, which is often very clear from the medical literature [42–47]. Since our goal is to obtain interpretable representations of decision-making, it is therefore reasonable to cater precisely to these accepted state spaces that doctors can most readily reason with. Describing behavior in terms of beliefs over these (already well-understood) states is one of the main contributors to the interpretability of our method.
Finally, it is crucial to keep in mind that INTERPOLE does not claim to identify the real intentions of an agent: humans are complex, and rationality is—of course—bounded. What it does do is provide an interpretable explanation of how an agent is effectively behaving, which—as we have seen for the diagnosis of ADNI patients—offers a yardstick by which to assess and compare trajectories and subgroups. In particular, INTERPOLE achieves this while adhering to our key criteria for healthcare settings, and without imposing assumptions of unbiasedness or optimality on behavioral policies.
ACKNOWLEDGMENTS
This work was supported by the US Office of Naval Research (ONR) and Alzheimer’s Research UK (ARUK). We thank the clinicians who participated in our survey, the reviewers for their valuable feedback, and the Alzheimer’s Disease Neuroimaging Initiative for providing the ADNI dataset.
A DETAILS OF THE ALGORITHM
A.1 FORWARD-BACKWARD PROCEDURE
We compute the necessary marginalizations of the joint distribution $\mathbb{P}(\bar{D} \mid D, \hat\theta)$ using the forward-backward algorithm. Letting $x_{t:t'} = \{x_t, x_{t+1}, \ldots, x_{t'}\}$ for any time-indexed quantity $x_t$, the forward messages are defined as $\alpha_t(s) = \mathbb{P}(s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1} \mid \hat\theta)$, which can be computed dynamically as

$$\begin{aligned} \alpha_{t+1}(s') &= \mathbb{P}(s_{t+1} = s', \mathbf{a}_{1:t}, \mathbf{z}_{1:t} \mid \hat\theta) \\ &= \sum_{s \in S} \mathbb{P}(s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1} \mid \hat\theta)\, \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta) \\ &= \sum_{s \in S} \alpha_t(s)\, \hat\pi(a_t \mid b_t)\, \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s') \\ &\propto \sum_{s \in S} \alpha_t(s)\, \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s') \end{aligned}$$

with initial case $\alpha_1(s) = \mathbb{P}(s_1 = s) = b_1(s)$. The backward messages are defined as $\beta_t(s) = \mathbb{P}(\mathbf{a}_{t:\tau}, \mathbf{z}_{t:\tau} \mid s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta)$, which can also be computed dynamically as

$$\begin{aligned} \beta_t(s) &= \mathbb{P}(\mathbf{a}_{t:\tau}, \mathbf{z}_{t:\tau} \mid s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta) \\ &= \sum_{s' \in S} \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta)\, \mathbb{P}(\mathbf{a}_{t+1:\tau}, \mathbf{z}_{t+1:\tau} \mid s_{t+1} = s', \mathbf{a}_{1:t}, \mathbf{z}_{1:t}, \hat\theta) \\ &= \sum_{s' \in S} \hat\pi(a_t \mid b_t)\, \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s')\, \beta_{t+1}(s') \\ &\propto \sum_{s' \in S} \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s')\, \beta_{t+1}(s') \end{aligned}$$

with initial case $\beta_{\tau+1}(s) = \mathbb{P}(\varnothing \mid s_{\tau+1} = s, \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau}, \hat\theta) = 1$.
Then, the marginal probability of being in state $s$ at time $t$ given the dataset $D$ and the estimate $\hat\theta$ can be computed as

$$\gamma_t(s) = \mathbb{P}(s_t = s \mid D, \hat\theta) = \mathbb{P}(s_t = s \mid \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau}, \hat\theta) \propto \mathbb{P}(s_t = s, \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau} \mid \hat\theta) = \alpha_t(s)\, \beta_t(s)$$

and similarly, the marginal probability of transitioning from state $s$ to state $s'$ at the end of time $t$ given the dataset $D$ and the estimate $\hat\theta$ can be computed as

$$\begin{aligned} \xi_t(s, s') &= \mathbb{P}(s_t = s, s_{t+1} = s' \mid D, \hat\theta) \propto \mathbb{P}(s_t = s, s_{t+1} = s', \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau} \mid \hat\theta) \\ &= \mathbb{P}(s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1} \mid \hat\theta)\, \mathbb{P}(s_{t+1} = s', a_t, z_t \mid s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta)\, \mathbb{P}(\mathbf{a}_{t+1:\tau}, \mathbf{z}_{t+1:\tau} \mid s_{t+1} = s', \mathbf{a}_{1:t}, \mathbf{z}_{1:t}, \hat\theta) \\ &= \alpha_t(s)\, \hat\pi(a_t \mid b_t)\, \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s')\, \beta_{t+1}(s') \\ &\propto \alpha_t(s)\, \hat{T}(s' \mid s, a_t)\, \hat{O}(z_t \mid a_t, s')\, \beta_{t+1}(s') \end{aligned}$$
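A minimal NumPy sketch of this forward-backward pass for a single trajectory; since the π̂(a_t|b_t) factors cancel under normalization (hence the proportionality signs above), they are omitted. The array layout T[s, a, s'], O[a, s', z] is an illustrative convention, not the authors' code.

```python
import numpy as np

def forward_backward(actions, observs, T, O, b1):
    """Smoothed marginals gamma_t(s) and xi_t(s, s') for one trajectory.

    actions, observs: integer arrays of length tau
    T: T[s, a, s'], O: O[a, s', z], b1: initial state distribution
    """
    tau, S = len(actions), len(b1)
    alpha = np.zeros((tau + 1, S))
    beta = np.ones((tau + 1, S))
    alpha[0] = b1
    for t in range(tau):                       # forward pass
        a, z = actions[t], observs[t]
        alpha[t + 1] = (alpha[t] @ T[:, a, :]) * O[a, :, z]
        alpha[t + 1] /= alpha[t + 1].sum()     # per-step rescaling
    for t in range(tau - 1, -1, -1):           # backward pass
        a, z = actions[t], observs[t]
        beta[t] = T[:, a, :] @ (O[a, :, z] * beta[t + 1])
        beta[t] /= beta[t].sum()
    gamma = alpha * beta                       # gamma_t ∝ alpha_t * beta_t
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((tau, S, S))
    for t in range(tau):
        a, z = actions[t], observs[t]
        xi[t] = alpha[t][:, None] * T[:, a, :] * O[a, :, z] * beta[t + 1]
        xi[t] /= xi[t].sum()
    return gamma, xi
```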
A.2 GRADIENT-ASCENT PROCEDURE
Taking the gradient of the expected log-likelihood $Q(\theta; \hat\theta)$ in (5) with respect to the unknown parameters $\theta = (T, O, b_1, \eta, \{\mu_a\}_{a \in A})$ first requires computing the Jacobian matrix $\nabla_{b_t} b_{t'}$ for $1 \le t < t' \le \tau$, where $(\nabla_b b')_{ij} = \partial b'(i) / \partial b(j)$ for $i, j \in S$. This can be achieved dynamically as $\nabla_{b_t} b_{t'} = \nabla_{b_{t+1}} b_{t'} \nabla_{b_t} b_{t+1}$ with initial case $\nabla_{b_{t'}} b_{t'} = I$, where

$$\begin{aligned} (\nabla_{b_t} b_{t+1})_{ij} = \frac{\partial b_{t+1}(i)}{\partial b_t(j)} &= \frac{\partial}{\partial b_t(j)} \left[ \frac{\sum_{x \in S} b_t(x)\, T(i|x, a_t)\, O(z_t|a_t, i)}{\sum_{x \in S} \sum_{x' \in S} b_t(x)\, T(x'|x, a_t)\, O(z_t|a_t, x')} \right] \\ &= \frac{T(i|j, a_t)\, O(z_t|a_t, i)}{\sum_{x \in S} \sum_{x' \in S} b_t(x)\, T(x'|x, a_t)\, O(z_t|a_t, x')} - \frac{\big( \sum_{x \in S} b_t(x)\, T(i|x, a_t)\, O(z_t|a_t, i) \big) \sum_{x' \in S} T(x'|j, a_t)\, O(z_t|a_t, x')}{\big( \sum_{x \in S} \sum_{x' \in S} b_t(x)\, T(x'|x, a_t)\, O(z_t|a_t, x') \big)^2}\,. \end{aligned}$$
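A numerical sketch of this Jacobian recursion, using the same illustrative array layout T[s, a, s'], O[a, s', z] as before; this is a sanity-check implementation under those conventions, not the authors' code.

```python
import numpy as np

def belief_jacobian(b, a, z, T, O):
    """(∇_b b')_{ij} = ∂b'(i)/∂b(j) for one step of the belief update."""
    num = (b @ T[:, a, :]) * O[a, :, z]          # unnormalized b'
    Z = num.sum()
    b_next = num / Z
    # d num(i)/d b(j) = T(j, a, i) * O(a, i, z); d Z/d b(j) sums that over i
    dnum = T[:, a, :] * O[a, :, z][None, :]      # dnum[j, i]
    dZ = dnum.sum(axis=1)                        # dZ[j]
    J = dnum.T / Z - np.outer(b_next, dZ) / Z    # J[i, j]
    return b_next, J

def chain_jacobians(b1, actions, observs, T, O):
    """∇_{b_1} b_t for every t, via ∇_{b_t} b_{t'} = ∇_{b_{t+1}} b_{t'} ∇_{b_t} b_{t+1}."""
    b, J = b1, np.eye(len(b1))
    jacobians = [J]
    for a, z in zip(actions, observs):
        b, J_step = belief_jacobian(b, a, z, T, O)
        J = J_step @ J
        jacobians.append(J)
    return jacobians
```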
A.2.1 PARTIAL DERIVATIVES
The derivative of $Q(\theta; \hat\theta)$ with respect to $T(s'|s, a)$ is

$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial T(s'|s, a)} &= \frac{\partial}{\partial T(s'|s, a)} \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\} \sum_{x \in S} \sum_{x' \in S} \xi_t(x, x') \log T(x'|x, a) + \sum_{t=2}^{\tau} \log \pi(a_t|b_t) \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\} \frac{\xi_t(s, s')}{T(s'|s, a)} + \sum_{t=2}^{\tau} \frac{\partial \log \pi(a_t|b_t)}{\partial T(s'|s, a)} \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\} \frac{\xi_t(s, s')}{T(s'|s, a)} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t} \log \pi(a_t|b_t)\, \nabla_{b_{t'+1}} b_t\, \nabla_{T(s'|s,a)} b_{t'+1} \right], \end{aligned}$$
where

$$\begin{aligned} (\nabla_{b_t} \log \pi(a_t|b_t))_{1j} = \frac{\partial \log \pi(a_t|b_t)}{\partial b_t(j)} &= \frac{\partial}{\partial b_t(j)} \left( -\eta \| b_t - \mu_{a_t} \|^2 - \log \sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2} \right) \\ &= -2\eta \big( b_t(j) - \mu_{a_t}(j) \big) + 2\eta \sum_{a \in A} \frac{e^{-\eta \| b_t - \mu_a \|^2}}{\sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2}} \big( b_t(j) - \mu_a(j) \big) \\ &= -2\eta \big( b_t(j) - \mu_{a_t}(j) \big) + 2\eta \sum_{a \in A} \pi(a|b_t) \big( b_t(j) - \mu_a(j) \big) \end{aligned}$$
and

$$\begin{aligned} (\nabla_{T(s'|s,a)} b_{t'+1})_{i1} = \frac{\partial b_{t'+1}(i)}{\partial T(s'|s, a)} &= \frac{\partial}{\partial T(s'|s, a)} \left( \frac{\sum_{x \in S} b_{t'}(x)\, T(i|x, a_{t'})\, O(z_{t'}|a_{t'}, i)}{\sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a_{t'})\, O(z_{t'}|a_{t'}, x')} \right) \\ &= \mathbb{I}\{a_{t'} = a\} \left( \frac{\mathbb{I}\{i = s'\}\, b_{t'}(s)\, O(z_{t'}|a, s')}{\sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a)\, O(z_{t'}|a, x')} - \frac{\big( \sum_{x \in S} b_{t'}(x)\, T(i|x, a)\, O(z_{t'}|a, i) \big)\, b_{t'}(s)\, O(z_{t'}|a, s')}{\big( \sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a)\, O(z_{t'}|a, x') \big)^2} \right). \end{aligned}$$
The derivative of $Q(\theta; \hat\theta)$ with respect to $O(z|a, s')$ is

$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial O(z|a, s')} &= \frac{\partial}{\partial O(z|a, s')} \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\} \sum_{x' \in S} \gamma_{t+1}(x') \log O(z|a, x') + \sum_{t=2}^{\tau} \log \pi(a_t|b_t) \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\} \frac{\gamma_{t+1}(s')}{O(z|a, s')} + \sum_{t=2}^{\tau} \frac{\partial \log \pi(a_t|b_t)}{\partial O(z|a, s')} \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\} \frac{\gamma_{t+1}(s')}{O(z|a, s')} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t} \log \pi(a_t|b_t)\, \nabla_{b_{t'+1}} b_t\, \nabla_{O(z|a,s')} b_{t'+1} \right], \end{aligned}$$
where

$$\begin{aligned} (\nabla_{O(z|a,s')} b_{t'+1})_{i1} = \frac{\partial b_{t'+1}(i)}{\partial O(z|a, s')} &= \frac{\partial}{\partial O(z|a, s')} \left( \frac{\sum_{x \in S} b_{t'}(x)\, T(i|x, a_{t'})\, O(z_{t'}|a_{t'}, i)}{\sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a_{t'})\, O(z_{t'}|a_{t'}, x')} \right) \\ &= \mathbb{I}\{a_{t'} = a, z_{t'} = z\} \left( \frac{\mathbb{I}\{i = s'\} \sum_{x \in S} b_{t'}(x)\, T(s'|x, a)}{\sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a)\, O(z|a, x')} - \frac{\big( \sum_{x \in S} b_{t'}(x)\, T(i|x, a)\, O(z|a, i) \big) \sum_{x \in S} b_{t'}(x)\, T(s'|x, a)}{\big( \sum_{x \in S} \sum_{x' \in S} b_{t'}(x)\, T(x'|x, a)\, O(z|a, x') \big)^2} \right). \end{aligned}$$
The derivative of $Q(\theta; \hat\theta)$ with respect to $b_1(s)$ is

$$\frac{\partial Q(\theta; \hat\theta)}{\partial b_1(s)} = \frac{\partial}{\partial b_1(s)} \sum_{i=1}^{n} \left[ \sum_{x \in S} \gamma_1(x) \log b_1(x) + \sum_{t=1}^{\tau} \log \pi(a_t|b_t) \right] = \sum_{i=1}^{n} \left[ \frac{\gamma_1(s)}{b_1(s)} + \sum_{t=1}^{\tau} \nabla_{b_t} \log \pi(a_t|b_t)\, \nabla_{b_1(s)} b_t \right],$$

where $(\nabla_{b_1(s)} b_t)_{i1} = (\nabla_{b_1} b_t)_{is}$.
The derivative of $Q(\theta; \hat\theta)$ with respect to $\eta$ is

$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial \eta} &= \frac{\partial}{\partial \eta} \sum_{i=1}^{n} \sum_{t=1}^{\tau} \log \pi(a_t|b_t) = \sum_{i=1}^{n} \sum_{t=1}^{\tau} \frac{\partial}{\partial \eta} \left( -\eta \| b_t - \mu_{a_t} \|^2 - \log \sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2} \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( -\| b_t - \mu_{a_t} \|^2 + \sum_{a \in A} \frac{e^{-\eta \| b_t - \mu_a \|^2}}{\sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2}} \| b_t - \mu_a \|^2 \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( -\| b_t - \mu_{a_t} \|^2 + \sum_{a \in A} \pi(a|b_t) \| b_t - \mu_a \|^2 \right). \end{aligned}$$
Finally, the derivative of $Q(\theta; \hat\theta)$ with respect to $\mu_a(s)$ is

$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial \mu_a(s)} &= \frac{\partial}{\partial \mu_a(s)} \sum_{i=1}^{n} \sum_{t=1}^{\tau} \log \pi(a_t|b_t) = \sum_{i=1}^{n} \sum_{t=1}^{\tau} \frac{\partial}{\partial \mu_a(s)} \left( -\eta \| b_t - \mu_{a_t} \|^2 - \log \sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2} \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( 2\eta\, \mathbb{I}\{a_t = a\} \big( b_t(s) - \mu_a(s) \big) - 2\eta \frac{e^{-\eta \| b_t - \mu_a \|^2}}{\sum_{a' \in A} e^{-\eta \| b_t - \mu_{a'} \|^2}} \big( b_t(s) - \mu_a(s) \big) \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( 2\eta\, \mathbb{I}\{a_t = a\} \big( b_t(s) - \mu_a(s) \big) - 2\eta\, \pi(a|b_t) \big( b_t(s) - \mu_a(s) \big) \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} 2\eta \big( \mathbb{I}\{a_t = a\} - \pi(a|b_t) \big) \big( b_t(s) - \mu_a(s) \big). \end{aligned}$$
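These closed forms translate directly into code. Below is a minimal sketch of the per-trajectory gradients of Σ_t log π(a_t|b_t) with respect to η and the mean vectors, assuming the belief trajectory has already been computed; the stability shift inside the softmax is an illustrative choice.

```python
import numpy as np

def policy_probs(b, mu, eta):
    # Equation 2: softmin over squared distances to the mean vectors.
    d2 = ((b[None, :] - mu) ** 2).sum(axis=1)    # d2[a] = ||b - mu_a||^2
    w = np.exp(-eta * (d2 - d2.min()))           # shift for numerical stability
    return w / w.sum(), d2

def grad_logpi_eta_mu(beliefs, actions, mu, eta):
    """Sums over t of d log pi(a_t|b_t) w.r.t. eta and mu (per Appendix A.2)."""
    g_eta, g_mu = 0.0, np.zeros_like(mu)
    for b, a_t in zip(beliefs, actions):
        pi, d2 = policy_probs(b, mu, eta)
        g_eta += -d2[a_t] + pi @ d2
        for a in range(len(mu)):
            ind = 1.0 if a == a_t else 0.0
            g_mu[a] += 2 * eta * (ind - pi[a]) * (b - mu[a])
    return g_eta, g_mu
```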
B PROOFS OF PROPOSITIONS
B.1 PROOF OF PROPOSITION 1
First, denote by $q^*_R \in \mathbb{R}^{\Delta(S) \times A}$ the optimal (belief-state) q-value function with respect to the underlying (state-space) reward function $R \in \mathbb{R}^{S \times A}$, and denote by $v^*_R \in \mathbb{R}^{\Delta(S)}$ the corresponding optimal value function $v^*_R(b) = \operatorname{softmax}_{a' \in A} q^*_R(b, a')$. Now, fix some component $i$ of the parameters $\theta$; we wish to compute the derivative of $\log \pi(a|b)$ with respect to $\theta_i$:
$$\begin{aligned} \frac{\partial}{\partial \theta_i} \log \pi(a|b) &= \frac{\partial}{\partial \theta_i} \big( q^*_R(b, a) - v^*_R(b) \big) = \frac{\partial}{\partial \theta_i} \left( q^*_R(b, a) - \log \sum_{a' \in A} e^{q^*_R(b, a')} \right) \\ &= \frac{\partial}{\partial \theta_i} q^*_R(b, a) - \sum_{a' \in A} \left( \frac{e^{q^*_R(b, a')}}{\sum_{a'' \in A} e^{q^*_R(b, a'')}} \cdot \frac{\partial}{\partial \theta_i} q^*_R(b, a') \right) \\ &= \frac{\partial}{\partial \theta_i} q^*_R(b, a) - \sum_{a' \in A} \pi(a'|b) \frac{\partial}{\partial \theta_i} q^*_R(b, a') \\ &= \frac{\partial}{\partial \theta_i} q^*_R(b, a) - \mathbb{E}_{a' \sim \pi(\cdot|b)} \left[ \frac{\partial}{\partial \theta_i} q^*_R(b, a') \right] \end{aligned}$$

where we make explicit here the dependence on $R$, but note that it is itself a parameter; that is, $R = \theta_j$ for some $j$. We see that this in turn requires computing the partial derivative $\partial q^*_R(b, a) / \partial \theta_i$. Let $\gamma$ be some appropriate discount rate, and denote by $\rho_R \in \mathbb{R}^{\Delta(S) \times A}$ the effective (belief-state) reward $\rho_R(b, a) \doteq \sum_{s \in S} b(s) R(s, a)$ corresponding to $R$. Further, let
$$\mathbb{P}(b'|b, a) = \sum_{z \in Z} \mathbb{P}(z|b, a)\, \mathbb{P}(b'|b, a, z) = \sum_{z \in Z} \left( \sum_{s \in S} \sum_{s' \in S} b(s)\, T(s'|s, a)\, O(z|a, s') \right) \delta\!\left( b' - \frac{\sum_{s \in S} b(s)\, T(\cdot|s, a)\, O(z|a, \cdot)}{\sum_{s \in S} \sum_{s' \in S} b(s)\, T(s'|s, a)\, O(z|a, s')} \right)$$

denote the (belief-state) transition probabilities induced by $T$ and $O$, where $\delta$ is the Dirac delta function such that $\delta(b')$ integrates to one if and only if $b' = 0$ is included in the integration region. Then the partial $\partial q^*_R(b, a) / \partial \theta_i$ is given as follows:
$$\begin{aligned} \frac{\partial}{\partial \theta_i} q^*_R(b, a) &= \frac{\partial}{\partial \theta_i} \left( \rho_R(b, a) + \gamma \int_{b' \in \Delta(S)} \mathbb{P}(b'|b, a)\, v^*_R(b')\, db' \right) \\ &= \underbrace{\frac{\partial}{\partial \theta_i} \rho_R(b, a) + \gamma \int_{b' \in \Delta(S)} v^*_R(b') \frac{\partial}{\partial \theta_i} \mathbb{P}(b'|b, a)\, db'}_{\rho_{R,i}(b, a)} + \gamma \int_{b' \in \Delta(S)} \mathbb{P}(b'|b, a)\, \mathbb{E}_{a' \sim \pi(\cdot|b')} \left[ \frac{\partial}{\partial \theta_i} q^*_R(b', a') \right] db' \end{aligned}$$

from which we observe that $\partial q^*_R(b, a) / \partial \theta_i$ is a fixed point of a certain Bellman-like operator. Specifically, fix any function $f \in \mathbb{R}^{\Delta(S) \times A}$; then $\partial q^*_R(b, a) / \partial \theta_i$ is the fixed point of the operator $\mathcal{T}^\pi_{R,i} : \mathbb{R}^{\Delta(S) \times A} \to \mathbb{R}^{\Delta(S) \times A}$ defined as follows:

$$(\mathcal{T}^\pi_{R,i} f)(b, a) = \rho_{R,i}(b, a) + \gamma \int_{b' \in \Delta(S)} \mathbb{P}(b'|b, a) \sum_{a'} \pi(a'|b')\, f(b', a')\, db'$$

which takes the form of a “generalized” Bellman operator on q-functions for POMDPs, where for brevity here we have written $\rho_{R,i}(b, a)$ to denote the expression $\frac{\partial}{\partial \theta_i} \rho_R(b, a) + \gamma \int_{b' \in \Delta(S)} v^*_R(b') \frac{\partial}{\partial \theta_i} \mathbb{P}(b'|b, a)\, db'$. Mathematically, this means that a recursive procedure can in theory be defined—cf. “∇q-iteration”, analogous to q-iteration; see e.g. [52]—that may converge on the gradient under appropriate conditions. Computationally, however, this also means that taking a single gradient is at least as hard as solving POMDPs in general.
Further, note that while typical POMDP solvers operate by taking advantage of the convexity property of $\rho_R(b, a)$—see e.g. [53]—here there is no such property to make use of: In general, it is not the case that $\rho_{R,i}(b, a)$ is convex. To see this, consider the following counterexample: Let $S \doteq \{s_-, s_+\}$, $A \doteq \{a_=\}$, $Z \doteq \{z_-, z_+\}$, $T(s_-|s_-, a_=) = T(s_+|s_+, a_=) = p = 1$, $O(z_-|a_=, s_-) = O(z_+|a_=, s_+) = 1/4$, $b_1(s_+) = 1/2$, $R(s_-, a_=) = 0$, $R(s_+, a_=) = 1/2$, and $\gamma = 1/2$. For simplicity, we will simply write $b$ instead of $b(s_+)$. Note that:

$$\begin{aligned} q^*_R(b, a_=) &= b \sum_{t=0}^{\infty} \gamma^t R(s_+, a_=) + (1-b) \sum_{t=0}^{\infty} \gamma^t R(s_-, a_=) = b \\ v^*_R(b) &= \log \sum_{a \in \{a_=\}} e^{q^*_R(b, a)} = \log e^{q^*_R(b, a_=)} = q^*_R(b, a_=) = b \\ \mathbb{P}(z_+|b, a_=) &= \tfrac{1}{4}\big( bp + (1-b)(1-p) \big) + \tfrac{3}{4}\big( b(1-p) + (1-b)p \big) = \tfrac{1}{4} b + \tfrac{3}{4} (1-b) \\ \mathbb{P}(z_-|b, a_=) &= \tfrac{1}{4}\big( b(1-p) + (1-b)p \big) + \tfrac{3}{4}\big( bp + (1-b)(1-p) \big) = \tfrac{1}{4} (1-b) + \tfrac{3}{4} b \\ b'|b, a_=, z_+ &= \frac{\mathbb{P}(s' = s_+, z_+|b, a_=)}{\mathbb{P}(z_+|b, a_=)} = \frac{\tfrac{1}{4}\big( bp + (1-b)(1-p) \big)}{\mathbb{P}(z_+|b, a_=)} = \frac{\tfrac{1}{4} b}{\tfrac{1}{4} b + \tfrac{3}{4} (1-b)} \\ b'|b, a_=, z_- &= \frac{\mathbb{P}(s' = s_+, z_-|b, a_=)}{\mathbb{P}(z_-|b, a_=)} = \frac{\tfrac{3}{4}\big( bp + (1-b)(1-p) \big)}{\mathbb{P}(z_-|b, a_=)} = \frac{\tfrac{3}{4} b}{\tfrac{1}{4} (1-b) + \tfrac{3}{4} b} \\ \mathbb{P}(b'|b, a_=) &= \begin{cases} \mathbb{P}(z_+|b, a_=) & \text{if } b' = b'|b, a_=, z_+ \\ \mathbb{P}(z_-|b, a_=) & \text{if } b' = b'|b, a_=, z_- \\ 0 & \text{otherwise} \end{cases} \end{aligned}$$
Now, let the elements of $\theta$ be ordered such that $p$ is the $i$-th element, and consider $\rho_{R,i}(b, a_=)$ evaluated at $p = 1$:

$$\begin{aligned} \rho_{R,i}(b, a_=) &\doteq \frac{\partial}{\partial p} \rho_R(b, a_=) + \gamma \int_{b' \in \Delta(S)} v^*_R(b') \frac{\partial}{\partial p} \mathbb{P}(b'|b, a_=)\, db' \\ &= \frac{1}{2} \left( v^*_R(b'|b, a_=, z_+) \frac{\partial}{\partial p} \mathbb{P}(z_+|b, a_=) + v^*_R(b'|b, a_=, z_-) \frac{\partial}{\partial p} \mathbb{P}(z_-|b, a_=) \right) \\ &= \frac{1}{2} \left( \frac{\tfrac{1}{4} b}{\tfrac{1}{4} b + \tfrac{3}{4} (1-b)} \left( \tfrac{1}{4} b - \tfrac{1}{4} (1-b) - \tfrac{3}{4} b + \tfrac{3}{4} (1-b) \right) + \frac{\tfrac{3}{4} b}{\tfrac{1}{4} (1-b) + \tfrac{3}{4} b} \left( -\tfrac{1}{4} b + \tfrac{1}{4} (1-b) + \tfrac{3}{4} b - \tfrac{3}{4} (1-b) \right) \right) \end{aligned}$$

Clearly $\rho_{R,i}(b, a_=)$ cannot be convex, since $\rho_{R,i}(1/2, a_=) = 0$ and $\rho_{R,i}(1, a_=) = 0$ but $\rho_{R,i}(3/4, a_=) > 0$.
B.2 PROOF OF PROPOSITION 2
In contrast, unlike the indirect q-value parameterization above (which by itself requires approximate solutions to optimization problems), the mean-vector parameterization of INTERPOLE maps beliefs
directly to distributions over actions. Now, the derivatives of log π(a|b) are given as closed-form expressions in Appendices A.1 and A.2.
In particular, note that each bt is computed through a feed-forward structure, and therefore can easily be differentiated with respect to the unknown parameters θ through backpropagation through time: Each time step leading up to an action corresponds to a “hidden layer” in a neural network, and the initial belief corresponds to the “features” that are fed into the network; the transition and observation functions correspond to the weights between layers, the beliefs at each time step correspond to the activations between layers, the actions themselves correspond to class labels, and the action likelihood corresponds to the loss function (see Appendices A.1 and A.2).
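A self-contained sketch of this chain rule: the per-step belief Jacobians play the role of layer-to-layer derivatives, so the gradient of an action log-likelihood with respect to the initial belief is a product of Jacobians, exactly as in backpropagation through time. Shapes follow the illustrative T[s, a, s'], O[a, s', z] convention used earlier.

```python
import numpy as np

def belief_step(b, a, z, T, O):
    """One belief update (Equation 1) together with its Jacobian db'/db."""
    num = (b @ T[:, a, :]) * O[a, :, z]
    Z = num.sum()
    b_next = num / Z
    dnum = T[:, a, :] * O[a, :, z][None, :]        # dnum[j, i] = d num_i / d b_j
    J = dnum.T / Z - np.outer(b_next, dnum.sum(axis=1)) / Z
    return b_next, J

def grad_logpi_b(b, a_t, mu, eta):
    """Closed-form gradient of log pi(a_t|b) w.r.t. b (Appendix A.2)."""
    d2 = ((b[None, :] - mu) ** 2).sum(axis=1)
    pi = np.exp(-eta * (d2 - d2.min()))
    pi /= pi.sum()
    return -2 * eta * (b - mu[a_t]) + 2 * eta * (pi[:, None] * (b - mu)).sum(axis=0)

def grad_logpi_wrt_b1(b1, actions, observs, t, a_t, T, O, mu, eta):
    """d log pi(a_t|b_t) / d b_1, chaining Jacobians as in BPTT (t is 1-indexed)."""
    b, J = b1, np.eye(len(b1))                     # J accumulates db_t/db_1
    for a, z in zip(actions[:t - 1], observs[:t - 1]):
        b, J_step = belief_step(b, a, z, T, O)
        J = J_step @ J
    return grad_logpi_b(b, a_t, mu, eta) @ J       # row vector times Jacobian
```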
Finally, note that computing all of the forward-backward messages $\alpha_t$ and $\beta_t$ in Appendix A.1 has complexity $O(n \tau S^2)$, computing all of the Jacobian matrices $\nabla_{b_t} b_{t'}$ in Appendix A.2 has complexity $O(n \tau^2 S^3)$, and computing all of the partial derivatives given in Appendix A.2 has complexity at most $O(n \tau^2 S^2 A Z)$. Hence, fully differentiating the expected log-likelihood $Q(\theta; \hat\theta)$ with respect to the unknown parameters $\theta$ has an overall (polynomial) complexity of $O(n \tau^2 S^2 \max\{S, AZ\})$.
C EXPERIMENT PARTICULARS
C.1 DETAILS OF DECISION ENVIRONMENTS
ADNI We have filtered out visits without a CDR-SB measurement, which is almost always taken, and visits that do not occur immediately after the six-month period following the previous visit but instead occur after 12 months or later. This filtering leaves 1,626 patients with typically three consecutive visits each. For MRI outcomes, average is considered to be within half a standard deviation of the population mean. Since there are only two actions in this scenario, we have set η = 1 and relied on the distance between the two means to adjust for the stochasticity of the estimated policy—closer means being somewhat equivalent to a smaller η.
DIAG We set $T^{\mathrm{true}}(s_-|s_-, \cdot) = T^{\mathrm{true}}(s_+|s_+, \cdot) = 1$, meaning patients do not heal or contract the disease as the diagnosis progresses, $O^{\mathrm{true}}(z_-|a_=, s_+) = O^{\mathrm{true}}(z_+|a_=, s_-) = 0.4$, meaning the measurements, viewed as a test, have false-negative and false-positive rates of 40%, and $b_1^{\mathrm{true}}(s_+) = 0.5$. Moreover, the behavior policy is given by $T = T^{\mathrm{true}}$, $O = O^{\mathrm{true}}$, $b_1 = b_1^{\mathrm{true}}$, $\eta = 10$, $\mu_{a_=}(s_+) = 0.5$, and $\mu_{a_-}(s_-) = \mu_{a_+}(s_+) = 1.3$. Intuitively, doctors continue monitoring the patient until they are 90% confident in declaring a final diagnosis. In this scenario, $T$ and $\eta$ are assumed to be known. The behavioral dataset is generated as 100 demonstration trajectories.
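For concreteness, a minimal sketch of the DIAG generative setup under these parameter values; the state/action/observation integer encodings and the trajectory length cap are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# States: 0 = s-, 1 = s+; observations: 0 = z-, 1 = z+.
# Actions: 0 = a= (continue), 1 = a- (stop healthy), 2 = a+ (stop diseased).
T = np.zeros((2, 3, 2)); T[0, :, 0] = 1.0; T[1, :, 1] = 1.0   # absorbing states
O = np.zeros((3, 2, 2))
O[:, 0, 0] = 0.6; O[:, 0, 1] = 0.4    # false-positive rate 40% given s-
O[:, 1, 1] = 0.6; O[:, 1, 0] = 0.4    # false-negative rate 40% given s+
b1 = np.array([0.5, 0.5])
eta = 10.0
mu = np.array([[0.5, 0.5],     # mu_{a=}: keep monitoring near uncertainty
               [1.3, -0.3],    # mu_{a-}: stop-healthy mean (sums to one)
               [-0.3, 1.3]])   # mu_{a+}: stop-diseased mean (sums to one)

def policy(b):
    d2 = ((b[None, :] - mu) ** 2).sum(axis=1)
    w = np.exp(-eta * (d2 - d2.min()))
    return w / w.sum()

def sample_trajectory(max_len=20):
    s, b, traj = rng.choice(2, p=b1), b1.copy(), []
    for _ in range(max_len):
        a = rng.choice(3, p=policy(b))
        if a != 0:                                 # stopping action
            traj.append((a, None)); break
        s = rng.choice(2, p=T[s, a])               # state persists (p = 1)
        z = rng.choice(2, p=O[a, s])
        traj.append((a, z))
        b = (b @ T[:, a, :]) * O[a, :, z]; b /= b.sum()
    return traj
```

With these mean vectors, the boundary between continuing and stopping lies at b(s+) of 0.1 and 0.9, consistent with the 90% confidence target mentioned above.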
BIAS We set all parameters exactly the same way we did in DIAG, with one important exception: now $O(z_-|a_=, s_+) = 0.2$ while it is still the case that $O^{\mathrm{true}}(z_-|a_=, s_+) = 0.4$, meaning $O \neq O^{\mathrm{true}}$ anymore. In this scenario, $b_1$ is also assumed to be known (in addition to $T$ and $\eta$) to avoid any invariances between $b_1$ and $O$ that we encountered during training. The behavioral dataset is generated as 1,000 demonstration trajectories.
C.2 DETAILS OF BENCHMARK ALGORITHMS
R-BC We train an RNN whose inputs are the observed histories $h_t$ and whose outputs are the predicted probabilities $\hat\pi(a|h_t)$ of taking action $a$ given the observed history $h_t$. The network consists of an LSTM unit of size 64 and a fully-connected hidden layer of size 64. We minimize the cross-entropy loss $L = -\sum_{i=1}^{n} \sum_{t=1}^{\tau} \sum_{a \in A} \mathbb{I}\{a_t = a\} \log \hat\pi(a|h_t)$ using the Adam optimizer with learning rate 0.001 until convergence, that is, when the cross-entropy loss does not improve for 100 consecutive iterations.
PO-IRL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. The reward parameter $R$ is initialized as $\hat{R}_0(s, a) = \varepsilon_{s,a}$ where $\varepsilon_{s,a} \sim \mathcal{N}(0, 0.001^2)$. Then, it is estimated via Markov chain Monte Carlo (MCMC) sampling, during which new candidate samples are generated by adding Gaussian noise with standard deviation 0.001 to the last sample. A final estimate is formed by averaging every tenth sample among the second set of 500 samples, ignoring the first 500 samples. In order to compute optimal q-values, we have used an off-the-shelf POMDP solver available at https://www.pomdp.org/code/index.html.
Off. PO-IRL All parameters are initialized exactly the same way as in PO-IRL. Then, both the IOHMM parameters T , O, and b1, and the reward parameter R are estimated jointly via MCMC sampling. When generating new candidate samples, with equal probabilities, we have either sampled new T , O, and b1 from IOHMM posterior (without changing R) or obtained a new R the same way we did in PO-IRL (without changing T , O, and b1). A final estimate is formed the same way as in PO-IRL.
PO-MB-IL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. Given the IOHMM parameters, we parameterized policies the same way we did in INTERPOLE, that is, as described in (2). The policy parameters $\{\mu_a\}_{a \in A}$ are initialized as $\hat\mu^0_a(s) = (1/|S| + \varepsilon_{a,s}) / \sum_{s' \in S} (1/|S| + \varepsilon_{a,s'})$ where $\varepsilon_{a,s'} \sim \mathcal{N}(0, 0.001^2)$. Then, they are estimated according solely to the action likelihoods in (4) using the EM algorithm. The expected log-posterior is maximized using the Adam optimizer with learning rate 0.001 until convergence, that is, when the expected log-posterior does not improve for 100 consecutive iterations.
INTERPOLE All parameters are initialized exactly the same way as in PO-MB-IL. Then, the IOHMM parameters T , O, and b1, and the policy parameters {µa}a∈A are estimated jointly according to both the action likelihoods and the observation likelihoods in (4). The expected log-posterior is again maximized using Adam optimizer with learning rate 0.001 until convergence.
C.3 FURTHER EXAMPLE: POST-HOC ANALYSES
Policy representations learned by INTERPOLE provide users with means to derive concrete criteria that describe observed behavior in objective terms. These criteria, in turn, enable quantitative analyses of the behavior using conventional statistical methods. For ADNI, we have considered two such criteria: belatedness of individual diagnoses and informativeness of individual tests. Both of these criteria are relevant to the discussion of early diagnosis, which is paramount for Alzheimer’s disease [51] as we have already mentioned during the illustrative examples.
Formally, we consider the final diagnosis of a patient to be belated if (i) the patient was not ordered an MRI in one of their visits despite the fact that an MRI being ordered was the most likely outcome according to the policy estimated by INTERPOLE, and (ii) the patient was ordered an MRI in a later visit that led to a near-certain diagnosis with at least 90% confidence according to the underlying beliefs estimated by INTERPOLE. We consider an MRI to be uninformative if it neither (factually) caused nor could have (counterfactually) caused a significant change in the underlying belief-state of the patient, where a change counts as insignificant if it is more than half a standard deviation below the mean factual change in beliefs estimated by INTERPOLE.
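A minimal sketch of how the belatedness criterion can be operationalized once beliefs and the policy are estimated; the 90% threshold follows the definition above, while the visit indexing and argmax reading of "most likely outcome" are illustrative conventions.

```python
import numpy as np

def belated_diagnosis(beliefs, actions, policy, mri_action=1, conf=0.9):
    """Flag a trajectory fitting the 'belatedness' pattern defined above.

    beliefs: per-visit belief vectors (length tau + 1, including post-visit);
    actions: per-visit actions taken; policy: maps a belief to an action
    distribution (e.g. Equation 2 with the estimated mean vectors).
    """
    # (i) visits where an MRI was the policy's most likely action but skipped
    skipped = [t for t, (b, a) in enumerate(zip(beliefs, actions))
               if a != mri_action and np.argmax(policy(b)) == mri_action]
    if not skipped:
        return False
    # (ii) a later visit whose MRI yields a near-certain diagnosis
    t0 = skipped[0]
    return any(actions[t] == mri_action and beliefs[t + 1].max() >= conf
               for t in range(t0 + 1, len(actions)) if t + 1 < len(beliefs))
```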
Having defined belatedness and informativeness, one can investigate the frequency of belated diagnoses and uninformative MRIs in different cohorts of patients to see how practice varies between one cohort to another. In Table 5, we do so for six cohorts: all of the patients, patients who are over 75 years old, patients with apoE4 risk factor for dementia, patients with signs of MCI or dementia since their very first visit, female patients, and male patients. Note that increasing age, apoE4 allele, and female gender are known to be associated with increased risk of Alzheimer’s disease [54–57]. For instance, we see that uninformative MRIs are much more prevalent among patients with signs of MCI or dementia since their first visit. This could potentially be because these patients are monitored much more closely than usual given their condition.
Alternatively, one can divide patients into cohorts based on whether they have a belated diagnosis or an uninformative MRI to see which features these criteria correlate with more. We do so in Table 6. For instance, a considerable percentage of belated diagnoses occur among male patients.
C.4 FURTHER EXAMPLE: DECISION TREES
Clinical practice guidelines are often given in the form of decision trees, which usually have vague elements that require the judgement of the practitioner [58, 59]. For example, the guideline could ask the practitioner to quantify risks, side effects, or improvements in subjective terms such as being significant, serious, or potential. Using direct policy learning, one can learn in objective terms how vague elements like these are commonly resolved in practice.
Formulating policies in terms of IOHMMs and decision boundaries is expressive enough to model decision trees. An IOHMM with deterministic observations, that is, $O(z|a, s') = 1$ for some $z \in Z$ and for all $a \in A, s' \in S$, essentially describes a finite-state machine whose inputs are equivalent to the observations. Similarly, a deterministic decision tree can be defined as a finite-state machine with no looping sequence of transitions. The case where the observations are probabilistic rather than deterministic corresponds to the case where the decision tree is traversed in a probabilistic way, so that each path down the tree has a probability associated with it at each step of the traversal.
As a concrete example of modeling decision trees in terms of IOHMMs, consider the scenario of diagnosing a disease with two sub-types: Disease-A and Disease-B. Figure 5a depicts the policy of the doctors in the form of a decision tree. Each newly-arriving patient is first tested for the disease in a general sense without any distinction between the two sub-types it has. The patient is then tested
for a specific sub-type of the disease only if the doctors deem there is a significant risk that the patient is diseased. Note that which exact level of confidence constitutes as a significant risk is left vague in the decision tree. By modeling this scenario using our framework, we can learn: (i) how the risk is determined based on initial test results and (ii) what amount of risk is considered significant enough to require a subsequent test for the sub-type.
Let S = {INI, HLT, DIS, DSA, DSB}, where INI denotes that the patient has newly arrived, HLT denotes that the patient is healthy, DIS denotes that the patient is diseased, DSA denotes that the patient has Disease-A, and DSB denotes that the patient has Disease-B. Figure 5b depicts the state space S with all possible transitions. Note that the initial belief b1 is such that b1(INI) = 1. Let A = {TST-DIS, TST-TYP, STP-HLT, STP-DSA, STP-DSB}, where TST-DIS denotes testing for the disease, TST-TYP denotes testing for the sub-type of the disease, and the remaining actions denote stopping and diagnosing the patient with one of the terminal states, namely states HLT, DSA, and DSB.
After taking action $a_1 = $ TST-DIS and observing some initial test result $z_1 \in Z$, the risk of disease, which is the probability that the patient is diseased, can be calculated with a simple belief update:

$$b_2(\mathrm{DIS}) \propto \sum_{s \in S} b_1(s)\, T(\mathrm{DIS}|s, \text{TST-DIS})\, O(z_1|\text{TST-DIS}, \mathrm{DIS}) = T(\mathrm{DIS}|\mathrm{INI}, \text{TST-DIS})\, O(z_1|\text{TST-DIS}, \mathrm{DIS})\,.$$

Moreover, we can say that the doctors are more likely to test for the sub-type of the disease as opposed to stopping and diagnosing the patient as healthy, that is $\pi_b(\text{TST-TYP}|b_2) > \pi_b(\text{STP-HLT}|b_2)$, when

$$b_2(\mathrm{DIS}) > \frac{\mu_{\text{TST-TYP}}(\mathrm{DIS}) + \mu_{\text{STP-HLT}}(\mathrm{DIS})}{2}$$
assuming µTST-TYP(DIS) > µSTP-HLT(DIS). Note that there are only two possible actions at the second time step: actions TST-TYP and STP-HLT.
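A small numeric sketch of this check, with hypothetical values for the transition, observation, and mean-vector entries (none of these numbers come from the paper):

```python
# Hypothetical values for the disease/sub-type example above.
mu_tst_typ_dis = 0.55                  # mu_{TST-TYP}(DIS), assumed
mu_stp_hlt_dis = 0.15                  # mu_{STP-HLT}(DIS), assumed
T_dis_given_ini = 0.3                  # T(DIS | INI, TST-DIS), assumed
O_z1_given_dis = 0.8                   # O(z1 | TST-DIS, DIS), assumed
O_z1_given_hlt = 0.3                   # O(z1 | TST-DIS, HLT), assumed

# Two-state belief update after the initial test (DIS vs. HLT):
num_dis = T_dis_given_ini * O_z1_given_dis
num_hlt = (1 - T_dis_given_ini) * O_z1_given_hlt
b2_dis = num_dis / (num_dis + num_hlt)

threshold = (mu_tst_typ_dis + mu_stp_hlt_dis) / 2
print(f"risk b2(DIS) = {b2_dis:.2f}, significant-risk threshold = {threshold:.2f}")
print("sub-type test more likely than stopping:", b2_dis > threshold)
```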
D DETAILS OF THE CLINICIAN SURVEYS
Each participant was provided a short presentation explaining (1) the ADNI dataset and the decision-making problem we consider, (2) what rewards and reward functions are, (3) what beliefs and belief simplices are, and (4) how policies can be represented in terms of reward functions as well as decision boundaries. Then, they were asked two multiple-choice questions, one that is strictly about representing histories, and one that is strictly about representing policies. Importantly, the survey was conducted blindly—i.e. they were given no context whatsoever as pertains to this paper and our proposed method. The question slides can be found in Figures 6 and 7. Essentially, each question first states a hypothesis and shows two/three representations relevant to the hypothesis stated. Then, the participant is asked which of the representations shown most readily expresses the hypothesis. Here are the full responses that we have received, which include some additional feedback:
• Clinician 1 Question 1: C > B > A Question 2: B Additional Feedback: The triangle was initially more confusing than not, but the first example (100% uncertainty) was helpful. It isn’t clear how the dots in the triangle are computed. Are these probabilities based on statistics? Diagram is always better than no diagram.
• Clinician 2 Question 1: C > B > A Question 2: B Additional Feedback: I always prefer pictures to tables, they are much easier to understand.
• Clinician 3 Question 1: C > B > A Question 2: B Additional Feedback: Of course the triangle is more concise and easier to look at. But how is the decision boundary obtained? Does the decision boundary always have to be parallel to one of the sides of the triangle?
• Clinician 4 Question 1: C > B > A Question 2: B Additional Feedback: [Regarding Question 1,] representation A and B do not show any interpretation of the diagnostic test results, whereas representation C does. I think doctors are most familiar with representation B, as it more closely resembles the EHR. Although representation C is visually pleasing, I’m not sure how the scale of the sides of the triangle should be interpreted. [Regarding Question 2,] again I like the triangle, but it’s hard to interpret what the scale of the sides of the triangle mean. I think option A is again what doctors are more familiar with.
• Clinician 5 Question 1: C > B > A Question 2: A
• Clinician 6 Question 1: C > B > A Question 2: A
• Clinician 7 Question 1: C Question 2: B Additional Feedback: I thought I’d share with you my thoughts on the medical aspects in your scenario first (although I realise you didn’t ask me for them). [...] The Cochrane review concludes that MRI provides low sensitivity and specificity and does not qualify it as an add on test for the early diagnosis due to dementia (Lombardi G et. al. Cochrane database 2020). The reason for MRI imaging is (according to the international guidelines) to exclude non-degenerative or surgical causes of cognitive impairment. [...] In your example the condition became apparent when the CDR-SB score at Month 24 hit 3.0 (supported by the sequence of measurements over time showing worsening CDR-SB score). I imagine the MRI was triggered by slight worsening in the CDR-SB score (to exclude an alternative diagnosis). To answer your specific questions: Q1. The representation C describes your (false) hypothesis that it was the MRI that made the diagnosis of MCI more likely/apparent the best—I really like the triangles. Q2. I really like the decision boundary.
• Clinician 8 Question 1: C > B > A Question 2: B
• Clinician 9 Question 1: C Question 2: B Additional Feedback: Q1. Representation C gives the clearest illustration of the diagnostic change following MRI. However, the representation of beliefs on a continuous spectrum around discrete cognitive states could be potentially confusing given that cognitive function is itself a continuum (and ‘MCI’, ‘Dementia’ and ‘NL’ are stations on a spectrum rather than discrete states). Also, while representation C is the clearest illustration, it is the representation that conveys the least actual data and it isn’t clear from the visualisation exactly what each shift in 2D space represents. Also, the triangulation in ‘C’ draws a direct connection between NL and Dementia, implying that this is a potential alternative route for disease progression, although this is more intuitively considered as a linear progression from NL to MCI to Dementia. Q2. For me, the decision boundary representation best expresses the concept of the likelihood of ordering an MRI with the same caveats described above. Option B does best convey the likelihood of ordering an MRI, but doesn’t convey the information value provided by that investigation. However, my understanding is that this is not what you are aiming to convey here.
1. What is the main contribution of the paper, and how does it relate to the literature on modeling decision-making?
2. What are the strengths of the proposed method, Interpole, and how does it compare to prior methods?
3. Are there any limitations or potential challenges in applying Interpole to tasks with high-dimensional latent belief states?
4. How does Interpole scale with the complexity of the demonstrator's internal models?
5. What is the significance of the three criteria of transparency by design, partial observability, and offline learning in the context of learning from demonstrations?
6. Can you provide more information about the related work mentioned in the review, specifically assistive state estimation, learning sensor models from demonstrations in the LQG setting, learning from a demonstrator with false beliefs, and inverse rational control?
7. Are there any typos or minor errors in the review that should be corrected?

Review
The paper proposes an algorithm for learning policies and internal models ("decision dynamics") from demonstrations. The key idea is to fit a distribution over policies, observation models, and transition models using an EM-like method. Offline experiments on a healthcare dataset show that the method learns interpretable decision dynamics, recovers biased internal models, and accurately predicts actions relative to prior methods.
Overall, the paper is well-written, the experiments are convincing, and the proposed method, Interpole, makes a meaningful contribution to the literature on modeling decision-making.
It would be nice to evaluate Interpole on tasks in which the demonstrator's decision dynamics operate on high-dimensional latent belief states, to illustrate how Interpole scales with the complexity of the demonstrator's internal models. In such settings, would the mean-vector representation in Equation 2 become problematic due to the curse of dimensionality?
There is some missing related work on learning from demonstrations that also satisfies the three criteria of transparency by design, partial observability, and offline learning: assistive state estimation [1], learning sensor models from demonstrations in the LQG setting [2], learning from a demonstrator with false beliefs [3], and inverse rational control [4].
[1] https://arxiv.org/pdf/2008.02840.pdf
[2] https://fias.uni-frankfurt.de/~rothkopf/docs/Schmitt_et_al_2017.pdf
[3] https://arxiv.org/pdf/1512.05832.pdf
[4] https://arxiv.org/pdf/2009.12576.pdf
Typos:
"log-like-lihood" -> log-likelihood (page 5) |
ICLR

Title
Explaining by Imitating: Understanding Decisions by Interpretable Policy Learning
Abstract
Understanding human behavior from observed data is critical for transparency and accountability in decision-making. Consider real-world settings such as healthcare, in which modeling a decision-maker’s policy is challenging—with no access to underlying states, no knowledge of environment dynamics, and no allowance for live experimentation. We desire learning a data-driven representation of decisionmaking behavior that (1) inheres transparency by design, (2) accommodates partial observability, and (3) operates completely offline. To satisfy these key criteria, we propose a novel model-based Bayesian method for interpretable policy learning (“INTERPOLE”) that jointly estimates an agent’s (possibly biased) belief-update process together with their (possibly suboptimal) belief-action mapping. Through experiments on both simulated and real-world data for the problem of Alzheimer’s disease diagnosis, we illustrate the potential of our approach as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
1 INTRODUCTION
A principal challenge in modeling human behavior lies in obtaining a transparent understanding of decision-making. In medical diagnosis, for instance, there is often significant regional and institutional variation in clinical practice [1], much of it contributing to rising healthcare costs [2]. The ability to quantify different decision processes is the first step towards a more systematic understanding of medical practice. Purely by observing demonstrated behavior, our principal objective is to answer the question: Under any given state of affairs, what actions are (more/less) likely to be taken, and why?
We address this challenge by setting our sights on three key criteria. First, we desire a method that is transparent by design. Specifically, a transparent description of behavior should locate the factors that contribute to individual decisions, in a language readily understood by domain experts [3, 4]. This will be clearer per our subsequent formalism, but we can already note some contrasts: Classical imitation learning—popularly by reduction to supervised classification—does not fit the bill, since black-box hidden states of RNNs are rarely amenable to meaningful interpretation. Similarly, apprenticeship learning algorithms—popularly through inverse reinforcement learning—do not satisfy either, since the high-level nature of reward mappings is not informative as to individual actions observed in the data. Rather than focusing purely on replicating actions (imitation learning) or on matching expert performance (apprenticeship learning), our chief pursuit lies in understanding demonstrated behavior.
Second, real-world environments such as healthcare are often partially observable in nature. This requires modeling the accumulation of information from entire sequences of past observations—an endeavor that is prima facie at odds with the goal of transparency. For instance, in a fully-observable setting, (model-free) behavioral cloning is arguably ‘transparent’ in providing simple mappings of states to actions; however, coping with partial observability using any form of recurrent function
approximation immediately lands in black-box territory. Likewise, while (model-based) methods have been developed for robotic control, their transparency crucially hinges on fully-observable kinematics.
Finally, in realistic settings it is often impossible to experiment online—especially in high-stakes environments with real products and patients. The vast majority of recent work in (inverse) reinforcement learning has focused on games, simulations, and gym environments where access to live interaction is unrestricted. By contrast, in healthcare settings the environment dynamics are neither known a priori, nor estimable by repeated exploration. We want a data-driven representation of behavior that is learnable in a completely offline fashion, yet does not rely on knowing/modeling any true dynamics.
Contributions Our contributions are three-fold. First, we propose a model for interpretable policy learning (“INTERPOLE”)—where sequential observations are aggregated through a decision agent’s decision dynamics (viz. subjective belief-update process), and sequential actions are determined by the agent’s decision boundaries (viz. probabilistic belief-action mapping). Second, we suggest a Bayesian learning algorithm for estimating the model, simultaneously satisfying the key criteria of transparency, partial observability, and offline learning. Third, through experiments on both simulated and real-world data for Alzheimer’s disease diagnosis, we illustrate the potential of our method as an investigative device for auditing, quantifying, and understanding human decision-making behavior.
2 RELATED WORK
We seek to learn an interpretable parameterization of observed behavior to understand an agent’s actions. Fundamentally, this contrasts with imitation learning (which seeks to best replicate demonstrated policies) and apprenticeship learning (which seeks to match some notion of performance).
Imitation Learning In fully-observable settings, behavior cloning (BC) readily reduces the imitation problem to one of supervised classification [5, 11–13]; i.e. actions are simply regressed on observations. While this can be extended to account for partial observability by parameterizing policies via recurrent function approximation [14], it immediately gives up on ease of interpretability per the black-box nature of RNN hidden states. A plethora of model-free techniques have recently been developed, which account for information in the rollout dynamics of the environment during policy learning (see e.g. [15–20])—most famously, generative adversarial imitation learning (GAIL) based on state-distribution matching [6, 21]. However, such methods require repeated online rollouts of intermediate policies during training, and also face the same black-box problem as BC in partially observable settings. Clearly in model-free imitation, it is difficult to admit both transparency and partial observability.
Specifically with an eye on explainability, Info-GAIL [22, 23] proposes an orthogonal notion of “interpretability” that hinges on clustering similar demonstrations to explain variations in behavior. However, as with GAIL it suffers from the need for live interaction for learning. Finally, several model-based techniques for imitation learning (MB-IL) have been studied in the domain of robotics. [24] consider kinematic models designed for robot dynamics, while [25] and [7] consider (non-)linear autoregressive exogenous models. However, such approaches invariably operate in fully-observable settings, and are restricted models hand-crafted for specific robotic applications under consideration.
Apprenticeship Learning In subtle distinction to imitation learning, methods in apprenticeship learning assume the observed behavior is optimal with respect to some underlying reward function. Apprenticeship thus proceeds indirectly—often through inverse reinforcement learning (IRL) in order to infer a reward function, which (with appropriate optimization) generates learned behavior that matches the performance of the original—as measured by the rewards (see e.g. [8, 26–29]). These approaches have been variously extended to cope with partial observability (PO-IRL) [9, 30], to offline settings through off-policy evaluation [31–33], as well as to learned environment models [10].
However, a shortcoming of such methods is the requirement that the demonstrated policy in fact be optimal with respect to a true reward function that lies within an (often limited) hypothesis class under consideration—or is otherwise black-box in nature. Further, learning the true environment dynamics [10] corresponds to the requirement that policies be restricted to the class of functions that map from unbiased beliefs (cf. exact inference) into actions. Notably though, [34] considers both a form of suboptimality caused by time-inconsistent agents as well as biased beliefs. However, perhaps most importantly, due to the indirect, task-level nature of reward functions, inverse reinforcement learning is essentially opposed to our central goal of transparency—that is, in providing direct, action-level descriptions of behavior. In Section 5, we provide empirical evidence of this notion of interpretability.
Towards INTERPOLE In contrast, we avoid making any assumptions as to either unbiasedness of beliefs or optimality of policies. After all, the former requires estimating (externally) “true” environment dynamics, and the latter requires specifying (objectively) “true” classes of reward functions— neither of which are necessary per our goal of transparently describing individual actions. Instead, INTERPOLE simply seeks the most plausible explanation in terms of (internal) decision dynamics and (subjective) decision boundaries. To the best of our knowledge, our work is the first to tackle all three key criteria—while making no assumptions on the generative process behind behaviors. Table 1 contextualizes our work, showing typical incarnations of related approaches and their graphical models.
Before continuing, we note that the separation between the internal dynamics of an agent and the external dynamics of the environment has been considered in several other works, though often for entirely different problem formulations. Most notably, [35] tackles the same policy learning problem as we do in online, fully-observable environments but for agent’s with internal states that cannot be observed. They propose agent Markov models (AMMs) to model such environment-agent interactions. For problems other than policy learning, [36–38] also consider the subproblem of inferring an agent’s internal dynamics; however, none of these works satisfy all three key criteria simultaneously as we do.
3 INTERPRETABLE POLICY LEARNING
We first introduce INTERPOLE’s model of behavior, formalizing notions of decision dynamics and decision boundaries. In the next section, we suggest a Bayesian algorithm for model-learning from data.
Problem Setup Consider a partially-observable decision-making environment in discrete time. At each step $t$, the agent takes action $a_t \in A$ and observes outcome $z_t \in Z$.¹ We have at our disposal an observed dataset of demonstrations $D = \{(a^i_1, z^i_1, \ldots, a^i_{\tau_i}, z^i_{\tau_i})\}_{i=1}^{n}$ by an agent, $\tau_i$ being the length of the $i$-th trajectory (we shall omit indices $i$ unless required). Denote by $h_t \doteq (a_1, z_1, \ldots, a_{t-1}, z_{t-1})$ the observed history at the beginning of step $t$, where $h_1 \doteq \varnothing$. Analogously, let $H_t \doteq (A \times Z)^{t-1}$ indicate the set of all possible histories at the start of step $t$, where $H_1 \doteq \{\varnothing\}$, and let $H \doteq \cup_{t=1}^{\infty} H_t$.
A proper policy $\pi$ is a mapping $\pi \in \Delta(A)^H$ from observed histories to action distributions, where $\pi(a|h)$ is the probability of taking action $a$ given $h$. We assume that $D$ is generated by an agent acting according to some behavioral policy $\pi_b$. The problem we wish to tackle, then, is precisely how to obtain an interpretable parameterization of $\pi_b$. We proceed in two steps: First, we describe a parsimonious belief-update process for accumulating histories—which we term decision dynamics. Then, we take beliefs to actions via a probabilistic mapping—which gives rise to decision boundaries.
Decision Dynamics We model belief-updates by way of an input-output hidden Markov model (IOHMM) identified by the tuple $(S, A, Z, T, O, b_1)$, with $S$ being the finite set of underlying states. $T \in \Delta(S)^{S \times A}$ denotes the transition function such that $T(s_{t+1}|s_t, a_t)$ gives the probability of transitioning into state $s_{t+1}$ upon action $a_t$ in state $s_t$, and $O \in \Delta(Z)^{A \times S}$ denotes the observation function such that $O(z_t|a_t, s_{t+1})$ gives the probability of observing $z_t$ after taking action $a_t$ and transitioning into state $s_{t+1}$. Finally, let beliefs $b_t \in \Delta(S)$ indicate the probability $b_t(s)$ that the
1While we take it here that Z is finite, our method can easily be generalized to allow continuous observations.
environment exists in any state s ∈ S at time t, and let b1 give the initial state distribution. Note that—unlike in existing uses of the IOHMM formalism—these “probabilities” are for representing the thought process of the human, and may freely diverge from the actual mechanics of the world. To aggregate observed histories as beliefs, we identify bt(s) with P(st = s|ht)—an interpretation that leads to the recursive belief-update process (where in our problem, quantities T,O, b1 are unknown):
$$b_{t+1}(s') \propto \sum_{s \in \mathcal{S}} b_t(s)\, T(s'|s, a_t)\, O(z_t|a_t, s') \quad (1)$$
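To make the recursion concrete, below is a minimal NumPy sketch of the belief update in Equation 1. The array layouts and the function name are our own illustrative conventions, not the paper's.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """One step of the recursive belief update in Equation 1.

    b: current belief over states, shape (S,)
    a: index of the action just taken
    z: index of the outcome just observed
    T: transitions, shape (S, A, S), with T[s, a, s'] = T(s'|s, a)
    O: observations, shape (A, S, Z), with O[a, s', z] = O(z|a, s')
    """
    # Unnormalized posterior: sum_s b(s) T(s'|s, a) O(z|a, s')
    b_next = (b @ T[:, a, :]) * O[a, :, z]
    return b_next / b_next.sum()  # renormalize over states
```

Note that nothing in this computation refers to the real environment: T and O here are the agent's (internal) decision dynamics, whatever they turn out to be.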
A key distinction bears emphasis: We do not require that this latter set of quantities correspond to (external) environment dynamics—and we do not obligate ourselves to recover any such notion of “true” parameters. To do so would imply the assumption that the agent in fact performs exactly unbiased inference on a perfectly known model of the environment, which is restrictive. It is also unnecessary, since our mandate is simply to model the (internal) mechanics of decision-making— which could well be generated from possibly biased beliefs or imperfectly known models of the world. In other words, our objective (see Equation 3) of simultaneously determining the most likely beliefs (cf. decision dynamics) and policies (cf. decision boundaries) is fundamentally more parsimonious.
Decision Boundaries Given decision dynamics, a policy is then equivalently a map $\pi \in \Delta(\mathcal{A})^{\Delta(\mathcal{S})}$. Now, what is an interpretable parameterization? Consider the three-state example in Figure 1. We argue that a probabilistic parameterization that directly induces “decision regions” (cf. panel 1b) over the belief simplex is uniquely interpretable. For instance, strong beliefs that a patient has underlying mild cognitive impairment may map to the region where a specific follow-up test is promptly prescribed; this parameterization allows clearly locating such regions—as well as their boundaries.
Precisely, we parameterize policies in terms of |A|-many “mean” vectors that correspond to actions:
$$\pi(a|b) = \frac{e^{-\eta\|b-\mu_a\|^2}}{\sum_{a'\in\mathcal{A}} e^{-\eta\|b-\mu_{a'}\|^2}}\,, \qquad \sum_{s\in\mathcal{S}} \mu_a(s) = 1 \quad (2)$$
where $\eta \ge 0$ is the inverse temperature, $\|\cdot\|$ the $\ell_2$-norm, and $\mu_a \in \mathbb{R}^{|\mathcal{S}|}$ the mean vector corresponding to action $a \in \mathcal{A}$. Intuitively, mean vectors induce decision boundaries (and decision regions) over the belief space $\Delta(\mathcal{S})$: At any time, the action whose corresponding mean is closest to the current belief is most likely to be chosen. In particular, lines that are equidistant to the means of any pair of actions form decision boundaries between them. The inverse temperature controls the transitions between such boundaries: A larger $\eta$ captures more deterministic behavior (i.e. more “abrupt” transitions), whereas a smaller $\eta$ captures more stochastic behavior (i.e. “smoother” transitions). Note that the case of $\eta = 0$ recovers policies that are uniformly random, and $\eta \to \infty$ recovers argmax policies. A second distinction is due: The exponentiated form of Equation 2 should not be confused with typical Boltzmann [27] or MaxEnt [39] policies common in RL: These are indirect parameterizations via optimal/soft q-values, which themselves require approximate solutions to optimization problems; as we shall see in our experiments, the quality of learned policies suffers as a result. Further, using q-values would imply the assumption that the agent in fact behaves optimally w.r.t. an (objectively) “true” class of reward functions—e.g. linear—which is restrictive. It is also unnecessary, as our mandate is simply to capture their (subjective) tendencies toward different actions—which are generated from possibly suboptimal policies. In contrast, by directly partitioning the belief simplex into probabilistic “decision regions”, INTERPOLE’s mean-vector representation can be immediately explained and understood.
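A minimal sketch of this parameterization, reusing the array conventions of the earlier snippet:

```python
import numpy as np

def policy(b, mus, eta):
    """Mean-vector policy of Equation 2.

    b: belief, shape (S,); mus: mean vectors, shape (A, S); eta >= 0.
    Returns action probabilities, shape (A,).
    """
    d2 = np.sum((mus - b) ** 2, axis=1)  # squared l2 distance to each mean
    logits = -eta * d2
    logits -= logits.max()               # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

With eta = 0 this returns the uniform distribution, and as eta grows it approaches an argmax over the nearest mean, matching the limiting cases discussed above.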
Learning Objective In a nutshell, our objective is to identify the most likely parameterizations $T, O, b_1$ for decision dynamics as well as $\eta, \{\mu_a\}_{a\in\mathcal{A}}$ for decision boundaries, given the observed data:
$$\text{Given: } \mathcal{D}, \mathcal{S}, \mathcal{A}, \mathcal{Z} \qquad \text{Determine: } T, O, b_1, \eta, \{\mu_a\}_{a\in\mathcal{A}} \quad (3)$$
Next, we propose a Bayesian algorithm that finds the maximum a posteriori (MAP) estimate of these quantities. Figure 2 illustrates the problem setup.
4 BAYESIAN INTERPRETABLE POLICY LEARNING
Denote with $\theta \doteq (T, O, b_1, \eta, \{\mu_a\}_{a\in\mathcal{A}})$ the set of parameters to be determined, and let $\theta$ be drawn from some prior $\mathbb{P}(\theta)$. In addition, denote with $\bar{\mathcal{D}} = \{(s^i_1, \ldots, s^i_{\tau_i+1})\}_{i=1}^{n}$ the set of underlying (unobserved) state trajectories, such that $\mathcal{D} \cup \bar{\mathcal{D}}$ gives the complete (fully-observed) dataset. Then the complete likelihood—of the unknown parameters $\theta$ with respect to $\mathcal{D}\cup\bar{\mathcal{D}}$—is given by the following:
$$\mathbb{P}(\mathcal{D},\bar{\mathcal{D}}\,|\,\theta) = \underbrace{\prod_{i=1}^{n}\prod_{t=1}^{\tau} \pi(a_t \,|\, b_t[T,O,b_1,h_{t-1}])}_{\text{action likelihoods}} \times \underbrace{\prod_{i=1}^{n} b_1(s_1) \prod_{t=1}^{\tau} T(s_{t+1}|s_t,a_t)\, O(z_t|a_t,s_{t+1})}_{\text{observation likelihoods}} \quad (4)$$
where $\pi(\cdot|\cdot)$ is described by $\eta$ and $\{\mu_a\}_{a\in\mathcal{A}}$, and each $b_t[\cdot]$ is a function of $T$, $O$, $b_1$, and $h_{t-1}$ (Equation 1). Since we do not have access to $\bar{\mathcal{D}}$, we propose an expectation-maximization (EM)-like algorithm for maximizing the posterior $\mathbb{P}(\theta|\mathcal{D}) = \mathbb{P}(\mathcal{D}|\theta)\,\mathbb{P}(\theta) / \int \mathbb{P}(\mathcal{D}|\theta)\, d\mathbb{P}(\theta)$ over the parameters:
Algorithm 1 Bayesian INTERPOLE
1: Parameters: learning rate $w \in \mathbb{R}_+$
2: Input: dataset $\mathcal{D} = \{h^i_{\tau_i+1}\}_{i=1}^{n}$, prior $\mathbb{P}(\theta)$
3: Sample $\hat\theta_0$ from $\mathbb{P}(\theta)$
4: for $k = 1, 2, \ldots$ do
5:   Compute $\mathbb{P}(\bar{\mathcal{D}}\,|\,\mathcal{D}, \hat\theta_{k-1})$  ▷ Appendix A.1
6:   Compute $\nabla_\theta Q(\theta; \hat\theta_{k-1})$ at $\hat\theta_{k-1}$  ▷ Appendix A.2
7:   $\hat\theta_k \leftarrow \hat\theta_{k-1} + w\,[\nabla_\theta Q(\theta; \hat\theta_{k-1}) + \nabla_\theta \log \mathbb{P}(\theta)]_{\theta=\hat\theta_{k-1}}$
8: while (6) holds
9: $\hat\theta \leftarrow \hat\theta_{k-1}$
10: Output: MAP estimate $\hat\theta \doteq (\hat{T}, \hat{O}, \hat{b}_1, \hat\eta, \{\hat\mu_a\}_{a\in\mathcal{A}})$
Bayesian Learning Given an initial estimate $\hat\theta_0$, we iteratively improve the estimate by performing the following steps at each iteration $k$:
• “E-step”: Compute the expected log-likelihood of the model parameters $\theta$ given the previous parameter estimate $\hat\theta_{k-1}$, as follows:
$$Q(\theta; \hat\theta_{k-1}) \doteq \mathbb{E}_{\bar{\mathcal{D}}|\mathcal{D},\hat\theta_{k-1}}\big[\log \mathbb{P}(\mathcal{D},\bar{\mathcal{D}}\,|\,\theta)\big] = \sum_{\bar{\mathcal{D}}} \log \mathbb{P}(\mathcal{D},\bar{\mathcal{D}}\,|\,\theta)\, \mathbb{P}(\bar{\mathcal{D}}\,|\,\mathcal{D},\hat\theta_{k-1}) \quad (5)$$
where we compute the necessary marginalizations of the joint distribution $\mathbb{P}(\bar{\mathcal{D}}\,|\,\mathcal{D}, \hat\theta_{k-1})$ by way of a forward-backward procedure (detailed procedure given in Appendix A.1).
• “M-step”: Compute a new estimate $\hat\theta_k$ that improves the expected log-posterior—that is, such that:
$$Q(\hat\theta_k; \hat\theta_{k-1}) + \log \mathbb{P}(\hat\theta_k) > Q(\hat\theta_{k-1}; \hat\theta_{k-1}) + \log \mathbb{P}(\hat\theta_{k-1}) \quad (6)$$
subject to appropriate non-negativity and normalization constraints on parameters, which can be achieved via gradient-based methods (detailed procedure given in Appendix A.2). We stop when it becomes no longer possible to obtain a new estimate further improving the expected log-posterior—that is, when the “M-step” cannot be performed. Algorithm 1 summarizes this learning procedure.
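In outline, the learning loop can be sketched as follows; the three callables are placeholders standing in for the procedures of Appendices A.1 and A.2, not actual implementations.

```python
def interpole_map(theta0, grad_expected_log_post, improves, project,
                  lr=1e-3, max_iters=10_000):
    """Schematic version of Algorithm 1 (our paraphrase, not the authors' code).

    grad_expected_log_post(theta): placeholder returning the gradient of
        Q(.; theta) plus the log-prior gradient, with the E-step marginals
        (forward-backward, Appendix A.1) folded in.
    improves(new, old): placeholder checking condition (6).
    project(theta): placeholder enforcing non-negativity/normalization.
    """
    theta = theta0
    for _ in range(max_iters):
        candidate = project(theta + lr * grad_expected_log_post(theta))
        if not improves(candidate, theta):  # the "M-step" can no longer be performed
            break
        theta = candidate
    return theta
```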
Explaining by Imitating Recall our original mandate—to give the most plausible explanation for behavior. Two questions can be asked about our proposal to “explain by imitating”—to which we now have precise answers: One concerns explainability, and the other concerns directness of explanations.
First, what constitutes the “most plausible explanation” of behavior? Now, INTERPOLE identifies this as the most likely parameterization of that behavior using a state-based model for beliefs and policies—but otherwise with no further assumptions. In particular, we are only positing that modeling beliefs over states helps provide an interpretable description of how an agent reasons (which we do have ample evidence for2)—but we are not assuming that the environment itself takes the form of a state-based model (which is an entirely different claim). Mathematically, the complete likelihood (Equation 4) highlights the difference between decision dynamics (which help explain the agent’s behavior) and “true” environment dynamics (which we do not care about). The latter are independent of the agent, and learning them would have involved just the observation likelihoods alone. In contrast, by jointly estimating T,O, b1 with η, {µa}a∈A according to both the observation- and action-likelihoods, we are learning the decision dynamics—which in general need not coincide with the environment, but which offer the most plausible explanation of how the agent effectively reasons.
The second question is about directness: Given the popularity of the IRL paradigm, could we have simply used an (indirect) reward parameterization, instead of our (direct) mean-vector parameterization? As it turns out, in addition to the “immediate” interpretability of direct, action-level representations,
2In healthcare, diseases are often modeled in terms of states, and beliefs over disease states are eminently transparent factors that medical practitioners (i.e. domain experts) readily comprehend and reason about [40, 41].
it comes with an extra perk w.r.t. computability: While it is (mathematically) possible to formulate a similar learning problem swapping out $\mu$ for rewards, in practice it is (computationally) intractable to perform in our setting. The precise difficulty lies in differentiating through the quantities $\pi(a_t|b_t)$—which in turn depend on beliefs and dynamics—in the action-likelihoods (proofs located in Appendix B):
Proposition 1 (Differentiability with q-Parameterizations) Consider softmax policies parameterized by q-values from a reward function, such that $\pi(a|b) = e^{q^*(b,a)} / \sum_{a'} e^{q^*(b,a')}$ in lieu of Equation 2. Then differentiating through $\log\pi(a_t|b_t)$ terms with respect to unknown parameters $\theta$ is intractable. In contrast, INTERPOLE avoids ever needing to solve any “forward” problem at all (and therefore does not require resorting to costly—and approximate—sampling-based workarounds) for learning:
Proposition 2 (Differentiability with µ-Parameterizations) Consider the mean-vector policy parameterization proposed in Equation 2. Differentiation through the $\log\pi(a_t|b_t)$ terms with respect to the unknown parameters $\theta$ is easily and automatically performed using backpropagation through time.
5 ILLUSTRATIVE EXAMPLES
Three aspects of INTERPOLE deserve empirical demonstration, and we shall highlight them in turn:
• Interpretability: First, we illustrate the usefulness of our method in providing transparent explanations of behavior. This is our primary objective here—of explaining by imitating.
• Accuracy: Second, we demonstrate that the faithfulness of learned policies is not given up for transparency. This shows that accuracy and interpretability are not necessarily opposed.
• Subjectivity: Third, we show INTERPOLE correctly recovers underlying explanations for behavior—even if the agent is biased. This sets us apart from other state-based algorithms.
In order to do so, we show archetypical examples to exercise our framework, using both simulated and real-world experiments in the context of disease diagnosis. State-based reasoning is prevalent in research and practice: three states in progressive clinical dementia [42, 43], preterminal cancer screening [44, 45], or even—as recently shown—for cystic fibrosis [46] and pulmonary disease [47].
Decision Environments For our real-world setting, we consider the diagnostic patterns for 1,737 patients during sequences of 6-monthly visits in the Alzheimer’s Disease Neuroimaging Initiative [48] database (ADNI). The state space consists of normal functioning (“NL”), mild cognitive impairment (“MCI”), and dementia. For the action space, we consider the decision problem of ordering vs. not ordering an MRI test, which—while often informative of Alzheimer’s—is financially costly. MRI outcomes are categorized according to hippocampal volume: {“avg”, “above avg”, “below avg”, “not ordered”}; separately, the clinical dementia rating-sum of boxes (“CDR-SB”) result—which is always measured—is categorized as: {“normal”, “questionable impairment”, “mild/severe dementia”} [42]. In total, the observation space therefore consists of the 12 combinations of outcomes. We also consider a simulated setting to better validate performance. For this we employ a diagnostic environment (DIAG) in the form of an IOHMM with certain (true) parameters $T^{\text{true}}, O^{\text{true}}, b_1^{\text{true}}$. Patients fall within diseased ($s_+$) and healthy ($s_-$) states, and vital-sign measurements available at every step are classified within positive ($z_+$) and negative ($z_-$) outcomes. For the action space, we consider the decision of continuing to monitor a patient ($a_=$), or stopping and declaring a final diagnosis—and if so, a diseased ($a_+$) or healthy ($a_-$) declaration. If we assume agents have perfect knowledge of the true environment, then this setup is similar to the classic “tiger problem” for optimal stopping [49]. Lastly, we also consider a (more realistic) variant of DIAG where the agent’s behavior is instead generated by biased beliefs due to incorrect knowledge $T, O, b_1 \neq T^{\text{true}}, O^{\text{true}}, b_1^{\text{true}}$ of the environment (BIAS). Importantly, this generates a testable version of real-life settings where decision-makers’ (subjective) beliefs often fail to coincide with (objective) probabilities in the world.
Benchmark Algorithms Where appropriate, we compare INTERPOLE against the following benchmarks: imitation by behavioral cloning [5] using RNNs for partial observability (R-BC); Bayesian IRL on POMDPs [9] equipped with a learned environment model (PO-IRL); a fully-offline counterpart [10] of Bayesian IRL (Off. PO-IRL); and an adaptation of model-based imitation learning [7] to partially-observable settings, with a learned IOHMM as the model (PO-MB-IL). Algorithms requiring learned models for interaction are given IOHMMs estimated using conventional methods [50]. Further information on environments and benchmark implementations is found in Appendix C.
Interpretability First, we direct attention to the potential utility of INTERPOLE as an investigative device for auditing and quantifying individual decisions. Specifically, modeling the evolution of an agent’s beliefs provides a concrete basis for analyzing the corresponding sequence of actions taken:
• Explaining Trajectories. Figure 3 shows examples of such decision trajectories for four real ADNI patients. Each vertex of the belief simplex corresponds to one of the three stable diagnoses, and each point in the simplex corresponds to a unique belief (i.e. probability distribution). The closer the point is to a vertex (i.e. state), the higher the probability assigned to that state. For instance, if the belief is located exactly in the middle of the simplex (i.e. equidistant from all vertices), then all states are believed to be equally likely. If the belief is located exactly on a vertex (e.g. directly on top of MCI), then this corresponds to an absolutely certainty of MCI being the underlying state. Patients (a) and (b) are “typical” patients who fit well to the overall learned policy. The former is a normally-functioning patient believed to remain around the decision boundary in all visits except the first; appropriately, they are ordered an MRI during approximately half of their visits. The latter is believed to be deteriorating from MCI towards dementia, hence prescribed an MRI in all visits.
• Identifying Belated Diagnoses. In many diseases, early diagnosis is paramount [51]. INTERPOLE allows detecting patients who appear to have been diagnosed significantly later than they should have. Patient (c) was ordered an MRI in neither of their first two visits—despite the fact that the “typical” policy would have strongly recommended one. At a third visit, the MRI that was finally ordered led to near-certainty of cognitive impairment—but this could have been known 12 months earlier! In fact, among all ADNI patients in the database, 6.5% were subject to this apparent pattern of “belatedness”, where a late MRI is immediately followed by a jump to near-certain deterioration.
• Quantifying Value of Information. Patient (d) highlights how INTERPOLE can be used to quantify the value of a test in terms of its information gain. While the patient was ordered an MRI in all of their visits, it may appear (on the surface) that the third and final MRIs were redundant—since they had little apparent effect on beliefs. However, this is only true for the factual belief update that occurred according to the MRI outcome that was actually observed. Having access to an estimated model of how beliefs are updated in the form of decision dynamics, we can also compute counterfactual belief updates—that is, belief updates that could have occurred if the MRI outcome in question were to be different (see the sketch below). In the particular case of patient (d), the tests were in fact highly informative, since (as it happened) the patient’s CDR-SB scores were suggestive of impairment, and (in the counterfactual) the doctor’s beliefs could have potentially leapt drastically towards MCI. On the other hand, among all MRIs ordered for ADNI patients, 19% may indeed have been unnecessary (i.e. triggering apparently insignificant belief-updates both factually as well as counterfactually).
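The counterfactual computation referenced above amounts to re-running the belief update under each alternative test outcome. A small sketch, reusing belief_update() from the earlier snippet (the particular measure of belief-change magnitude is ours for illustration):

```python
import numpy as np

def belief_shifts(b, a, T, O, dist=lambda p, q: np.abs(p - q).sum()):
    """Belief shift for every possible outcome z of test a: entry z is the
    factual shift if z was actually observed, and a counterfactual shift
    otherwise."""
    return np.array([dist(belief_update(b, a, z, T, O), b)
                     for z in range(O.shape[2])])
```

A test whose shifts are all small, factually and counterfactually, is a candidate for being flagged as uninformative in the sense used above.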
Evaluating Interpretability through Clinician Surveys
To cement the argument for interpretability, we evaluated INTERPOLE by consulting nine clinicians from four different countries (United States, United Kingdom, the Netherlands, and China) for feedback. We focused on evaluating two aspects of interpretability regarding our method:
• Decision Dynamics: Whether the proposed representation of (possibly subjective) belief trajectories are preferable to raw action-observation trajectories—that is, whether decision dynamics are a transparent way of modeling how information is aggregated by decision-makers.
• Decision Boundaries: Whether the proposed representation of (possibly suboptimal) decision boundaries is a more transparent way of describing policies, compared with the representation of reward functions (which is the conventional approach in the policy learning literature).
For the first aspect, we presented to the participating clinicians the medical history of an example patient from ADNI represented in three ways using: only the most recent action-observation, the complete action-observation trajectory, as well as the belief trajectory as recovered by INTERPOLE. Result: All nine clinicians preferred the belief trajectories over action-observation trajectories.
For the second aspect, we showed them the policies learned from ADNI by both Off. PO-IRL and INTERPOLE, which parameterize policies in terms of reward functions and decision boundaries respectively. Result: Seven out of nine clinicians preferred the representation in terms of decision boundaries over that offered by reward functions. Further details can be found in Appendix D.
[Table fragment; INTERPOLE row: 0.6±0.1, 0.8±0.2, 5.38±1.14]
Accuracy Now, a reasonable question is whether such explainability comes at a cost: By learning an interpretable policy, do we sacrifice any accuracy? To be precise, we can ask the following questions:
• Is the belief-update process the same? For this, the appropriate metric is the discrepancy with respect to the sequence of beliefs—which we take to be $\sum_t D_{\mathrm{KL}}(b_t \,\|\, \hat{b}_t)$ (Belief Mismatch).
• Is the belief-action mapping the same? Our metric is the discrepancy with respect to the policy distribution itself—which we take to be $\sum_t D_{\mathrm{KL}}(\pi_b(\cdot|b_t) \,\|\, \hat\pi(\cdot|\hat{b}_t))$ (Policy Mismatch).
• Is the effective behavior the same? Here, the metrics are those measuring the discrepancy with respect to ground-truth actions observed (Action-Matching) for ADNI, and differences in stopping (Stopping Time Error) for DIAG.
Note that the action-matching and stopping time errors evaluate the quality of learned models in imitating per se, whereas belief mismatch and policy mismatch evaluate their quality in explaining.3
The results are revealing, if not necessarily surprising. To begin, we observe for the ADNI setting in Table 2 that INTERPOLE performs first- or second-best across all three action-matching based metrics; where it comes second, it does so only by a small margin to R-BC (bearing in mind that R-BC is specifically optimized for nothing but action-matching). Similarly for the DIAG setting, we observe in Table 3 that INTERPOLE performs the best in terms of stopping-time error. In other words, it appears that little—if any—imitation accuracy is lost by using INTERPOLE as the model.
Perhaps more interestingly, we also see in Table 3 that the quality of internal explanations is superior— in terms of both belief mismatch and policy mismatch. In particular, even though the comparators PO-MB-IL, PO-IRL, and Off. PO-IRL are able to map decisions through beliefs, they inherit the conventional approach of attempting to estimate true environment dynamics, which is unnecessary— and possibly detrimental—if the goal is simply to find the most likely explanation of behavior. Notably, while the difference in imitation quality among the various benchmarks is not tremendous, with respect to explanation quality the gap is significant—where INTERPOLE has great advantage.
Subjectivity Most significantly, we now show that INTERPOLE correctly recovers the underlying explanations, even if—or perhaps especially if—the agent is driven by subjective reasoning (i.e. with biased beliefs). This aspect sets INTERPOLE firmly apart from the alternative state-based techniques.
3Belief/policy mismatch are not applicable to ADNI since we have no access to ground-truth beliefs/policies.
[Table fragment; INTERPOLE row: 3.8±0.34, 1.8±0.6, 2.14±0.51]
Consider the BIAS environment: Here the true environment dynamics are unchanged from DIAG, but no longer coincide with the agent’s decision dynamics. Specifically, we let the behavioral policy be generated using erroneous parameters $O(z_-|a_=,s_+) < O^{\text{true}}(z_-|a_=,s_+)$; that is, the doctor now incorrectly believes the test to have a smaller false-negative rate than it does in reality—thus biasing their beliefs regarding patient states.
Now suppose we wish to recover the decision boundary from the demonstrations—that is, at what confidence threshold does a doctor commit to a healthy diagnosis? Figure 4 shows that both INTERPOLE and PO-MB-IL appear to recover the correct effective behavior: Starting from a neutral prior, doctors tend to stop and issue a “healthy” diagnosis if the first two observations return negative signals. Importantly, INTERPOLE also correctly locates the confidence target of ∼90%. On the other hand, PO-MB-IL—which first attempts to estimate the environment’s true parameters—ends up learning a policy on the basis of miscalibrated beliefs, thereby incorrectly explaining the same effective behavior with a lower confidence target of ∼70%. Finally, through similarly benchmarked metrics, Table 4 further confirms INTERPOLE’s advantage in providing the (correct) explanations.
6 DISCUSSION
Three points deserve brief comment in closing. First, while we gave prominence to several examples of how INTERPOLE may be used to audit and improve decision-making behavior, the potential applications are not limited to these use cases. Broadly, the chief proposition is that having quantitative descriptions of belief trajectories provides a concrete language that enables investigating observed actions, including outliers and variations in behavior: On that basis, different statistics and post-hoc analyses can be performed on top of INTERPOLE’s explanations (see Appendix C for examples).
Second, a reasonable question is whether or not it is reasonable to assume access to the state space. For this, allow us to reiterate a subtle distinction. There may be some “ground-truth” external state space that is arbitrarily complex or even impossible to discover, but—as explained—we are not interested in modeling this. Then, there is the internal state space that an agent uses to reason about decisions, which is what we are interested in. In this sense, it is certainly reasonable to assume access to the state space, which is often very clear from medical literature [42–47]. Since our goal is to obtain interpretable representations of decision, it is therefore reasonable to cater precisely to these accepted state spaces that doctors can most readily reason with. Describing behavior in terms of beliefs over these (already well-understood) states is one of the main contributors to the interpretability of our method.
Finally, it is crucial to keep in mind that INTERPOLE does not claim to identify the real intentions of an agent: humans are complex, and rationality is—of course—bounded. What it does do is provide an interpretable explanation of how an agent is effectively behaving, which—as we have seen for the diagnosis of ADNI patients—offers a yardstick by which to assess and compare trajectories and subgroups. In particular, INTERPOLE achieves this while adhering to our key criteria for healthcare settings, and without imposing assumptions of unbiasedness or optimality on behavioral policies.
ACKNOWLEDGMENTS
This work was supported by the US Office of Naval Research (ONR) and Alzheimer’s Research UK (ARUK). We thank the clinicians who participated in our survey, the reviewers for their valuable feedback, and the Alzheimer’s Disease Neuroimaging Initiative for providing the ADNI dataset.
A DETAILS OF THE ALGORITHM
A.1 FORWARD-BACKWARD PROCEDURE
We compute the necessary marginalizations of the joint distribution $\mathbb{P}(\bar{\mathcal{D}}\,|\,\mathcal{D}, \hat\theta)$ using the forward-backward algorithm. Letting $\mathbf{x}_{t:t'} = \{x_t, x_{t+1}, \ldots, x_{t'}\}$ for any time-indexed quantity $x_t$, the forward messages are defined as $\alpha_t(s) = \mathbb{P}(s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1} \,|\, \hat\theta)$, which can be computed dynamically as
$$\begin{aligned} \alpha_{t+1}(s') &= \mathbb{P}(s_{t+1}=s', \mathbf{a}_{1:t}, \mathbf{z}_{1:t} \,|\, \hat\theta) \\ &= \sum_{s\in\mathcal{S}} \mathbb{P}(s_t=s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1} \,|\, \hat\theta)\, \mathbb{P}(s_{t+1}=s', a_t, z_t \,|\, s_t=s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta) \\ &= \sum_{s\in\mathcal{S}} \alpha_t(s)\, \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s') \\ &\propto \sum_{s\in\mathcal{S}} \alpha_t(s)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s') \end{aligned}$$
with initial case $\alpha_1(s) = \mathbb{P}(s_1 = s) = b_1(s)$. The backward messages are defined as $\beta_t(s) = \mathbb{P}(\mathbf{a}_{t:\tau}, \mathbf{z}_{t:\tau} \,|\, s_t = s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta)$, which can also be computed dynamically as
$$\begin{aligned} \beta_t(s) &= \sum_{s'\in\mathcal{S}} \mathbb{P}(s_{t+1}=s', a_t, z_t \,|\, s_t=s, \mathbf{a}_{1:t-1}, \mathbf{z}_{1:t-1}, \hat\theta)\, \mathbb{P}(\mathbf{a}_{t+1:\tau}, \mathbf{z}_{t+1:\tau} \,|\, s_{t+1}=s', \mathbf{a}_{1:t}, \mathbf{z}_{1:t}, \hat\theta) \\ &= \sum_{s'\in\mathcal{S}} \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s') \\ &\propto \sum_{s'\in\mathcal{S}} \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s') \end{aligned}$$
with initial case $\beta_{\tau+1}(s) = \mathbb{P}(\emptyset \,|\, s_{\tau+1}=s, \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau}, \hat\theta) = 1$.
Then, the marginal probability of being in state $s$ at time $t$ given the dataset $\mathcal{D}$ and the estimate $\hat\theta$ can be computed as
$$\gamma_t(s) = \mathbb{P}(s_t = s \,|\, \mathcal{D}, \hat\theta) \propto \mathbb{P}(s_t = s, \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau} \,|\, \hat\theta) = \alpha_t(s)\,\beta_t(s)$$
and similarly, the marginal probability of transitioning from state $s$ to state $s'$ at the end of time $t$ given the dataset $\mathcal{D}$ and the estimate $\hat\theta$ can be computed as
$$\begin{aligned} \xi_t(s, s') &= \mathbb{P}(s_t = s, s_{t+1} = s' \,|\, \mathcal{D}, \hat\theta) \propto \mathbb{P}(s_t = s, s_{t+1} = s', \mathbf{a}_{1:\tau}, \mathbf{z}_{1:\tau} \,|\, \hat\theta) \\ &= \alpha_t(s)\, \hat\pi(a_t|b_t)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s') \\ &\propto \alpha_t(s)\, \hat{T}(s'|s,a_t)\, \hat{O}(z_t|a_t,s')\, \beta_{t+1}(s')\,. \end{aligned}$$
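These recursions translate directly into code. A minimal NumPy sketch, under our own indexing conventions; the policy factors are dropped from the proportionality exactly as in the derivation above:

```python
import numpy as np

def forward_backward(actions, observs, T, O, b1):
    """Unnormalized messages and smoothed marginals of Appendix A.1.

    actions, observs: integer arrays of length tau;
    T: (S, A, S); O: (A, S, Z); b1: (S,).
    """
    tau, S = len(actions), len(b1)
    alpha = np.zeros((tau + 1, S))
    beta = np.ones((tau + 1, S))   # beta_{tau+1}(s) = 1
    alpha[0] = b1                  # alpha_1(s) = b1(s)
    for t in range(tau):           # forward recursion
        a, z = actions[t], observs[t]
        alpha[t + 1] = (alpha[t] @ T[:, a, :]) * O[a, :, z]
    for t in range(tau - 1, -1, -1):  # backward recursion
        a, z = actions[t], observs[t]
        beta[t] = T[:, a, :] @ (O[a, :, z] * beta[t + 1])
    gamma = alpha * beta
    gamma /= gamma.sum(axis=1, keepdims=True)  # gamma_t(s), normalized
    return alpha, beta, gamma
```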
A.2 GRADIENT-ASCENT PROCEDURE
Taking the gradient of the expected log-likelihood $Q(\theta; \hat\theta)$ in (5) with respect to the unknown parameters $\theta = (T, O, b_1, \eta, \{\mu_a\}_{a\in\mathcal{A}})$ first requires computing the Jacobian matrix $\nabla_{b_t} b_{t'}$ for $1 \le t < t' \le \tau$, where $(\nabla_b b')_{ij} = \partial b'(i)/\partial b(j)$ for $i, j \in \mathcal{S}$. This can be achieved dynamically as $\nabla_{b_t} b_{t'} = \nabla_{b_{t+1}} b_{t'}\, \nabla_{b_t} b_{t+1}$ with initial case $\nabla_{b_{t'}} b_{t'} = I$, where
$$\begin{aligned} (\nabla_{b_t} b_{t+1})_{ij} = \frac{\partial b_{t+1}(i)}{\partial b_t(j)} &= \frac{\partial}{\partial b_t(j)} \left[ \frac{\sum_{x\in\mathcal{S}} b_t(x)\, T(i|x,a_t)\, O(z_t|a_t,i)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')} \right] \\ &= \frac{T(i|j,a_t)\, O(z_t|a_t,i)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')} \\ &\quad - \frac{\left(\sum_{x\in\mathcal{S}} b_t(x)\, T(i|x,a_t)\, O(z_t|a_t,i)\right) \sum_{x'\in\mathcal{S}} T(x'|j,a_t)\, O(z_t|a_t,x')}{\left(\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_t(x)\, T(x'|x,a_t)\, O(z_t|a_t,x')\right)^2}\,. \end{aligned}$$
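A compact NumPy sketch of this one-step Jacobian, following the quotient rule above (conventions ours):

```python
import numpy as np

def belief_jacobian_step(b, a, z, T, O):
    """Jacobian (d b_{t+1}(i) / d b_t(j)) of the belief update.

    Writes b_{t+1}(i) = num_i / den with num_i = sum_x b(x) T(i|x,a) O(z|a,i)
    and den = sum_i num_i; chained products of these give nabla_{b_t} b_{t'}.
    """
    M = T[:, a, :] * O[a, :, z][None, :]  # M[x, i] = T(i|x,a) O(z|a,i)
    num = b @ M                           # unnormalized next belief, shape (S,)
    den = num.sum()
    # J[i, j] = M[j, i] / den  -  num[i] * sum_i' M[j, i'] / den**2
    return M.T / den - np.outer(num, M.sum(axis=1)) / den ** 2
```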
A.2.1 PARTIAL DERIVATIVES
The derivative of $Q(\theta; \hat\theta)$ with respect to $T(s'|s,a)$ is
$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial T(s'|s,a)} &= \frac{\partial}{\partial T(s'|s,a)} \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\} \sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} \xi_t(x,x') \log T(x'|x,a) + \sum_{t=2}^{\tau} \log \pi(a_t|b_t) \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\}\, \frac{\xi_t(s,s')}{T(s'|s,a)} + \sum_{t=2}^{\tau} \frac{\partial \log \pi(a_t|b_t)}{\partial T(s'|s,a)} \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a\}\, \frac{\xi_t(s,s')}{T(s'|s,a)} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t} \log \pi(a_t|b_t)\; \nabla_{b_{t'+1}} b_t\; \nabla_{T(s'|s,a)} b_{t'+1} \right], \end{aligned}$$
where
$$\begin{aligned} (\nabla_{b_t} \log \pi(a_t|b_t))_{1j} = \frac{\partial \log \pi(a_t|b_t)}{\partial b_t(j)} &= \frac{\partial}{\partial b_t(j)} \left( -\eta\|b_t - \mu_{a_t}\|^2 - \log \sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2} \right) \\ &= -2\eta\big(b_t(j) - \mu_{a_t}(j)\big) + 2\eta \sum_{a\in\mathcal{A}} \frac{e^{-\eta\|b_t - \mu_a\|^2}}{\sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2}} \big(b_t(j) - \mu_a(j)\big) \\ &= -2\eta\big(b_t(j) - \mu_{a_t}(j)\big) + 2\eta \sum_{a\in\mathcal{A}} \pi(a|b_t)\big(b_t(j) - \mu_a(j)\big) \end{aligned}$$
and
$$\begin{aligned} (\nabla_{T(s'|s,a)} b_{t'+1})_{i1} = \frac{\partial b_{t'+1}(i)}{\partial T(s'|s,a)} &= \frac{\partial}{\partial T(s'|s,a)} \left( \frac{\sum_{x\in\mathcal{S}} b_{t'}(x)\, T(i|x,a_{t'})\, O(z_{t'}|a_{t'},i)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a_{t'})\, O(z_{t'}|a_{t'},x')} \right) \\ &= \mathbb{I}\{a_{t'} = a\} \left( \frac{\mathbb{I}\{i = s'\}\, b_{t'}(s)\, O(z_{t'}|a,s')}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a)\, O(z_{t'}|a,x')} - \frac{b_{t'+1}(i)\, b_{t'}(s)\, O(z_{t'}|a,s')}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a)\, O(z_{t'}|a,x')} \right). \end{aligned}$$
The derivative of $Q(\theta; \hat\theta)$ with respect to $O(z|a,s')$ is
$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial O(z|a,s')} &= \frac{\partial}{\partial O(z|a,s')} \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\} \sum_{x'\in\mathcal{S}} \gamma_{t+1}(x') \log O(z|a,x') + \sum_{t=2}^{\tau} \log \pi(a_t|b_t) \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\}\, \frac{\gamma_{t+1}(s')}{O(z|a,s')} + \sum_{t=2}^{\tau} \frac{\partial \log \pi(a_t|b_t)}{\partial O(z|a,s')} \right] \\ &= \sum_{i=1}^{n} \left[ \sum_{t=1}^{\tau} \mathbb{I}\{a_t = a, z_t = z\}\, \frac{\gamma_{t+1}(s')}{O(z|a,s')} + \sum_{t=2}^{\tau} \sum_{t'=1}^{t-1} \nabla_{b_t} \log \pi(a_t|b_t)\; \nabla_{b_{t'+1}} b_t\; \nabla_{O(z|a,s')} b_{t'+1} \right], \end{aligned}$$
where
$$\begin{aligned} (\nabla_{O(z|a,s')} b_{t'+1})_{i1} = \frac{\partial b_{t'+1}(i)}{\partial O(z|a,s')} &= \frac{\partial}{\partial O(z|a,s')} \left( \frac{\sum_{x\in\mathcal{S}} b_{t'}(x)\, T(i|x,a_{t'})\, O(z_{t'}|a_{t'},i)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a_{t'})\, O(z_{t'}|a_{t'},x')} \right) \\ &= \mathbb{I}\{a_{t'} = a, z_{t'} = z\} \left( \frac{\mathbb{I}\{i = s'\} \sum_{x\in\mathcal{S}} b_{t'}(x)\, T(s'|x,a)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a)\, O(z|a,x')} - \frac{b_{t'+1}(i) \sum_{x\in\mathcal{S}} b_{t'}(x)\, T(s'|x,a)}{\sum_{x\in\mathcal{S}}\sum_{x'\in\mathcal{S}} b_{t'}(x)\, T(x'|x,a)\, O(z|a,x')} \right). \end{aligned}$$
The derivative of $Q(\theta; \hat\theta)$ with respect to $b_1(s)$ is
$$\frac{\partial Q(\theta; \hat\theta)}{\partial b_1(s)} = \frac{\partial}{\partial b_1(s)} \sum_{i=1}^{n} \left[ \sum_{x\in\mathcal{S}} \gamma_1(x) \log b_1(x) + \sum_{t=1}^{\tau} \log \pi(a_t|b_t) \right] = \sum_{i=1}^{n} \left[ \frac{\gamma_1(s)}{b_1(s)} + \sum_{t=1}^{\tau} \nabla_{b_t} \log \pi(a_t|b_t)\; \nabla_{b_1(s)} b_t \right],$$
where $(\nabla_{b_1(s)} b_t)_{i1} = (\nabla_{b_1} b_t)_{is}$.
The derivative of $Q(\theta; \hat\theta)$ with respect to $\eta$ is
$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial \eta} &= \frac{\partial}{\partial \eta} \sum_{i=1}^{n} \sum_{t=1}^{\tau} \log \pi(a_t|b_t) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \frac{\partial}{\partial \eta} \left( -\eta\|b_t - \mu_{a_t}\|^2 - \log \sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2} \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( -\|b_t - \mu_{a_t}\|^2 + \sum_{a\in\mathcal{A}} \frac{e^{-\eta\|b_t - \mu_a\|^2}}{\sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2}}\, \|b_t - \mu_a\|^2 \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( -\|b_t - \mu_{a_t}\|^2 + \sum_{a\in\mathcal{A}} \pi(a|b_t)\, \|b_t - \mu_a\|^2 \right). \end{aligned}$$
Finally, the derivative of $Q(\theta; \hat\theta)$ with respect to $\mu_a(s)$ is
$$\begin{aligned} \frac{\partial Q(\theta; \hat\theta)}{\partial \mu_a(s)} &= \frac{\partial}{\partial \mu_a(s)} \sum_{i=1}^{n} \sum_{t=1}^{\tau} \log \pi(a_t|b_t) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \frac{\partial}{\partial \mu_a(s)} \left( -\eta\|b_t - \mu_{a_t}\|^2 - \log \sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2} \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \left( 2\eta\,\mathbb{I}\{a_t = a\}\big(b_t(s) - \mu_a(s)\big) - 2\eta\, \frac{e^{-\eta\|b_t - \mu_a\|^2}}{\sum_{a'\in\mathcal{A}} e^{-\eta\|b_t - \mu_{a'}\|^2}}\, \big(b_t(s) - \mu_a(s)\big) \right) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} \big( 2\eta\,\mathbb{I}\{a_t = a\}\big(b_t(s) - \mu_a(s)\big) - 2\eta\,\pi(a|b_t)\big(b_t(s) - \mu_a(s)\big) \big) \\ &= \sum_{i=1}^{n} \sum_{t=1}^{\tau} 2\eta\,\big(\mathbb{I}\{a_t = a\} - \pi(a|b_t)\big)\big(b_t(s) - \mu_a(s)\big)\,. \end{aligned}$$
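The eta- and mu-gradients have particularly simple closed forms. A NumPy sketch of the contribution of a single trajectory, reusing policy() from the earlier snippet (conventions ours):

```python
import numpy as np

def grads_eta_mu(beliefs, actions, mus, eta):
    """One trajectory's contribution to dQ/d(eta) and dQ/d(mu_a(s)),
    per the derivations above.

    beliefs: (tau, S) array of b_t; actions: (tau,) ints; mus: (A, S).
    """
    A, _ = mus.shape
    g_eta, g_mu = 0.0, np.zeros_like(mus)
    for b, at in zip(beliefs, actions):
        p = policy(b, mus, eta)                 # pi(.|b_t)
        d2 = np.sum((mus - b) ** 2, axis=1)     # ||b_t - mu_a||^2 for each a
        g_eta += -d2[at] + p @ d2
        g_mu += 2 * eta * (np.eye(A)[at] - p)[:, None] * (b - mus)
    return g_eta, g_mu
```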
B PROOFS OF PROPOSITIONS
B.1 PROOF OF PROPOSITION 1
First, denote with $q^*_R \in \mathbb{R}^{\Delta(\mathcal{S})\times\mathcal{A}}$ the optimal (belief-state) q-value function with respect to the underlying (state-space) reward function $R \in \mathbb{R}^{\mathcal{S}\times\mathcal{A}}$, and denote with $v^*_R \in \mathbb{R}^{\Delta(\mathcal{S})}$ the corresponding optimal value function $v^*_R(b) = \operatorname{softmax}_{a'\in\mathcal{A}}\, q^*_R(b,a')$. Now, fix some component $i$ of parameters $\theta$; we wish to compute the derivative of $\log\pi(a|b)$ with respect to $\theta_i$:
$$\begin{aligned} \frac{\partial}{\partial\theta_i}\log\pi(a|b) &= \frac{\partial}{\partial\theta_i}\big(q^*_R(b,a) - v^*_R(b)\big) = \frac{\partial}{\partial\theta_i}\left(q^*_R(b,a) - \log\sum_{a'\in\mathcal{A}} e^{q^*_R(b,a')}\right) \\ &= \frac{\partial}{\partial\theta_i} q^*_R(b,a) - \sum_{a'\in\mathcal{A}}\left( \frac{e^{q^*_R(b,a')}}{\sum_{a''\in\mathcal{A}} e^{q^*_R(b,a'')}} \cdot \frac{\partial}{\partial\theta_i} q^*_R(b,a') \right) \\ &= \frac{\partial}{\partial\theta_i} q^*_R(b,a) - \sum_{a'\in\mathcal{A}} \pi(a'|b)\, \frac{\partial}{\partial\theta_i} q^*_R(b,a') \\ &= \frac{\partial}{\partial\theta_i} q^*_R(b,a) - \mathbb{E}_{a'\sim\pi(\cdot|b)}\left[\frac{\partial}{\partial\theta_i} q^*_R(b,a')\right] \end{aligned}$$
where we make explicit here the dependence on $R$, but note that it is itself a parameter; that is, $R = \theta_j$ for some $j$. We see that this in turn requires computing the partial derivative $\partial q^*_R(b,a)/\partial\theta_i$. Let $\gamma$ be some appropriate discount rate, and denote with $\rho_R \in \mathbb{R}^{\Delta(\mathcal{S})\times\mathcal{A}}$ the effective (belief-state) reward $\rho_R(b,a) \doteq \sum_{s\in\mathcal{S}} b(s) R(s,a)$ corresponding to $R$. Further, let
$$\mathbb{P}(b'|b,a) = \sum_{z\in\mathcal{Z}} \mathbb{P}(z|b,a)\,\mathbb{P}(b'|b,a,z) = \sum_{z\in\mathcal{Z}} \left(\sum_{s\in\mathcal{S}}\sum_{s'\in\mathcal{S}} b(s)\, T(s'|s,a)\, O(z|a,s')\right) \delta\!\left(b' - \frac{\sum_{s\in\mathcal{S}} b(s)\, T(\cdot|s,a)\, O(z|a,\cdot)}{\sum_{s\in\mathcal{S}}\sum_{s'\in\mathcal{S}} b(s)\, T(s'|s,a)\, O(z|a,s')}\right)$$
denote the (belief-state) transition probabilities induced by $T$ and $O$, where $\delta$ is the Dirac delta function such that $\delta(b')$ integrates to one if and only if $b' = 0$ is included in the integration region. Then the partial $\partial q^*_R(b,a)/\partial\theta_i$ is given as follows:
$$\begin{aligned} \frac{\partial}{\partial\theta_i} q^*_R(b,a) &= \frac{\partial}{\partial\theta_i}\left( \rho_R(b,a) + \gamma\int_{b'\in\Delta(\mathcal{S})} \mathbb{P}(b'|b,a)\, v^*_R(b')\, db' \right) \\ &= \underbrace{\frac{\partial}{\partial\theta_i}\rho_R(b,a) + \gamma\int_{b'\in\Delta(\mathcal{S})} v^*_R(b')\, \frac{\partial}{\partial\theta_i}\mathbb{P}(b'|b,a)\, db'}_{\rho_{R,i}(b,a)} + \gamma\int_{b'\in\Delta(\mathcal{S})} \mathbb{P}(b'|b,a)\, \mathbb{E}_{a'\sim\pi(\cdot|b')}\left[\frac{\partial}{\partial\theta_i} q^*_R(b',a')\right] db' \end{aligned}$$
from which we observe that $\partial q^*_R(b,a)/\partial\theta_i$ is a fixed point of a certain Bellman-like operator. Specifically, fix any function $f \in \mathbb{R}^{\Delta(\mathcal{S})\times\mathcal{A}}$; then $\partial q^*_R(b,a)/\partial\theta_i$ is the fixed point of the operator $\mathcal{T}^\pi_{R,i}: \mathbb{R}^{\Delta(\mathcal{S})\times\mathcal{A}} \to \mathbb{R}^{\Delta(\mathcal{S})\times\mathcal{A}}$ defined as follows:
$$(\mathcal{T}^\pi_{R,i} f)(b,a) = \rho_{R,i}(b,a) + \gamma\int_{b'\in\Delta(\mathcal{S})} \mathbb{P}(b'|b,a) \sum_{a'} \pi(a'|b')\, f(b',a')\, db'$$
which takes the form of a “generalized” Bellman operator on q-functions for POMDPs, where for brevity here we have written $\rho_{R,i}(b,a)$ to denote the expression $\frac{\partial}{\partial\theta_i}\rho_R(b,a) + \gamma\int_{b'\in\Delta(\mathcal{S})} v^*_R(b')\, \frac{\partial}{\partial\theta_i}\mathbb{P}(b'|b,a)\, db'$. Mathematically, this means that a recursive procedure can in theory be defined—cf. “$\nabla q$-iteration”, analogous to q-iteration; see e.g. [52]—that may converge on the gradient under appropriate conditions. Computationally, however, this also means that taking a single gradient is at least as hard as solving POMDPs in general.
Further, note that while typical POMDP solvers operate by taking advantage of the convexity property of $\rho_R(b,a)$—see e.g. [53]—here there is no such property to make use of: In general, it is not the case that $\rho_{R,i}(b,a)$ is convex. To see this, consider the following counterexample: Let $\mathcal{S} \doteq \{s_-, s_+\}$, $\mathcal{A} \doteq \{a_=\}$, $\mathcal{Z} \doteq \{z_-, z_+\}$, $T(s_-|s_-,a_=) = T(s_+|s_+,a_=) = p = 1$, $O(z_-|a_=,s_-) = O(z_+|a_=,s_+) = 1/4$, $b_1(s_+) = 1/2$, $R(s_-,a_=) = 0$, $R(s_+,a_=) = 1/2$, and $\gamma = 1/2$. For simplicity, we will simply write $b$ instead of $b(s_+)$. Note that:
$$q^*_R(b,a_=) = b\sum_{t=0}^{\infty}\gamma^t R(s_+,a_=) + (1-b)\sum_{t=0}^{\infty}\gamma^t R(s_-,a_=) = b$$
$$v^*_R(b) = \log\!\!\sum_{a\in\{a_=\}}\!\! e^{q^*_R(b,a)} = \log e^{q^*_R(b,a_=)} = q^*_R(b,a_=) = b$$
$$\mathbb{P}(z_+|b,a_=) = \tfrac{1}{4}\big(bp + (1-b)(1-p)\big) + \tfrac{3}{4}\big(b(1-p) + (1-b)p\big) = \tfrac{1}{4}b + \tfrac{3}{4}(1-b)$$
$$\mathbb{P}(z_-|b,a_=) = \tfrac{1}{4}\big(b(1-p) + (1-b)p\big) + \tfrac{3}{4}\big(bp + (1-b)(1-p)\big) = \tfrac{1}{4}(1-b) + \tfrac{3}{4}b$$
$$b'|_{b,a_=,z_+} = \frac{\mathbb{P}(s'=s_+, z_+|b,a_=)}{\mathbb{P}(z_+|b,a_=)} = \frac{\tfrac{1}{4}bp + \tfrac{3}{4}(1-b)(1-p)}{\mathbb{P}(z_+|b,a_=)} = \frac{\tfrac{1}{4}b}{\tfrac{1}{4}b + \tfrac{3}{4}(1-b)}$$
$$b'|_{b,a_=,z_-} = \frac{\mathbb{P}(s'=s_+, z_-|b,a_=)}{\mathbb{P}(z_-|b,a_=)} = \frac{\tfrac{1}{4}(1-b)(1-p) + \tfrac{3}{4}bp}{\mathbb{P}(z_-|b,a_=)} = \frac{\tfrac{3}{4}b}{\tfrac{1}{4}(1-b) + \tfrac{3}{4}b}$$
$$\mathbb{P}(b'|b,a_=) = \begin{cases} \mathbb{P}(z_+|b,a_=) & \text{if } b' = b'|_{b,a_=,z_+} \\ \mathbb{P}(z_-|b,a_=) & \text{if } b' = b'|_{b,a_=,z_-} \\ 0 & \text{otherwise} \end{cases}$$
Now, let the elements of $\theta$ be ordered such that $p$ is the $i$-th element, and consider $\rho_{R,i}(b,a)$—evaluated at $p = 1$:
$$\begin{aligned} \rho_{R,i}(b,a_=) &\doteq \frac{\partial}{\partial p}\rho_R(b,a_=) + \gamma\int_{b'\in\Delta(\mathcal{S})} v^*_R(b')\, \frac{\partial}{\partial p}\mathbb{P}(b'|b,a_=)\, db' \\ &= \frac{1}{2}\left( v^*_R(b'|_{b,a_=,z_+})\, \frac{\partial}{\partial p}\mathbb{P}(z_+|b,a_=) + v^*_R(b'|_{b,a_=,z_-})\, \frac{\partial}{\partial p}\mathbb{P}(z_-|b,a_=) \right) \\ &= \frac{1}{2}\left( \frac{\tfrac{1}{4}b}{\tfrac{1}{4}b + \tfrac{3}{4}(1-b)}\left( \tfrac{1}{4}b - \tfrac{1}{4}(1-b) - \tfrac{3}{4}b + \tfrac{3}{4}(1-b) \right) + \frac{\tfrac{3}{4}b}{\tfrac{1}{4}(1-b) + \tfrac{3}{4}b}\left( -\tfrac{1}{4}b + \tfrac{1}{4}(1-b) + \tfrac{3}{4}b - \tfrac{3}{4}(1-b) \right) \right) \end{aligned}$$
Clearly $\rho_{R,i}(b,a_=)$ cannot be convex since $\rho_{R,i}(1/2,a_=) = 0$ and $\rho_{R,i}(1,a_=) = 0$ but $\rho_{R,i}(3/4,a_=) > 0$.
B.2 PROOF OF PROPOSITION 2
In contrast, unlike the indirect q-value parameterization above (which by itself requires approximate solutions to optimization problems), the mean-vector parameterization of INTERPOLE maps beliefs
directly to distributions over actions. Now, the derivatives of log π(a|b) are given as closed-form expressions in Appendices A.1 and A.2.
In particular, note that each bt is computed through a feed-forward structure, and therefore can easily be differentiated with respect to the unknown parameters θ through backpropagation through time: Each time step leading up to an action corresponds to a “hidden layer” in a neural network, and the initial belief corresponds to the “features” that are fed into the network; the transition and observation functions correspond to the weights between layers, the beliefs at each time step correspond to the activations between layers, the actions themselves correspond to class labels, and the action likelihood corresponds to the loss function (see Appendices A.1 and A.2).
Finally, note that computing all of the forward-backward messages $\alpha_t$ and $\beta_t$ in Appendix A.1 has complexity $O(n\tau S^2)$, computing all of the Jacobian matrices $\nabla_{b_t} b_{t'}$ in Appendix A.2 has complexity $O(n\tau^2 S^3)$, and computing all of the partial derivatives given in Appendix A.2 has complexity at most $O(n\tau^2 S^2 A Z)$. Hence, fully differentiating the expected log-likelihood $Q(\theta; \hat\theta)$ with respect to the unknown parameters $\theta$ has an overall (polynomial) complexity $O(n\tau^2 S^2 \max\{S, AZ\})$.
C EXPERIMENT PARTICULARS
C.1 DETAILS OF DECISION ENVIRONMENTS
ADNI We have filtered out visits without a CDR-SB measurement, which is almost always taken, and visits that do not occur immediately after the six-month period following the previous visit but instead occur after 12 months or later. This filtering leaves 1,626 patients with typically three consecutive visits each. For MRI outcomes, average is considered to be within half a standard deviation of the population mean. Since there are only two actions in this scenario, we have set η = 1 and relied on the distance between the two means to adjust for the stochasticity of the estimated policy—closer means being somewhat equivalent to a smaller η.
DIAG We set $T^{\text{true}}(s_-|s_-,\cdot) = T^{\text{true}}(s_+|s_+,\cdot) = 1$, meaning patients do not heal or contract the disease as the diagnosis progresses, $O^{\text{true}}(z_-|a_=,s_+) = O^{\text{true}}(z_+|a_=,s_-) = 0.4$, meaning the measurements, as a test, have false-negative and false-positive rates of 40%, and $b_1^{\text{true}}(s_+) = 0.5$. Moreover, the behavior policy is given by $T = T^{\text{true}}$, $O = O^{\text{true}}$, $b_1 = b_1^{\text{true}}$, $\eta = 10$, $\mu_{a_=}(s_+) = 0.5$, and $\mu_{a_-}(s_-) = \mu_{a_+}(s_+) = 1.3$. Intuitively, doctors continue monitoring the patient until they are 90% confident in declaring a final diagnosis. In this scenario, $T$ and $\eta$ are assumed to be known. The behavior dataset is generated as 100 demonstration trajectories.
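For reference, a sketch of how these DIAG quantities might be laid out in code, under the same conventions as the earlier snippets (applying the observation model uniformly across actions is our simplification; outcomes matter only under the monitoring action):

```python
import numpy as np

# States [s-, s+], actions [a=, a-, a+], outcomes [z-, z+] (our index order).
T = np.zeros((2, 3, 2))
T[0, :, 0] = 1.0          # s- absorbing: patients do not contract the disease
T[1, :, 1] = 1.0          # s+ absorbing: patients do not heal
O = np.zeros((3, 2, 2))
O[:, 0, :] = [0.6, 0.4]   # O(z+ | ., s-) = 0.4: false-positive rate 40%
O[:, 1, :] = [0.4, 0.6]   # O(z- | ., s+) = 0.4: false-negative rate 40%
b1 = np.array([0.5, 0.5])
eta = 10.0
mus = np.array([[0.5, 0.5],     # mu_{a=}: keep monitoring
                [1.3, -0.3],    # mu_{a-}: declare healthy  (mu_{a-}(s-) = 1.3)
                [-0.3, 1.3]])   # mu_{a+}: declare diseased (mu_{a+}(s+) = 1.3)
```

The off-simplex components of the stopping means (-0.3) follow from the normalization constraint in Equation 2, given the stated values 0.5 and 1.3.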
BIAS We set all parameters exactly the same way we did in DIAG with one important exception: now $O(z_-|a_=,s_+) = 0.2$ while it is still the case that $O^{\text{true}}(z_-|a_=,s_+) = 0.4$, meaning $O \neq O^{\text{true}}$ anymore. In this scenario, $b_1$ is also assumed to be known (in addition to $T$ and $\eta$) to avoid any invariances between $b_1$ and $O$ that we encountered during training. The behavioral dataset is generated as 1,000 demonstration trajectories.
C.2 DETAILS OF BENCHMARK ALGORITHMS
R-BC We train an RNN whose inputs are the observed histories $h_t$ and whose outputs are the predicted probabilities $\hat\pi(a|h_t)$ of taking action $a$ given the observed history $h_t$. The network consists of an LSTM unit of size 64 and a fully-connected hidden layer of size 64. We minimize the cross-entropy loss $L = -\sum_{i=1}^{n}\sum_{t=1}^{\tau}\sum_{a\in\mathcal{A}} \mathbb{I}\{a_t = a\} \log\hat\pi(a|h_t)$ using the Adam optimizer with learning rate 0.001 until convergence, that is, when the cross-entropy loss does not improve for 100 consecutive iterations.
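A minimal PyTorch sketch of this baseline; the encoding of histories as one-hot action/observation pairs is our assumption, as the text does not specify it:

```python
import torch.nn as nn

class RBC(nn.Module):
    """R-BC: recurrent behavioral cloning, sized as described above."""
    def __init__(self, n_actions, n_obs, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_actions + n_obs, hidden, batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                  nn.Linear(64, n_actions))

    def forward(self, histories):   # (batch, time, n_actions + n_obs)
        h, _ = self.lstm(histories)
        return self.head(h)         # per-step logits for pi_hat(a | h_t)

# Training: loss = nn.CrossEntropyLoss()(logits.flatten(0, 1), actions.flatten()),
# minimized with torch.optim.Adam(model.parameters(), lr=0.001).
```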
PO-IRL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. The reward parameter $R$ is initialized as $\hat{R}_0(s,a) = \varepsilon_{s,a}$ where $\varepsilon_{s,a} \sim \mathcal{N}(0, 0.001^2)$. Then, it is estimated via Markov chain Monte Carlo (MCMC) sampling, during which new candidate samples are generated by adding Gaussian noise with standard deviation 0.001 to the last sample. A final estimate is formed by averaging every tenth sample among the second set of 500 samples, ignoring the first 500 samples. In order to compute optimal q-values, we have used an off-the-shelf POMDP solver available at https://www.pomdp.org/code/index.html.
Off. PO-IRL All parameters are initialized exactly the same way as in PO-IRL. Then, both the IOHMM parameters T , O, and b1, and the reward parameter R are estimated jointly via MCMC sampling. When generating new candidate samples, with equal probabilities, we have either sampled new T , O, and b1 from IOHMM posterior (without changing R) or obtained a new R the same way we did in PO-IRL (without changing T , O, and b1). A final estimate is formed the same way as in PO-IRL.
PO-MB-IL The IOHMM parameters $T$, $O$, and $b_1$ are initialized by sampling them uniformly at random. Then, they are estimated and fixed using conventional IOHMM methods. Given the IOHMM parameters, we parameterized policies the same way we did in INTERPOLE, that is, as described in (2). The policy parameters $\{\mu_a\}_{a\in\mathcal{A}}$ are initialized as $\hat\mu^0_a(s) = (1/|\mathcal{S}| + \varepsilon_{a,s}) / \sum_{s'\in\mathcal{S}} (1/|\mathcal{S}| + \varepsilon_{a,s'})$ where $\varepsilon_{a,s'} \sim \mathcal{N}(0, 0.001^2)$. Then, they are estimated according solely to the action likelihoods in (4) using the EM algorithm. The expected log-posterior is maximized using the Adam optimizer with learning rate 0.001 until convergence, that is, when the expected log-posterior does not improve for 100 consecutive iterations.
INTERPOLE All parameters are initialized exactly the same way as in PO-MB-IL. Then, the IOHMM parameters T , O, and b1, and the policy parameters {µa}a∈A are estimated jointly according to both the action likelihoods and the observation likelihoods in (4). The expected log-posterior is again maximized using Adam optimizer with learning rate 0.001 until convergence.
C.3 FURTHER EXAMPLE: POST-HOC ANALYSES
Policy representations learned by INTERPOLE provide users with means to derive concrete criteria that describe observed behavior in objective terms. These criteria, in turn, enable quantitative analyses of the behavior using conventional statistical methods. For ADNI, we have considered two such criteria: belatedness of individual diagnoses and informativeness of individual tests. Both of these criteria are relevant to the discussion of early diagnosis, which is paramount for Alzheimer’s disease [51], as we have already mentioned during the illustrative examples.
Formally, we consider the final diagnosis of a patient to be belated if (i) the patient was not ordered an MRI in one of their visits despite the fact that an MRI being ordered was the most likely outcome according to the policy estimated by INTERPOLE, and (ii) the patient was ordered an MRI in a later visit that led to a near-certain diagnosis with at least 90% confidence according to the underlying beliefs estimated by INTERPOLE. We consider an MRI to be uninformative if it neither (factually) caused nor could have (counterfactually) caused a significant change in the underlying belief-state of the patient, where an insignificant change is half a standard deviation less than the mean factual change in beliefs estimated by INTERPOLE (see the sketch below).
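As referenced above, a sketch of the uninformativeness flag, assuming per-visit belief-shift magnitudes (factual and counterfactual, e.g. from belief_shifts() earlier) have been precomputed; the data layout is our own convention:

```python
import numpy as np

def flag_uninformative(shifts_per_mri, factual_shifts):
    """shifts_per_mri: (n_mris, n_outcomes) shift magnitudes for each ordered MRI;
    factual_shifts: pooled factual shifts used to set the threshold."""
    threshold = factual_shifts.mean() - 0.5 * factual_shifts.std()
    return np.all(shifts_per_mri < threshold, axis=1)  # True = uninformative
```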
Having defined belatedness and informativeness, one can investigate the frequency of belated diagnoses and uninformative MRIs in different cohorts of patients to see how practice varies between one cohort to another. In Table 5, we do so for six cohorts: all of the patients, patients who are over 75 years old, patients with apoE4 risk factor for dementia, patients with signs of MCI or dementia since their very first visit, female patients, and male patients. Note that increasing age, apoE4 allele, and female gender are known to be associated with increased risk of Alzheimer’s disease [54–57]. For instance, we see that uninformative MRIs are much more prevalent among patients with signs of MCI or dementia since their first visit. This could potentially be because these patients are monitored much more closely than usual given their condition.
Alternatively, one can divide patients into cohorts based on whether they have a belated diagnoses or an uninformative MRI to see which features these criteria correlate with more. We do so in Table 6. For instance, we see that a considerable percentage of belated diagnoses are seen among male patients.
C.4 FURTHER EXAMPLE: DECISION TREES
Clinical practice guidelines are often given in the form of decision trees, which usually have vague elements that require the judgement of the practitioner [58, 59]. For example, the guideline could ask the practitioner to quantify risks, side effects, or improvements in subjective terms such as being significant, serious, or potential. Using direct policy learning, how vague elements like these are commonly resolved in practice can be learned in objective terms.
Formulating policies in terms of IOHMMs and decision boundaries is expressive enough to model decision trees. An IOHMM with deterministic observations, that is O(z|a, s′) = 1 for some z ∈ Z and for all a ∈ A, s ∈ S, essentially describes a finite-state machine, inputs of which are equivalent to the observations. Similarly, a deterministic decision tree can be defined as a finite-state machine with no looping sequence of transitions. The case where the observations are probabilistic rather than deterministic correspond to the case where the decision tree is traversed in a probabilistic way so that each path down the tree has a probability associated with it at each step of the traversal.
As a concrete example of modeling decision trees in terms of IOHMMs, consider the scenario of diagnosing a disease with two sub-types: Disease-A and Disease-B. Figure 5a depicts the policy of the doctors in the form of a decision tree. Each newly-arriving patient is first tested for the disease in a general sense without any distinction between the two sub-types it has. The patient is then tested
for a specific sub-type of the disease only if the doctors deem there is a significant risk that the patient is diseased. Note that which exact level of confidence constitutes as a significant risk is left vague in the decision tree. By modeling this scenario using our framework, we can learn: (i) how the risk is determined based on initial test results and (ii) what amount of risk is considered significant enough to require a subsequent test for the sub-type.
Let S = {INI, HLT, DIS, DSA, DSB}, where INI denotes that the patient has newly arrived, HLT denotes that the patient is healthy, DIS denotes that the patient is diseased, DSA denotes that the patient has Disease-A, and DSB denotes that the patient has Disease-B. Figure 5b depicts the state space S with all possible transitions. Note that the initial belief b1 is such that b1(INI) = 1. Let A = {TST-DIS, TST-TYP, STP-HLT, STP-DSA, STP-DSB}, where TST-DIS denotes testing for the disease, TST-TYP denotes testing for the sub-type of the disease, and the remaining actions denote stopping and diagnosing the patient with one of the terminal states, namely states HLT, DSA, and DSB.
After taking action $a_1 = \text{TST-DIS}$ and observing some initial test result $z_1 \in \mathcal{Z}$, the risk of disease, which is the probability that the patient is diseased, can be calculated with a simple belief update:
$$b_2(\text{DIS}) \propto \sum_{s\in\mathcal{S}} b_1(s)\, T(\text{DIS}|s,\text{TST-DIS})\, O(z_1|\text{TST-DIS},\text{DIS}) = T(\text{DIS}|\text{INI},\text{TST-DIS})\, O(z_1|\text{TST-DIS},\text{DIS})\,.$$
Moreover, we can say that the doctors are more likely to test for the sub-type of the disease as opposed to stopping and diagnosing the patient as healthy, that is $\pi_b(\text{TST-TYP}|b_2) > \pi_b(\text{STP-HLT}|b_2)$, when
$$b_2(\text{DIS}) > \frac{\mu_{\text{TST-TYP}}(\text{DIS}) + \mu_{\text{STP-HLT}}(\text{DIS})}{2}$$
assuming $\mu_{\text{TST-TYP}}(\text{DIS}) > \mu_{\text{STP-HLT}}(\text{DIS})$. Note that there are only two possible actions at the second time step: actions TST-TYP and STP-HLT.
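A tiny sketch of this check, with the state/action labels used above as dictionary keys (our convention):

```python
def prefers_subtype_test(b2_dis, mus):
    """True when testing for the sub-type is likelier than declaring healthy,
    i.e. when b2(DIS) clears the midpoint of the two relevant mean components."""
    midpoint = (mus['TST-TYP']['DIS'] + mus['STP-HLT']['DIS']) / 2
    return b2_dis > midpoint
```

Learning the mean vectors from demonstrations thus pins down, in objective terms, the risk level that practitioners treat as significant.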
D DETAILS OF THE CLINICIAN SURVEYS
Each participant was provided a short presentation explaining (1) the ADNI dataset and the decisionmaking problem we consider, (2) what rewards and reward functions are, (3) what beliefs and belief simplices are, and (4) how policies can be represented in terms of reward functions as well as decision boundaries. Then, they were asked two multiple-choice questions, one that is strictly about representing histories, and one that is strictly about representing policies. Importantly, the survey was conducted blindly—i.e. they were given no context whatsoever as pertains this paper and our proposed method. The question slides can be found in Figures 6 and 7. Essentially, each question first states a hypothesis and shows two/three representations relevant to the hypothesis stated. Then, the participant is asked which of the representations shown most readily expresses the hypothesis. Here are the full responses that we have received, which includes some additional feedback:
• Clinician 1 Question 1: C > B > A Question 2: B Additional Feedback: The triangle was initially more confusing than not, but the first example (100% uncertainty) was helpful. It isn’t clear how the dots in the triangle are computed. Are these probabilities based on statistics? Diagram is always better than no diagram.
• Clinician 2 Question 1: C > B > A Question 2: B Additional Feedback: I always prefer pictures to tables, they are much easier to understand.
• Clinician 3 Question 1: C > B > A Question 2: B Additional Feedback: Of course the triangle is more concise and easier to look at. But how is the decision boundary obtained? Does the decision boundary always have to be parallel to one of the sides of the triangle?
• Clinician 4 Question 1: C > B > A Question 2: B Additional Feedback: [Regarding Question 1,] representation A and B do not show any interpretation of the diagnostic test results, whereas representation C does. I think doctors are most familiar with representation B, as it more closely resembles the EHR. Although representation C is visually pleasing, I’m not sure how the scale of the sides of the triangle should be interpreted. [Regarding Question 2,] again I like the triangle, but it’s hard to interpret what the scale of the sides of the triangle mean. I think option A is again what doctors are more familiar with.
• Clinician 5 Question 1: C > B > A Question 2: A
• Clinician 6 Question 1: C > B > A Question 2: A
• Clinician 7 Question 1: C Question 2: B Additional Feedback: I thought I’d share with you my thoughts on the medical aspects in your scenario first (although I realise you didn’t ask me for them). [...] The Cochrane review concludes that MRI provides low sensitivity and specificity and does not qualify it as an add on test for the early diagnosis due to dementia (Lombardi G et. al. Cochrane database 2020). The reason for MRI imaging is (according to the international guidelines) to exclude non-degenerative or surgical causes of cognitive impairment. [...] In your example the condition became apparent when the CDR-SB score at Month 24 hit 3.0 (supported by the sequence of measurements over time showing worsening CDR-SB score). I imagine the MRI was triggered by slight worsening in the CDR-SB score (to exclude an alternative diagnosis). To answer your specific questions: Q1. The representation C describes your (false) hypothesis that it was the MRI that made the diagnosis of MCI more likely/apparent the best—I really like the triangles. Q2. I really like the decision boundary.
• Clinician 8 Question 1: C > B > A Question 2: B
• Clinician 9 Question 1: C Question 2: B Additional Feedback: Q1. Representation C gives the clearest illustration of the diagnostic change following MRI. However, the representation of beliefs on a continuous spectrum around discrete cognitive states could be potentially confusing given that cognitive function is itself a continuum (and ‘MCI’, ‘Dementia’ and ‘NL’ are stations on a spectrum rather than discrete states). Also, while representation C is the clearest illustration, it is the representation that conveys the least actual data and it isn’t clear from the visualisation exactly what each shift in 2D space represents. Also, the triangulation in ‘C’ draws a direct connection between NL and Dementia, implying that this is a potential alternative route for disease progression, although this is more intuitively considered as a linear progression from NL to MCI to Dementia. Q2. For me, the decision boundary representation best expresses the concept of the likelihood of ordering and MRI with the same caveats described above. Option B does best convey the likelihood of ordering an MRI, but doesn’t convey the information value provided by that investigation. However, my understanding is that this is not what you are aiming to convey here. | 1. What are the strengths and weaknesses of the proposed method for obtaining an interpretable representation of a behavioral policy?
2. How does the proposed method compare to alternative approaches, such as the 'Agent Markov Model' introduced by Unhelkar and Shah?
3. What are the advantages and disadvantages of the proposed method compared to AMM?
4. How can the number of latent states be chosen in a more general setting for the proposed method?
5. How sensitive are the results of the proposed method to model misspecification, specifically regarding the number of latent states? | Review | Review
The paper proposes a method for obtaining an interpretable representation of some behavioral policy based on partial observations and in an offline learning setting. The paper is well written and the approach is clearly explained. I just have a few comments/questions:
In terms of alternative approaches to modeling agent/decision maker behavior, I can think of at least one other alternative and I wonder how your method can be compared to it: ‘Agent Markov Model’ introduced by Unhelkar and Shah (Learning Models of Sequential Decision-Making with Partial Specification of Agent Behavior AAAI 2019) is also modeling agent behavior using a state space model with partial observations. Similar to your approach, they are also interested in modeling the agent and not the actual mechanics of the world. As far as I understand AMM also satisfies three key criteria presented in the paper. AMM also uses a simpler model for the policy; they assume agent follows a stationary Markov policy: \pi(a|s, z). Can you discuss advantages of your model over theirs?
Another advantage of AMM over INTERPOLE is that it can infer the number of latent states via a nonparametric Bayesian approach. In INTERPOLE, do you have any recommendation on how S can be chosen in a more general setting? How sensitive are the results w.r.t model misspecification (specifically w.r.t. the number of latent state)? |
ICLR | Title
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Abstract
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (“pre-training”) followed by episodic finetuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the “shot” setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
1 INTRODUCTION
Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’ learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or “shots”, of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.
A simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this “pre-training” phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode’s small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.
However, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong “pre-training” baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task. Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes
have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?
As a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that indeed a particular functionality that this fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particular shot; accomplished by performing the fine-tuning on episodes of that shot. However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with (Cao et al., 2020)’s theoretical finding in the context of Prototypical Networks (Snell et al., 2017) where inferior performance was reported when the shot at training time did not match the shot at test time.
Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shot-specialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.
In what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment.
2 BACKGROUND
Problem definition Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes Ctest and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N-way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes Ctrain, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.
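To make this episode structure concrete, the following is a minimal Python sketch of N-way, k-shot episode sampling. It is illustrative only: the `images_by_class` mapping, `class_pool` and the query size are assumptions on our part, not part of any particular benchmark's sampler.

```python
import random

def sample_episode(images_by_class, class_pool, num_ways=5, shot=1, query_size=10):
    """Sample an N-way, k-shot episode: `shot` labeled support examples per
    class plus a disjoint query set of `query_size` examples per class."""
    classes = random.sample(class_pool, num_ways)
    support, query = [], []
    for label, c in enumerate(classes):
        examples = random.sample(images_by_class[c], shot + query_size)
        support += [(x, label) for x in examples[:shot]]
        query += [(x, label) for x in examples[shot:]]
    return support, query
```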
Episodic training Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from Ctrain this time. In other words, the model is trained to minimize a loss of the form:
$$\mathbb{E}_{S,Q\sim P^{N,k}_{\mathrm{train}}}\left[\frac{1}{|Q|}\sum_{(x^*,y^*)\in Q}-\log p_\theta(y^*\mid x^*,S)\right] \qquad (1)$$

where S and Q are support and query sets sampled from the distribution P^{N,k}_train of N-way, k-shot training episodes induced by Ctrain, and θ represents the model’s parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their “inference algorithm”, i.e. the manner in which p_θ(y* | x*, S) is computed to classify query examples based on the support set.
Prototypical Networks Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype φc for each class c in an episode as
$$\phi_c = \frac{1}{|S_c|}\sum_{x\in S_c} f_\theta(x), \qquad (2)$$
where f is an embedding function parametrized by θ and Sc represents the set of support examples belonging to class c, and classifies a given query example as
$$p(y^* = c \mid x^*, S) = \frac{\exp\left(-\|f_\theta(x^*) - \phi_c\|_2^2\right)}{\sum_{c'}\exp\left(-\|f_\theta(x^*) - \phi_{c'}\|_2^2\right)}. \qquad (3)$$
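For illustration, Equations 2 and 3 can be sketched in a few lines of PyTorch. This is a minimal sketch rather than a reference implementation; the embedding network `f_theta` and the tensor layout are assumptions.

```python
import torch

def prototypical_predictions(f_theta, support_x, support_y, query_x, num_classes):
    """Class prototypes (Eq. 2) followed by a softmax over negative squared
    Euclidean distances (Eq. 3)."""
    z_support = f_theta(support_x)                 # [n_support, dim]
    z_query = f_theta(query_x)                     # [n_query, dim]
    prototypes = torch.stack([
        z_support[support_y == c].mean(dim=0)      # phi_c: mean of class-c embeddings
        for c in range(num_classes)
    ])                                             # [num_classes, dim]
    sq_dists = torch.cdist(z_query, prototypes) ** 2
    return torch.softmax(-sq_dists, dim=-1)        # p(y* = c | x*, S)
```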
3 SHOT CONDITIONAL EPISODIC (SCONE ) TRAINING
In this section we introduce Shot CONditional Episodic (SCONE ) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.
Training objective Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from P^{N,k}_train and compute a prediction p_θ(y* | x*, S) for each query example x*. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to θ into the inference algorithm. In this work we concern ourselves with models that use an embedding function fθ to obtain a representation for the support and query examples of each episode on top of which the inference algorithm is applied. In Prototypical Networks, for instance, fθ contains all of the model’s learnable parameters.
SCONE trains on episodes of varying shots and conditions the model on each episode’s shot distribution (Figure 1) by minimizing
$$\mathbb{E}_{k\sim P_k}\,\mathbb{E}_{S,Q\sim P^{N,k}_{\mathrm{train}}}\left[\frac{1}{|Q|}\sum_{(x^*,y^*)\in Q}-\log p_{\theta_k}(y^*\mid x^*,S)\right], \qquad (4)$$

where P_k is the distribution over shots at training time and θ_k depends on an episode’s sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.
Conditioning mechanism Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode’s number of shots):
$$\mathrm{FiLM}(x) = \gamma(k)\odot x + \beta(k). \qquad (5)$$
The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode’s shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.
More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network’s architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of β (regularizing the offset towards 0) and the L2 norm of γ − 1 (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.
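A minimal sketch of such a per-shot FiLM parameter bank attached to a batch normalization layer is given below. The module name and constructor arguments are illustrative assumptions; only the overall mechanism (one (γ, β) row per shot, initialized from pre-trained coefficients, with shots above MAX-SHOT sharing a row) follows the description above.

```python
import torch
import torch.nn as nn

class ShotConditionalBN(nn.Module):
    """Batch norm whose FiLM (scale/offset) parameters are selected by the
    episode's shot; all rows start from the pre-trained coefficients."""
    def __init__(self, num_features, max_shot, init_gamma=None, init_beta=None):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        g0 = init_gamma if init_gamma is not None else torch.ones(num_features)
        b0 = init_beta if init_beta is not None else torch.zeros(num_features)
        self.gamma = nn.Parameter(g0.expand(max_shot, -1).clone())  # [max_shot, C]
        self.beta = nn.Parameter(b0.expand(max_shot, -1).clone())

    def forward(self, x, shot):
        k = min(shot, self.gamma.shape[0]) - 1     # shots >= MAX-SHOT share a row
        g = self.gamma[k].view(1, -1, 1, 1)
        b = self.beta[k].view(1, -1, 1, 1)
        return g * self.bn(x) + b                  # FiLM(x) = gamma(k) * x + beta(k)
```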
Handling class-imbalanced episodes SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.
Concretely, the episode’s “shot distribution” s (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode’s FiLM parameters. This can be thought of as an embedding lookup s^T F in a matrix F of FiLM parameters using a shot distribution s.
Smoothing the shot distribution We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTH-SHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer), and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being ‘on’. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value m², and so on, with entries further away from s receiving exponentially-decaying values.
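The shot-distribution lookup and the smoothing described above could be sketched as follows. The decay schedule follows the m ← m² updates of Algorithm 1 (weights m, m², m⁴, ... at distances 1, 2, 3, ...); the function names and the NumPy formulation are our own assumptions.

```python
import numpy as np

def smooth_shot(shot, m, max_shot):
    """Smoothed shot vector: weight 1 at the shot's own slot and, following
    Algorithm 1's m <- m**2 schedule, weights m, m**2, m**4, ... at
    distances 1, 2, 3, ... from it."""
    s = min(shot, max_shot) - 1                    # 0-indexed, capped at MAX-SHOT
    d = np.abs(np.arange(max_shot) - s)
    return np.where(d == 0, 1.0, m ** (2.0 ** (d - 1)))

def episode_film_params(class_shots, film_bank, m, max_shot):
    """Shot distribution s (summed smoothed shots, normalized), used as an
    embedding lookup s^T F into the FiLM parameter bank F."""
    s = sum(smooth_shot(k, m, max_shot) for k in class_shots)
    s = s / s.sum()                                # the episode's shot distribution
    return s @ film_bank                           # [max_shot] @ [max_shot, dim]
```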
4 RELATED WORK
Few-shot classification A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.
Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set.
Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode’s classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.
Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.
Understanding episodic learning Our work is part of a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning, and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.
Feature-wise conditioning Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. (Shu et al., 2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, (Sun et al., 2019) use feature-wise transformations as a means of transfer to new tasks. (Oreshkin et al., 2018; Requeima et al., 2019; Bateni et al., 2019) use FiLM to condition metric learners’ backbones on the support set, while (Dvornik et al., 2020) uses it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning Liu et al. (2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this and thus the ‘shot’ information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).
5 EXPERIMENTS
5.1 EXPLORING THE ROLE OF ‘SHOTS’ DURING EPISODIC FINE-TUNING
In this subsection, we examine the effect of the ‘shot’ that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model’s ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).
Experimental setup We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round. We ran the following
variants of episodic fine-tuning: exclusively on 1-shot episodes (‘Fine-tune on 1-shot’), exclusively on 5-shot episodes (‘Fine-tune on 5-shot’), on episodes whose shot is drawn uniformly from the range [1, 40] (‘Fine-tune on all shots’), and on episodes with that same shot distribution but using SCONE (‘SCONE Fine-tune on all shots’), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider ‘Fine-tune on best k-shot’, an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1-40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.
As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE’s MAX-SHOT hyperparameter to be 40 for this experiment.
We also compare to EST (Cao et al., 2020) which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter ρ. We applied EST on top of the embeddings of ‘Fine-tune on all shots’ and we tuned ρ and the hyperparameter d controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1-40) are substantially different than those used in the original EST paper: d = 480 and ρ = 5e-8 (versus the original d = 120 and ρ = 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.
In all cases, we fix the ‘way’ to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is not seen during pre-training nor episodic fine-tuning, on 5-way episodes of different shot settings.
Findings We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, the 1-shot accuracies show that ‘Fine-tune on 1-shot’ surpasses the performance of all other variants on 1-shot test episodes, with analogous findings in the 5-shot and 40-shot accuracies for ‘Fine-tune on 5-shot’ and ‘Fine-tune on 40-shot’, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks (‘Fine-tune on 40-shot’) performs very poorly on 1-shot test tasks and vice-versa. We also note that the ‘Fine-tune on best k-shot’ model does not suffice to perform well in all settings either: since k = 15 there, it performs poorly on 1-shot episodes, for instance.
In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is
episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that ‘Fine-tune on all shots’ does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.
Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings (‘SCONE Fine-tune on all shots’ vs ‘Fine-tune on all shots’). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting, which likewise strives for shot resiliency but does so by encouraging invariance to the shot setting rather than shot awareness.
5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET
In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset is comprised of ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely but it is possible to also encounter very large shots (e.g. >400), though this would happen very infrequently. We include histograms of the shot distributions of Meta-Dataset’s training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and imbalanced episodes.
Prototypical Network on ImageNet For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier’s embedding weights using Prototypical Networks. For this, we use Meta-Dataset’s sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning (‘Standard’) to SCONE episodic fine-tuning (‘SCONE ’). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters (‘L2 BN’). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we compare to EST as well where we computed the EST transformation on the ‘L2 BN’ instead of the ‘Standard’ Prototypical Network variant, since that worked best. We tuned EST’s hyperparameters very extensively, as
described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are d = 480 and ρ = 5e-9. As noted in Section 5.1, these are substantially different than those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same ‘Fine-tune on best k-shot’ baseline described in Section 5.1. In this case we found that the best k was 20.
We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE’s MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE’s hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.
Meta-Baseline on all datasets Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase (‘Classifier-Baseline’) followed by an episodic fine-tuning phase (‘Meta-Baseline’). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline’s pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.
When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline which is in this case trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).
Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline’s strong pre-trained solution where we froze the embedding weights to that powerful representation and we optimized only the set of SCONE’s FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline’s nearest centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the
embedding weights frozen (‘Control’). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
Findings The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE , but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE . We also observe that EST is competitive in this setting, only slightly worse than SCONE , though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful (‘Control’), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE , these results demonstrate its effectiveness on different shot distributions, and in different backbones.
FiLM parameter visualization Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.
Example smoothed shot distribution To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e-6, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice therefore, we are activating ‘blocks’ of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.
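Assuming the hypothetical `smooth_shot` helper sketched in Section 3, this example distribution could be reproduced roughly as follows:

```python
import numpy as np

def smooth_shot(shot, m, max_shot):                # as sketched in Section 3
    s = min(shot, max_shot) - 1
    d = np.abs(np.arange(max_shot) - s)
    return np.where(d == 0, 1.0, m ** (2.0 ** (d - 1)))

# Hypothetical 4-way episode with shots 1, 10, 23 and 103, m = 1 - 1e-6.
s = sum(smooth_shot(k, m=1 - 1e-6, max_shot=200) for k in (1, 10, 23, 103))
s /= s.sum()   # two peaks: one over the small shots, one around shot 103
```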
6 CONCLUSION
In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments, as well as in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism of determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?
A APPENDIX
SCONE’S TRAINING ALGORITHM IN MORE DETAIL
For clarity, we provide pseudocode for SCONE ’s training algorithm including our procedure for shot smoothing in Algorithm 1. We will also release our code upon publication for reproducibility.
Algorithm 1 SCONE training
Input: Distribution of training episodes P_train, pre-trained embedding weights θ, pre-trained batch norm weights γ and β, embedding function f, learning rate ε (a float), smoothing coefficient m (a float in the range [0, 1]) and maximum supported shot MAX-SHOT (an int).
Output: Fine-tuned embedding weights θ′ and FiLM parameters F = {γ′, β′}.

procedure SMOOTH-SHOT(s, m, MAX-SHOT)
    if s > MAX-SHOT then
        s ← MAX-SHOT                              ▷ Cap s to the max supported shot
    end if
    s ← s − 1                                     ▷ So that s is in the range [0, MAX-SHOT − 1]
    s̃ ← ONE-HOT(s, DEPTH=MAX-SHOT)                ▷ Init the smoothed shot
    for 0 ≤ j ≤ MAX-SHOT do
        l ← s − j − 1                             ▷ The index j slots to the left of s
        l ← ONE-HOT(l, DEPTH=MAX-SHOT) ∗ m        ▷ Outputs the zero vector if l < 0
        r ← s + j + 1                             ▷ The index j slots to the right of s
        r ← ONE-HOT(r, DEPTH=MAX-SHOT) ∗ m        ▷ Outputs the zero vector if r ≥ MAX-SHOT
        s̃ ← s̃ + l + r
        m ← m²                                    ▷ Adjust the next iteration’s smoothing
    end for
    return s̃
end procedure

θ′ ← θ                                            ▷ Init the embedding weights from the pre-trained embeddings
for 1 ≤ k ≤ MAX-SHOT do                           ▷ Init the FiLM params from the pre-trained batch norm params
    γ′(k) ← γ
    β′(k) ← β
end for
while validation accuracy is improving do
    Sample a training episode with support set S and query set Q
    Let k₁, . . . , k_N be the shots of the episode’s classes
    s ← ZEROS(MAX-SHOT)                           ▷ Init the (unnormalized) shot distribution
    for each class i do
        s_i ← SMOOTH-SHOT(k_i, m, MAX-SHOT)       ▷ Smooth the (one-hot) shot of class i
        s ← s + s_i
    end for
    s ← s ÷ SUM(s)                                ▷ Normalize to get the episode’s shot distribution
    γ′_s ← sᵀγ′                                   ▷ Select the FiLM params for the episode
    β′_s ← sᵀβ′
    Let S_H = {(f(x; θ′, γ′_s, β′_s), y)}_{(x,y)∈S}   ▷ The embedded support set
    Let Q_H = {(f(x; θ′, γ′_s, β′_s), y)}_{(x,y)∈Q}   ▷ The embedded query set
    L ← (1/|Q_H|) Σ_{(h*,y*)∈Q_H} −log p(y* | h*, S_H)   ▷ Compute the episode’s loss
    θ′ ← θ′ − ε ∂L/∂θ′                            ▷ Update the model via gradient descent
    γ′ ← γ′ − ε ∂L/∂γ′
    β′ ← β′ − ε ∂L/∂β′
end while
Hypothesis testing We follow the same procedure as in (Triantafillou et al., 2020) to compute ranks for different methods that in turn determine which entries to bold in our tables. Specifically, we perform a 95% confidence interval statistical test on the difference between the mean accuracies of pairs of entries of each row. If for two entries we are not able to reject the null hypothesis that the difference between their means is 0, they will receive the same rank. For example, if model A and model B are tied for the first place according to that test, they will each receive the rank 1.5 (the average of the ranks 1 and 2). If we are able to reject that hypothesis, however, the entry with the larger mean accuracy will receive a higher rank than the other. In each row, we bold the entries that are tied for the highest rank. For convenience, in the last section of the Appendix, we show a more heavily-annotated copy of each table in the paper to make the rank computation procedure more transparent.
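The following is a rough sketch of this ranking procedure, assuming per-method mean accuracies and standard errors over the test episodes and a normal-approximation test on the difference of means; the exact test statistic of the original protocol may differ.

```python
import numpy as np

def row_ranks(means, sems):
    """Ranks for one table row: two entries whose 95% CI on the difference of
    means includes 0 are tied and share the average of the tied ranks."""
    n = len(means)
    def differ(i, j):
        # Normal-approximation test on the difference of the two means.
        return abs(means[i] - means[j]) > 1.96 * np.hypot(sems[i], sems[j])
    ranks = np.zeros(n)
    for i in range(n):
        n_better = sum(1 for j in range(n) if differ(i, j) and means[j] > means[i])
        n_tied = sum(1 for j in range(n) if j == i or not differ(i, j))
        ranks[i] = n_better + (1 + n_tied) / 2.0   # average of the tied positions
    return ranks
```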
ADDITIONAL RESULTS
First, we provide in Figure 5 additional plots to cover more evaluation shot settings than those shown in Figure 2 in the main paper. The setup for this is exactly the same as for Figure 2.
Next, since we observe that the ‘Best k-shot’ baseline performs well in Table 1, which reflects average performance across episodes of varying shots, we further break down its performance across different ranges of shots in Figure 6. We find that while indeed ‘Best k-shot’ performs well for large shots, it actually performs poorly for low shots. This finding strengthens our case against this baseline: not only is it computationally expensive, requiring training multiple different models to pick the one that performs best, but it is also not as consistent as SCONE in its performance on different shot ranges.
Finally, to place our results into context, we display the results of Meta-Baseline with SCONE alongside the performance of recent work, controlling for model capacity, in Table 3. The approaches we compare against are: Classifier-Baseline (Chen et al., 2020), SUR-pf (Dvornik et al., 2020), TaskNorm (Bronskill et al., 2020) and Simple CNAPs (Bateni et al., 2020). In particular, we report the performance of the parametric family of SUR (‘SUR-pf’) instead of full SUR (which has 8x more parameters), in order to make apples-to-apples comparisons with the remaining approaches. We find that the Meta-Baseline method, when combined with SCONE, achieves state-of-the-art on Meta-Dataset in this context, according to the average rank metric.
EXPERIMENTAL DETAILS
We plan to open source our code upon publication, including all experimental details. In the meantime, we outline these details below for completeness.
Architecture We use ResNet-18 as the feature extractor for all of our experiments, following the implementation in (Triantafillou et al., 2020). For the SCONE variants, we add FiLM to all of the batch normalization layers throughout the network.
Image processing For all experiments, we use Meta-Dataset’s input pipeline to obtain images, and we follow the image processing performed in (Chen et al., 2020) which yields images of size 128 x 128. We apply standard data augmentation consisting of horizontal flipping and random cropping followed by standardization using a commonly-used mean and standard deviation as in (Chen et al., 2020). For episodic models, data augmentation is applied in both the support and query sets. No data augmentation is used at validation nor test time.
Optimization We use ADAM with exponential learning rate decay and weight decay of 1e-8 to optimize all models in this work. We tune the initial learning rate, the decay rate, and the number of updates between each learning rate decay separately for each model presented in the paper. The initial learning rate values we considered are 0.0005 and 0.001, with a decay factor of 0.8 or 0.9 applied every 1000, 2000, or 3000 steps. We ran a variant for every combination of those values. We also tune the weight decay applied to the FiLM parameters (for SCONE variants) or the batch normalization parameters (for non-SCONE variants). We tried the values: 1e-8, 1e-6, 1e-4.
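As a minimal PyTorch sketch of this setup (the model below is a stand-in for the actual backbone, the data is a placeholder, and the particular grid point shown is one of the values listed above):

```python
import torch

model = torch.nn.Linear(512, 64)   # stand-in for the ResNet-18 backbone
optimizer = torch.optim.Adam(model.parameters(), lr=0.0005, weight_decay=1e-8)
# Multiply the learning rate by 0.9 every 2000 updates (one grid point above).
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.9)

for step in range(10000):          # illustrative number of training updates
    x, y = torch.randn(32, 512), torch.randn(32, 64)  # placeholder batch
    loss = torch.nn.functional.mse_loss(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```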
SCONE hyperparameters For the SCONE variants, aside from the above hyperparameters, we additionally tune the smoothing parameter m described in the main paper that is used for training and for evaluation. We did not tune the MAX-SHOT hyperparameter mentioned in the main paper as we found that our initial choices worked reasonably. Specifically, we set it to 40 for the smaller-scale experiments where the maximum shot was 40, and to 200 for the large-scale experiments. The latter choice was performed heuristically since shots much larger than 200 are unlikely under the shot distribution induced by Meta-Dataset’s episode generator. For more information on that shot distribution, we refer the reader to the next section.
SCONE smoothing hyperparameter We tuned the value of this hyperparameter that will be used both at training and at evaluation. At training time, we considered the values 0, 0.2, 0.4, 0.6 and 0.9 for Prototypical Network experiments, and we picked the variant that worked best according to the validation performance that was computed without smoothing. Once the model was trained and all of the remaining hyperparameters were tuned, we performed a final validation round to tune the evaluation-time smoothing that will be used in the chosen model. We found it beneficial to use larger values here, picking the value of 1 − 1e-6 for example for the Prototypical Network on ImageNet. In the Meta-Baseline codebase, we trained with larger values of smoothing (the best we found was 1 − 1e-10) and didn’t find it beneficial to additionally smooth at evaluation time.
Model selection For each experiment, we perform early stopping according to the performance on the validation set. For the models that train on a single shot k in the smaller-scale experiments, the validation performance that we monitor for early stopping is the average query set accuracy on k-shot 5-way episodes drawn from the validation set. For the models in the small-scale experiments that train on a distribution of shots, we use the average validation performance over 5-way episodes whose shot is sampled according to the same distribution used for training the respective model. For the larger-scale Meta-Dataset experiments, we draw validation episodes only from the validation set of ImageNet for the experiments that train on ImageNet only, or from the validation sets of all datasets for the experiments that train on all datasets. In both cases, the validation episodes are drawn using Meta-Dataset’s episode generator that yields episodes of variable ways and variable shots with class imbalance. In all cases, the average validation performance is computed over 600 validation episodes and is monitored every 2K training updates. We apply exponential smoothing to the resulting validation "curve" (using the default value of 0.6 in TensorBoard). Then, we choose the update step at which the highest peak of that curve is found and we use the checkpoint corresponding to that update step for testing.
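A small sketch of this model selection procedure, assuming the validation accuracies are collected every 2K updates into a list (the smoothing recurrence mirrors TensorBoard's default behaviour):

```python
def select_checkpoint(val_accuracies, smoothing=0.6):
    """TensorBoard-style exponential smoothing of a validation curve; returns
    the index (checkpoint) where the smoothed curve peaks."""
    smoothed, last = [], val_accuracies[0]
    for acc in val_accuracies:
        last = smoothing * last + (1 - smoothing) * acc
        smoothed.append(last)
    return max(range(len(smoothed)), key=smoothed.__getitem__)

# Example: accuracies monitored every 2K updates; pick the checkpoint to test.
best = select_checkpoint([0.41, 0.44, 0.47, 0.46, 0.48, 0.45])
```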
DISTRIBUTION OF SHOTS IN META-DATASET EPISODES
For reference, Figure 7 displays histograms of the number of shots produced by Meta-Dataset’s episode sampling algorithm. These are computed by sampling 600 episodes per dataset for each of the training, validation and test splits of Meta-Dataset.
TABLES WITH MORE DETAILED RANKS
In this section, we include a copy of the same tables appearing previously in the paper, but additionally annotated with per-row ranks, to make the rank computation method more transparent. These tables are Table 4, Table 5 and Table 6. | 1. What is the focus of the paper regarding episodic fine-tuning in neural networks?
2. What is the main contribution of the proposed training objective and how does it differ from prior works?
3. How does the reviewer assess the novelty and originality of the proposed approach compared to existing works?
4. Are there any concerns regarding the experiments and comparisons presented in the paper? | Review | Review
This paper aims to understand the role of the episodic fine-tuning phase and discovers that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots. It proposes a shot-conditional form of episodic fine-tuning.
The main contribution of this work is the training objective, which varies the shots by introducing a distribution over shots P_k. In addition, the model parameters are not separately or independently maintained for different shots. The author proposed a conditioning mechanism using FiLM, where k is a conditioning variable or input to the FiLM network (γ and β). The idea of shot-specific feature extractors is not new, and it is a common trick in amortized variational inference to reduce the number of model parameters.
In the experimental section, I did not see comparisons against other baselines from previous works. The comparisons are mostly against the two methods named ‘Standard’ and ‘L2 BN’, which seem to be simple variations of Prototypical Networks. Since the proposed SCONE is itself a variation of Prototypical Networks, this also reduces the novelty and originality.
I am not familiar with this area or the related research papers, but was assigned to review this paper. My evaluation is based only on my understanding of the contents included in the manuscript. I would like to see the comments from other reviewers in this domain.
ICLR | Title
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Abstract
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (“pre-training”) followed by episodic finetuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the “shot” setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
1 INTRODUCTION
Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’ learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or “shots”, of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.
A simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this “pre-training” phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode’s small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.
However, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong “pre-training” baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task. Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes
have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?
As a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that indeed a particular functionality that this fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particular shot; accomplished by performing the fine-tuning on episodes of that shot. However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with (Cao et al., 2020)’s theoretical finding in the context of Prototypical Networks (Snell et al., 2017) where inferior performance was reported when the shot at training time did not match the shot at test time.
Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shotspecialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.
In what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment.
2 BACKGROUND
Problem definition Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes Ctest and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N -way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes Ctrain, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.
Episodic training Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from Ctrain this time. In other words, the model is trained to minimize a loss of the form:
ES,Q∼PN,ktrain 1 |Q| ∑ (x∗,y∗)∈Q − log pθ(y∗ | x∗,S) (1) where S and Q are support and query sets sampled from the distribution PN,ktrain of N -way, k-shot training episodes induced by Ctrain, and θ represents the model’s parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their “inference
algorithm”, i.e. the manner in which pθ(y∗ | x∗,S) is computed to classify query examples based on the support set.
Prototypical Networks Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype φc for each class c in an episode as
φc = 1 |Sc| ∑ x∈Sc fθ(x), (2)
where f is an embedding function parametrized by θ and Sc represents the set of support examples belonging to class c, and classifies a given query example as
p(y∗ = c | x∗,S) = exp(−||x ∗ − φc||22)∑
c′ exp(−||x∗ − φc′ ||22) . (3)
3 SHOT CONDITIONAL EPISODIC (SCONE ) TRAINING
In this section we introduce Shot CONditional Episodic (SCONE ) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.
Training objective Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from P k,Ntrain and compute a prediction pθ(y
∗ | x∗,S) for each query example x∗. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to θ into the inference algorithm. In this work we concern ourselves with models that use an embedding function fθ to obtain a representation for the support and query examples of each episode on top of which the inference algorithm is applied. In Prototypical Networks, for instance, fθ contains all of the model’s learnable parameters.
SCONE trains on episodes of varying shots and conditions the model on each episode’s shot distribution. (Figure 1) by minimizing
Ek∼Pk ES,Q∼PN,ktrain 1 |Q| ∑ (x∗,y∗)∈Q − log pθk(y∗ | x∗,S) , (4) where Pk is the distribution over shots at training time and θk depends on an episode’s sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.
Conditioning mechanism Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode’s number of shots):
FiLM(x) = γ(k) x+ β(k). (5)
The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode’s shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.
More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shots settings greater than or equal to MAXSHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network’s architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of β (regularizing the offset towards 0) and the L2 norm of γ − 1 (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.
Handling class-imbalanced episodes SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.
Concretely, the episode’s “shot distribution” s (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode’s FiLM parameters. This can be thought of as an embedding lookup sTF in a matrix F of FiLM parameters using a shot distribution s.
Smoothing the shot distribution We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTHSHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer), and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being ‘on’. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value m2, and so on, with entries further away from s receiving exponentially-decaying values.
4 RELATED WORK
Few-shot classification A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.
Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set.
Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode’s classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.
Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.
Understanding episodic learning Our work inscribes itself in a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.
Feature-wise conditioning Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. Shu et al. (2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, Sun et al. (2019) use feature-wise transformations as a means of transfer to new tasks. Oreshkin et al. (2018), Requeima et al. (2019) and Bateni et al. (2019) use FiLM to condition metric learners’ backbones on the support set, while Dvornik et al. (2020) use it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning (Liu et al., 2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this, and thus the ‘shot’ information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).
5 EXPERIMENTS
5.1 EXPLORING THE ROLE OF ‘SHOTS’ DURING EPISODIC FINE-TUNING
In this subsection, we examine the effect of the ‘shot’ that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model’s ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).
Experimental setup We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round. We ran the following
variants of episodic fine-tuning: exclusively on 1-shot episodes (‘Fine-tune on 1-shot’), exclusively on 5-shot episodes (‘Fine-tune on 5-shot’), on episodes whose shot is drawn uniformly from the range [1, 40] (‘Fine-tune on all shots’), and on episodes with that same shot distribution but using SCONE (‘SCONE Fine-tune on all shots’), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider ‘Fine-tune on best k-shot’, an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1–40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.
As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE’s MAX-SHOT hyperparameter to 40 for this experiment.
We also compare to EST (Cao et al., 2020), which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter ρ. We applied EST on top of the embeddings of ‘Fine-tune on all shots’ and we tuned ρ and the hyperparameter d controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1–40) are substantially different from those used in the original EST paper: d = 480 and ρ = 5e-8 (versus the original d = 120 and ρ = 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.
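EST’s precise construction is given in Cao et al. (2020); as a rough illustration of the description above only, the sketch below builds a projection from the top-d eigenvectors of a between-class-minus-ρ-times-within-class scatter matrix. This rendering is our assumption and should not be read as the EST paper’s exact algorithm.

```python
import numpy as np

def est_projection(emb, labels, d, rho):
    """A linear map favouring inter-class over intra-class variance.

    emb: (n, D) array of embeddings; labels: (n,) integer array.
    Returns a (D, d) projection matrix W; project with emb @ W."""
    mu = emb.mean(axis=0)
    D = emb.shape[1]
    sigma_b = np.zeros((D, D))               # between-class scatter
    sigma_w = np.zeros((D, D))               # within-class scatter
    for c in np.unique(labels):
        xc = emb[labels == c]
        mc = xc.mean(axis=0)
        diff = (mc - mu)[:, None]
        sigma_b += len(xc) * diff @ diff.T
        sigma_w += (xc - mc).T @ (xc - mc)
    vals, vecs = np.linalg.eigh((sigma_b - rho * sigma_w) / len(emb))
    return vecs[:, -d:]                      # eigenvectors of the d largest eigenvalues

W = est_projection(np.random.randn(100, 16),
                   np.random.randint(0, 5, 100), d=8, rho=5e-8)
```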
In all cases, we fix the ‘way’ to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is seen neither during pre-training nor during episodic fine-tuning, on 5-way episodes of different shot settings.
Findings We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, the 1-shot accuracies show that ‘Fine-tune on 1-shot’ surpasses the performance of all other variants on 1-shot test episodes, with analogous findings in the 5-shot and 40-shot accuracies for ‘Fine-tune on 5-shot’ and ‘Fine-tune on 40-shot’, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks (‘Fine-tune on 40-shot’) performs very poorly on 1-shot test tasks, and vice-versa. We also note that the ‘Fine-tune on best k-shot’ model does not suffice to perform well in all settings either: since k = 15 in that case, it performs very poorly on 1-shot episodes, for instance.
In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is
episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that ‘Fine-tune on all shots’ does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.
Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings (‘SCONE Fine-tune on all shots’ vs ‘Fine-tune on all shots’). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting, which likewise strives for shot resiliency but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE.
5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET
In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset comprises ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely, but it is also possible to encounter very large shots (e.g. >400), though this happens very infrequently. We include histograms of the shot distributions of Meta-Dataset’s training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and on imbalanced episodes.
Prototypical Network on ImageNet For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier’s embedding weights using Prototypical Networks. For this, we use Meta-Dataset’s sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning (‘Standard’) to SCONE episodic fine-tuning (‘SCONE’). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters (‘L2 BN’). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we also compare to EST, computing the EST transformation on top of the ‘L2 BN’ variant instead of the ‘Standard’ Prototypical Network variant, since that worked best. We tuned EST’s hyperparameters very extensively, as
described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are d = 480 and ρ = 5e-9. As noted in Section 5.1, these are substantially different from those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same ‘Fine-tune on best k-shot’ baseline described in Section 5.1. In this case we found that the best k was 20.
We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE’s MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE’s hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.
Meta-Baseline on all datasets Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase (‘Classifier-Baseline’) followed by an episodic fine-tuning phase (‘Meta-Baseline’). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline’s pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.
When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline, which in this case is trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).
Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline’s strong pre-trained solution: we froze the embedding weights to that powerful representation and optimized only the set of SCONE’s FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline’s nearest-centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the
embedding weights frozen (‘Control’). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
Findings The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE, but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE. We also observe that EST is competitive in this setting, only slightly worse than SCONE, though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful (‘Control’), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE, these results demonstrate its effectiveness on different shot distributions and with different backbones.
FiLM parameter visualization Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.
Example smoothed shot distribution To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e-6, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice, therefore, we are activating ‘blocks’ of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.
6 CONCLUSION
In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments, as well as in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism for determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?
A APPENDIX
SCONE’S TRAINING ALGORITHM IN MORE DETAIL
For clarity, we provide pseudocode for SCONE’s training algorithm, including our procedure for shot smoothing, in Algorithm 1. We will also release our code upon publication for reproducibility.
Algorithm 1 SCONE training
Input: Distribution of training episodes Ptrain, pre-trained embedding weights θ, pre-trained batch norm weights γ and β, embedding function f, learning rate ε (a float), smoothing coefficient m (a float in the range [0, 1]) and maximum supported shot MAX-SHOT (an int).
Output: Fine-tuned embedding weights θ′ and FiLM parameters F = {γ′, β′}.

procedure SMOOTH-SHOT(s, m, MAX-SHOT)
    if s > MAX-SHOT then
        s ← MAX-SHOT ▷ Cap s to the max supported shot
    end if
    s ← s − 1 ▷ So that s is in the range [0, MAX-SHOT − 1]
    s̃ ← ONE-HOT(s, DEPTH=MAX-SHOT) ▷ Init the smoothed shot
    for 0 ≤ j ≤ MAX-SHOT do
        l ← s − j − 1 ▷ The index j slots to the left of s
        l ← ONE-HOT(l, DEPTH=MAX-SHOT) ∗ m ▷ Outputs the zero vector if l < 0
        r ← s + j + 1 ▷ The index j slots to the right of s
        r ← ONE-HOT(r, DEPTH=MAX-SHOT) ∗ m ▷ Outputs the zero vector if r ≥ MAX-SHOT
        s̃ ← s̃ + l + r
        m ← m² ▷ Decay the next iteration’s smoothing
    end for
    return s̃
end procedure

θ′ ← θ ▷ Init the embedding weights from the pre-trained embeddings
for 1 ≤ k ≤ MAX-SHOT do ▷ Init the FiLM params from the pre-trained batch norm params
    γ′(k) ← γ
    β′(k) ← β
end for
while validation accuracy is improving do
    Sample a training episode with support set S and query set Q
    Let k1, . . . , kN be the shots of the episode’s classes
    s ← ZEROS(MAX-SHOT) ▷ Init the (unnormalized) shot distribution
    for each class i do
        si ← SMOOTH-SHOT(ki, m, MAX-SHOT) ▷ Smooth the shot of class i
        s ← s + si
    end for
    s ← s ÷ SUM(s) ▷ Normalize to get the episode’s shot distribution
    γ′s ← sᵀγ′ ▷ Select the FiLM params for the episode
    β′s ← sᵀβ′
    Let SH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈S ▷ The embedded support set
    Let QH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈Q ▷ The embedded query set
    L ← (1/|QH|) Σ(h∗,y∗)∈QH − log p(y∗ | h∗, SH) ▷ Compute the episode’s loss
    θ′ ← θ′ − ε ∂L/∂θ′ ▷ Update the model via gradient descent
    γ′ ← γ′ − ε ∂L/∂γ′
    β′ ← β′ − ε ∂L/∂β′
end while
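To complement the pseudocode, here is a minimal, self-contained PyTorch sketch of one SCONE fine-tuning step. It is our illustration rather than the released code: the backbone is a stand-in linear embedder, class labels are assumed to be 0, ..., N−1, and FiLM is applied once to the embedding output instead of inside every batch normalization layer as in the actual model.

```python
import torch
import torch.nn.functional as F

MAX_SHOT, IN_DIM, EMB_DIM = 40, 32, 64

backbone = torch.nn.Linear(IN_DIM, EMB_DIM)                 # stand-in for f
gamma = torch.ones(MAX_SHOT, EMB_DIM, requires_grad=True)   # per-shot FiLM scales
beta = torch.zeros(MAX_SHOT, EMB_DIM, requires_grad=True)   # per-shot FiLM offsets
opt = torch.optim.Adam([*backbone.parameters(), gamma, beta], lr=5e-4)

def smooth_shot(k, m=0.9):
    """SMOOTH-SHOT: weight m**d at distance d from the capped shot k."""
    d = (torch.arange(MAX_SHOT) - (min(k, MAX_SHOT) - 1)).abs().float()
    return m ** d

def scone_step(sx, sy, qx, qy, shots):
    s = torch.stack([smooth_shot(k) for k in shots]).sum(0)
    s = s / s.sum()                                 # episode shot distribution
    g, b = s @ gamma, s @ beta                      # convex combination of rows
    hs, hq = backbone(sx) * g + b, backbone(qx) * g + b
    protos = torch.stack([hs[sy == c].mean(0)       # one prototype per class
                          for c in range(int(sy.max()) + 1)])
    loss = F.cross_entropy(-torch.cdist(hq, protos) ** 2, qy)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)

# one toy 2-way episode with shots (1, 3):
sx, sy = torch.randn(4, IN_DIM), torch.tensor([0, 1, 1, 1])
qx, qy = torch.randn(6, IN_DIM), torch.randint(0, 2, (6,))
scone_step(sx, sy, qx, qy, shots=[1, 3])
```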
Hypothesis testing We follow the same procedure as in (Triantafillou et al., 2020) to compute ranks for different methods that in turn determine which entries to bold in our tables. Specifically, we perform a 95% confidence interval statistical test on the difference between the mean accuracies of pairs of entries of each row. If for two entries we are not able to reject the null hypothesis that the difference between their means is 0, they will receive the same rank. For example, if model A and model B are tied for the first place according to that test, they will each receive the rank 1.5 (the average of the ranks 1 and 2). If we are able to reject that hypothesis, however, the entry with the larger mean accuracy will receive a higher rank than the other. In each row, we bold the entries that are tied for the highest rank. For convenience, in the last section of the Appendix, we show a more heavily-annotated copy of each table in the paper to make the rank computation procedure more transparent.
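As an illustration, here is one way to implement this ranking; the normal-approximation test and the greedy adjacent grouping below are our simplifications of the procedure in (Triantafillou et al., 2020), not its exact implementation:

```python
import numpy as np

def distinguishable(m1, s1, m2, s2, z=1.96):
    """95% CI test on the difference of two mean accuracies, with
    standard errors s1, s2, under a normal approximation."""
    return abs(m1 - m2) > z * np.hypot(s1, s2)

def row_ranks(means, sems):
    """Rank entries of one table row, averaging ranks over tied groups."""
    means, sems = np.asarray(means), np.asarray(sems)
    order = np.argsort(-means)            # best mean first
    ranks = np.empty(len(means))
    i = 0
    while i < len(order):
        j = i                             # grow a tie group greedily
        while j + 1 < len(order) and not distinguishable(
                means[order[j]], sems[order[j]],
                means[order[j + 1]], sems[order[j + 1]]):
            j += 1
        ranks[order[i:j + 1]] = np.arange(i, j + 1).mean() + 1
        i = j + 1
    return ranks                          # bold the entries tied for rank 1

# two methods tied for first place each receive rank 1.5:
print(row_ranks([70.1, 69.8, 62.0], [0.5, 0.5, 0.5]))   # -> [1.5 1.5 3.]
```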
ADDITIONAL RESULTS
First, we provide in Figure 5 additional plots to cover more evaluation shot settings than those shown in Figure 2 in the main paper. The setup for this is exactly the same as for Figure 2.
Next, since we observe that the ‘Best k-shot’ baseline performs well in Table 1, which reflects average performance across episodes of varying shots, we further break down its performance across different ranges of shots in Figure 6. We find that while ‘Best k-shot’ does indeed perform well for large shots, it actually performs poorly for low shots. This finding strengthens our case against this baseline: not only is it computationally expensive, requiring training multiple different models to pick the one that performs best, but it is also not as consistent as SCONE in its performance on different shot ranges.
Finally, to place our results into context, we display the results of Meta-Baseline with SCONE alongside the performance of recent work, controlling for model capacity, in Table 3. The approaches we compare against are: Classifier-Baseline (Chen et al., 2020), SUR-pf (Dvornik et al., 2020), TaskNorm (Bronskill et al., 2020) and Simple CNAPs (Bateni et al., 2020). In particular, we report the performance of the parametric family of SUR (‘SUR-pf’) instead of full SUR (which has 8x more parameters), in order to make apples-to-apples comparisons with the remaining approaches. We find that the Meta-Baseline method, when combined with SCONE, achieves state-of-the-art on Meta-Dataset in this context, according to the average rank metric.
EXPERIMENTAL DETAILS
We plan to open source our code upon publication, including all experimental details. In the meantime, we outline these details below for completeness.
Architecture We use ResNet-18 as the feature extractor for all of our experiments, following the implementation in (Triantafillou et al., 2020). For the SCONE variants, we add FiLM to all of the batch normalization layers throughout the network.
Image processing For all experiments, we use Meta-Dataset’s input pipeline to obtain images, and we follow the image processing performed in (Chen et al., 2020), which yields images of size 128 × 128. We apply standard data augmentation consisting of horizontal flipping and random cropping, followed by standardization using a commonly-used mean and standard deviation, as in (Chen et al., 2020). For episodic models, data augmentation is applied to both the support and query sets. No data augmentation is used at validation or test time.
Optimization We use ADAM with exponential learning rate decay and weight decay of 1e-8 to optimize all models in this work. We tune the initial learning rate, the decay rate, and the number of updates between each learning rate decay separately for each model presented in the paper. The initial learning rate values we considered are 0.0005 and 0.001, with a decay factor of 0.8 or 0.9 applied every 1000, 2000 or 3000 steps. We ran a variant for every combination of those values. We also tune the weight decay applied to the FiLM parameters (for SCONE variants) or the batch normalization parameters (for non-SCONE variants). We tried the values 1e-8, 1e-6 and 1e-4.
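A minimal PyTorch sketch of this optimization setup follows; the model and task loss are placeholders of our choosing, and the specific values shown are picked from the swept ranges above:

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(64, 64)                  # stand-in for the ResNet-18 backbone
film_gamma = torch.ones(64, requires_grad=True)  # hypothetical FiLM scale
film_beta = torch.zeros(64, requires_grad=True)  # hypothetical FiLM offset
film_l2 = 1e-6                                   # one of the swept penalty values

opt = torch.optim.Adam([*model.parameters(), film_gamma, film_beta],
                       lr=5e-4, weight_decay=1e-8)
# Exponential decay: multiply the learning rate by 0.9 every 2000 updates.
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=2000, gamma=0.9)

for step in range(10):                           # toy loop with random data
    x = torch.randn(8, 64)
    h = model(x) * film_gamma + film_beta
    loss = h.pow(2).mean()                       # placeholder task loss
    # Regularize gamma toward 1 and beta toward 0, as described in Section 3.
    loss = loss + film_l2 * ((film_gamma - 1).pow(2).sum()
                             + film_beta.pow(2).sum())
    opt.zero_grad(); loss.backward(); opt.step(); sched.step()
```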
SCONE hyperparameters For the SCONE variants, aside from the above hyperparameters, we additionally tune the smoothing parameter m described in the main paper that is used for training and for evaluation. We did not tune the MAX-SHOT hyperparameter mentioned in the main paper as we found that our initial choices worked reasonably. Specifically, we set it to 40 for the smaller-scale experiments where the maximum shot was 40, and to 200 for the large-scale experiments. The latter choice was performed heuristically since shots much larger than 200 are unlikely under the shot distribution induced by Meta-Dataset’s episode generator. For more information on that shot distribution, we refer the reader to the next section.
SCONE smoothing hyperparameter We tuned the value of this hyperparameter that will be used both at training and at evaluation. At training time, we considered the values {0, 0.2, 0.4, 0.6, 0.9} for Prototypical Network experiments, and we picked the variant that worked best according to the validation performance that was computed without smoothing. Once the model was trained and all of the remaining hyperparameters were tuned, we performed a final validation round to tune the evaluation-time smoothing that will be used in the chosen model. We found it beneficial to use larger values here, picking the value of 1 − 1e-6, for example, for the Prototypical Network on ImageNet. In the Meta-Baseline codebase, we trained with larger values of smoothing (the best we found was 1 − 1e-10) and did not find it beneficial to additionally smooth at evaluation time.
Model selection For each experiment, we perform early stopping according to the performance on the validation set. For the models that train on a single shot k in the smaller-scale experiments, the validation performance that we monitor for early stopping is the average query set accuracy on k-shot 5-way episodes drawn from the validation set. For the models in the small-scale experiments that train on a distribution of shots, we use the average validation performance over 5-way episodes whose shot is sampled according to the same distribution used for training the respective model. For the larger-scale Meta-Dataset experiments, we draw validation episodes only from the validation set of ImageNet for the experiments that train on ImageNet only, or from the validation sets of all datasets for the experiments that train on all datasets. In both cases, the validation episodes are drawn using Meta-Dataset’s episode generator that yields episodes of variable ways and variable shots with class imbalance. In all cases, the average validation performance is computed over 600 validation episodes and is monitored every 2K training updates. We apply exponential smoothing to the resulting validation "curve" (using the default value of 0.6 in TensorBoard). Then, we choose the update step at which the highest peak of that curve is found and we use the checkpoint corresponding to that update step for testing.
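Concretely, the checkpoint choice described above can be reproduced in a few lines; the smoothing below mirrors TensorBoard’s exponential moving average (ignoring its debiasing), and the accuracy values are made up:

```python
import numpy as np

def tensorboard_smooth(values, weight=0.6):
    """Exponential moving average as drawn by TensorBoard."""
    out, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        out.append(last)
    return np.array(out)

# validation accuracy measured every 2K updates (toy numbers):
val_acc = [0.41, 0.44, 0.47, 0.46, 0.49, 0.48, 0.50, 0.49]
smoothed = tensorboard_smooth(val_acc)
best_checkpoint_step = 2000 * (int(np.argmax(smoothed)) + 1)
```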
DISTRIBUTION OF SHOTS IN META-DATASET EPISODES
For reference, Figure 7 displays histograms of the number of shots produced by Meta-Dataset’s episode sampling algorithm. These are computed by sampling 600 episodes per dataset for each of the training, validation and test splits of Meta-Dataset.
TABLES WITH MORE DETAILED RANKS
In this section, we include a copy of the same tables appearing previously in the paper, but additionally annotated with per-row ranks, to make the rank computation method more transparent. These tables are Table 4, Table 5 and Table 6. | 1. What is the main contribution of the paper regarding few-shot meta learning approaches?
2. What are the strengths of the proposed method, particularly in its ability to generalize across different shots?
3. What are the weaknesses of the paper, especially regarding its limitations in addressing the root cause of overfitting to the number of shots?
4. How does the proposed method compare to other approaches such as Cao et.al. and prototypical networks in terms of its ability to handle episodes of any number of shots?
5. What additional experiments or comparisons could be included in the paper to further demonstrate its effectiveness and generalizability? | Review | Review
Summary
The paper proposes a solution to few-shot meta learning approaches overfitting to the number of shots they are finetuned on, and not generalizing as well as expected to novel shots. In order to mitigate this problem, the paper suggests a parameterization of the meta learner which also conditions on the number of shots the model trains on. In practice, this is done via manipulation of the batch normalization parameters based on the number of shots. With this conditioning, the paper shows that the models perform better across a range of shots that they are evaluated on, compared to various sensible baselines.
Strengths
The paper does tackle an important problem: a meta-learner should not be overfitting to the number of shots
The paper has very thorough experiments and numerous visualizations (e.g. of the learned batch-norm embeddings) that help build better intuitions on what the method does
The paper is well written and easy to understand
Weaknesses
Important points for the rebuttal are marked with (*)
(*) The problem setting of changing the number of shots during meta-training is still somewhat artificial, since one could still test the model on a different number of shots during the query-set evaluation. I wonder if, in light of this issue, conditioning on the number of shots is getting at the root of the issue. One would rather want to have methods like Cao et al., which aim to make the model invariant (or insensitive) to the number of shots, as opposed to explicitly conditioning on them. It would be great if the authors could clarify this issue.
(*) Related to the above issue, a disadvantage of the proposed method over prototypical networks is that ProtoNets can be applied to episodes of any number of shots, even those unseen during training. Similarly, Cao et al. can also be applied to any number of shots, since it performs a projection of the feature embedding. However, the conditioning approach fails in this case, which is somewhat unrealistic. I understand that there is a smoothing step in the shot distribution which could partially help mitigate that issue, but it does not seem satisfying. How do the authors think this issue could be mitigated, and how should one view this as a potential shortcoming of the approach in light of the goals of the paper?
(*) In general, it seems appropriate for the paper to have a comparison to Cao et al., who propose a solution to shot-overfitting in the context of prototypical networks. While I understand that the proposed approach is more general than that of Cao et al., that is not sufficiently demonstrated via the experiments, which still focus on prototypical networks. This is not really an issue in itself, but it does warrant a comparison to Cao et al.
(*) It would also be really useful to include a comparison to prototypical networks that sum the features instead of averaging them to compute the prototype, which would preserve information on how many shots there are in the support set. In general, it would also serve as a more stringent test of whether shot conditioning in the way the paper does it is needed when the method is able to deal with or recognize when the number of shots has changed, which the prototypical network approach is not able to do.
It would also help to guide discussion on how shot conditioning might look for different algorithms. For prototypical networks FiLM was used; for something like MAML, one could imagine unrolling for a different number of steps based on how many shots there are, to prevent overfitting. A discussion on this would help improve the paper.
(*) It would be useful to compare against a single model which trains on “k-shots” and is evaluated on a mixture of k-shots (as in Meta-Dataset), searching explicitly for the best value of “k” from [1, MAX_SHOTS]. Does the proposed approach still outperform that “best” (over k) model? That seems to be an important baseline comparison.
**POST-REBUTTAL UPDATE**
I went through the other reviews and the author rebuttals. I thank the authors for thoroughly addressing all of my concerns and running some baseline experiments, which I think are quite interesting and add to the completeness of the paper, including a comparison to EST and an optimal k-shot baseline. I think the idea is simple and interesting, and the paper now has enough experimental comparisons to be a valuable addition to the conference. I have updated my score accordingly.
ICLR | Title
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Abstract
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (“pre-training”) followed by episodic finetuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the “shot” setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
1 INTRODUCTION
Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’ learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or “shots”, of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.
A simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this “pre-training” phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode’s small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.
However, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong “pre-training” baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task. Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes
have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?
As a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that a particular functionality this fine-tuning phase may indeed serve is to specialize a pre-trained model to solving tasks of a particular shot, accomplished by performing the fine-tuning on episodes of that shot. However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with the theoretical finding of Cao et al. (2020) in the context of Prototypical Networks (Snell et al., 2017), where inferior performance was reported when the shot at training time did not match the shot at test time.
Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shot-specialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.
In what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment.
2 BACKGROUND
Problem definition Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes Ctest and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N -way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes Ctrain, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.
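For concreteness, a self-contained sketch of this episode-generation process with balanced, fixed shots follows (names are ours; benchmark generators such as Meta-Dataset’s additionally vary the way and shot across and within episodes):

```python
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=10, seed=None):
    """Sample an N-way, k-shot episode from {class_id: [examples]}."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(dataset), n_way)
    support, query = [], []
    for label, c in enumerate(classes):
        ex = rng.sample(dataset[c], k_shot + n_query)
        support += [(x, label) for x in ex[:k_shot]]
        query += [(x, label) for x in ex[k_shot:]]
    return support, query

toy = {c: [f"img_{c}_{i}" for i in range(50)] for c in range(20)}
support, query = sample_episode(toy, n_way=5, k_shot=1)
```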
Episodic training Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from Ctrain this time. In other words, the model is trained to minimize a loss of the form:
$$\mathbb{E}_{\mathcal{S},\mathcal{Q} \sim P^{N,k}_{\text{train}}} \left[ \frac{1}{|\mathcal{Q}|} \sum_{(x^*, y^*) \in \mathcal{Q}} -\log p_\theta(y^* \mid x^*, \mathcal{S}) \right] \quad (1)$$

where $\mathcal{S}$ and $\mathcal{Q}$ are support and query sets sampled from the distribution $P^{N,k}_{\text{train}}$ of $N$-way, $k$-shot training episodes induced by $C_{\text{train}}$, and $\theta$ represents the model’s parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their “inference algorithm”, i.e. the manner in which $p_\theta(y^* \mid x^*, \mathcal{S})$ is computed to classify query examples based on the support set.
Prototypical Networks Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype $\phi_c$ for each class $c$ in an episode as

$$\phi_c = \frac{1}{|\mathcal{S}_c|} \sum_{x \in \mathcal{S}_c} f_\theta(x), \quad (2)$$

where $f$ is an embedding function parametrized by $\theta$ and $\mathcal{S}_c$ represents the set of support examples belonging to class $c$, and classifies a given query example as

$$p(y^* = c \mid x^*, \mathcal{S}) = \frac{\exp(-\| f_\theta(x^*) - \phi_c \|_2^2)}{\sum_{c'} \exp(-\| f_\theta(x^*) - \phi_{c'} \|_2^2)}. \quad (3)$$
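A direct PyTorch transcription of Equations (2) and (3) follows, assuming integer class labels and any embedding network $f$:

```python
import torch

def prototypical_predict(f, support_x, support_y, query_x):
    """Class prototypes (Eq. 2), then a softmax over negative squared
    Euclidean distances between embedded queries and prototypes (Eq. 3)."""
    hs, hq = f(support_x), f(query_x)
    protos = torch.stack([hs[support_y == c].mean(0)       # Eq. (2)
                          for c in support_y.unique()])
    logits = -torch.cdist(hq, protos).pow(2)               # Eq. (3), pre-softmax
    return logits.softmax(dim=-1)

# toy usage: a random embedder standing in for f_theta
f = torch.nn.Linear(16, 8)
sx, sy = torch.randn(10, 16), torch.arange(5).repeat(2)    # 5-way, 2-shot
probs = prototypical_predict(f, sx, sy, torch.randn(4, 16))
```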
3 SHOT CONDITIONAL EPISODIC (SCONE) TRAINING
In this section we introduce Shot CONditional Episodic (SCONE) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.
Training objective Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from $P^{N,k}_{\text{train}}$ and compute a prediction $p_\theta(y^* \mid x^*, \mathcal{S})$ for each query example $x^*$. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to $\theta$ into the inference algorithm. In this work we concern ourselves with models that use an embedding function $f_\theta$ to obtain a representation for the support and query examples of each episode, on top of which the inference algorithm is applied. In Prototypical Networks, for instance, $f_\theta$ contains all of the model’s learnable parameters.
SCONE trains on episodes of varying shots and conditions the model on each episode’s shot distribution (Figure 1) by minimizing
$$\mathbb{E}_{k \sim P_k} \, \mathbb{E}_{\mathcal{S},\mathcal{Q} \sim P^{N,k}_{\text{train}}} \left[ \frac{1}{|\mathcal{Q}|} \sum_{(x^*, y^*) \in \mathcal{Q}} -\log p_{\theta_k}(y^* \mid x^*, \mathcal{S}) \right], \quad (4)$$

where $P_k$ is the distribution over shots at training time and $\theta_k$ depends on an episode’s sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.
Conditioning mechanism Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode’s number of shots):
$$\text{FiLM}(x) = \gamma(k) \odot x + \beta(k). \quad (5)$$
The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode’s shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.
More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers in the network’s architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of $\beta$ (regularizing the offset towards 0) and the L2 norm of $\gamma - 1$ (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.
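As a sketch of how such per-shot FiLM parameters might wrap an existing batch normalization layer in PyTorch (the class name and exact wiring are our illustration, not the released code):

```python
import torch
import torch.nn as nn

class ShotFiLMBatchNorm(nn.Module):
    """Batch norm whose affine parameters are chosen per episode shot.

    Keeps MAX-SHOT copies of (gamma, beta), all initialized from the
    pre-trained layer's scale and offset, as described above."""

    def __init__(self, pretrained_bn: nn.BatchNorm2d, max_shot: int):
        super().__init__()
        c = pretrained_bn.num_features
        self.bn = nn.BatchNorm2d(c, affine=False)    # keeps normalization only
        self.bn.running_mean = pretrained_bn.running_mean.clone()
        self.bn.running_var = pretrained_bn.running_var.clone()
        self.gamma = nn.Parameter(
            pretrained_bn.weight.detach().expand(max_shot, c).clone())
        self.beta = nn.Parameter(
            pretrained_bn.bias.detach().expand(max_shot, c).clone())

    def forward(self, x, shot_dist):
        # shot_dist: length-max_shot episode shot distribution, summing to 1
        g = (shot_dist @ self.gamma).view(1, -1, 1, 1)   # s^T gamma
        b = (shot_dist @ self.beta).view(1, -1, 1, 1)
        return self.bn(x) * g + b

layer = ShotFiLMBatchNorm(nn.BatchNorm2d(8), max_shot=40)
out = layer(torch.randn(2, 8, 5, 5), torch.eye(40)[4])   # a pure 5-shot episode
```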
Handling class-imbalanced episodes SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.
Concretely, the episode’s “shot distribution” $s$ (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode’s FiLM parameters. This can be thought of as an embedding lookup $s^\top F$ in a matrix $F$ of FiLM parameters using a shot distribution $s$.
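In code, this convex combination is one line of linear algebra; a small NumPy sketch (ours, with a hypothetical FiLM table) for an imbalanced episode:

```python
import numpy as np

def episode_film_params(shots, film_table, max_shot):
    """Average one-hot shots over an episode's classes, then look up s^T F."""
    s = np.zeros(max_shot)
    for k in shots:
        s[min(k, max_shot) - 1] += 1.0 / len(shots)
    return s @ film_table            # film_table: (max_shot, n_film_params)

F_table = np.random.randn(200, 64)   # hypothetical FiLM parameter matrix
params = episode_film_params([1, 5, 5, 20], F_table, max_shot=200)
```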
Smoothing the shot distribution We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTH-SHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer) and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being ‘on’. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value m², and so on, with entries further away from s receiving exponentially-decaying values.
4 RELATED WORK
Few-shot classification A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.
Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set.
Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode’s classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.
Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.
Understanding episodic learning Our work is part of a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempt to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020) and Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discuss batch normalization in episodic learning, drawing parallels from its use in non-episodic learning, and Chen et al. (2020) contrast episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigate the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.
Feature-wise conditioning Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. Shu et al. (2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, Sun et al. (2019) use feature-wise transformations as a means of transfer to new tasks. Oreshkin et al. (2018), Requeima et al. (2019) and Bateni et al. (2019) use FiLM to condition metric learners’ backbones on the support set, while Dvornik et al. (2020) use it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning (Liu et al., 2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this, and thus the ‘shot’ information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).
5 EXPERIMENTS
5.1 EXPLORING THE ROLE OF ‘SHOTS’ DURING EPISODIC FINE-TUNING
In this subsection, we examine the effect of the ‘shot’ that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model’s ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).
Experimental setup We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round. We ran the following
variants of episodic fine-tuning: exclusively on 1-shot episodes (‘Fine-tune on 1-shot’), exclusively on 5-shot episodes (‘Fine-tune on 5-shot’), on episodes whose shot is drawn uniformly from the range [1, 40] (‘Fine-tune on all shots’), and on episodes with that same shot distribution but using SCONE (‘SCONE Fine-tune on all shots’), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider ‘Fine-tune on best k-shot’, an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1–40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.
As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE’s MAX-SHOT hyperparameter to 40 for this experiment.
We also compare to EST (Cao et al., 2020), which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter ρ. We applied EST on top of the embeddings of ‘Fine-tune on all shots’ and we tuned ρ and the hyperparameter d controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1–40) are substantially different from those used in the original EST paper: d = 480 and ρ = 5e-8 (versus the original d = 120 and ρ = 1e-3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.
In all cases, we fix the ‘way’ to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is seen neither during pre-training nor during episodic fine-tuning, on 5-way episodes of different shot settings.
Findings We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, the 1-shot accuracies show that ‘Fine-tune on 1-shot’ surpasses the performance of all other variants on 1-shot test episodes, with analogous findings in the 5-shot and 40-shot accuracies for ‘Fine-tune on 5-shot’ and ‘Fine-tune on 40-shot’, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks (‘Fine-tune on 40-shot’) performs very poorly on 1-shot test tasks, and vice-versa. We also note that the ‘Fine-tune on best k-shot’ model does not suffice to perform well in all settings either: since k = 15 in that case, it performs very poorly on 1-shot episodes, for instance.
In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is
episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that ‘Fine-tune on all shots’ does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.
Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings (‘SCONE Fine-tune on all shots’ vs ‘Fine-tune on all shots’). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting, which likewise strives for shot resiliency but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE.
5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET
In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset comprises ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes and, within a particular episode, varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section: it is a long-tailed distribution under which small and medium shots are more likely, while very large shots (e.g. >400) are possible but occur very infrequently. We include histograms of the shot distributions of Meta-Dataset’s training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and on imbalanced episodes.
Prototypical Network on ImageNet For our first set of experiments on Meta-Dataset, we explore different strategies for episodically fine-tuning the pre-trained classifier’s embedding weights using Prototypical Networks. For this, we use Meta-Dataset’s sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning (‘Standard’) to SCONE episodic fine-tuning (‘SCONE’). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters (‘L2 BN’). We also include an ablation of our method that does not use any smoothing of the shot distribution. We additionally compare to EST, where we computed the EST transformation on the ‘L2 BN’ variant instead of the ‘Standard’ Prototypical Network variant, since that worked best. We tuned EST’s hyperparameters extensively, as described in Section 5.1, this time model-selecting on the validation sets of all datasets of Meta-Dataset. The values that worked best in this case are d = 480 and ρ = 5e−9. As noted in Section 5.1, these are substantially different from those used in the original EST paper, likely due to our deeper backbones and the significantly broader range of shots explored. Finally, we ran the same ‘Fine-tune on best k-shot’ baseline described in Section 5.1; in this case we found that the best k was 20.
We evaluate these models on the held-out set of ImageNet classes as well as on the remaining 9 datasets. We set SCONE’s MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE’s hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open-source our code upon publication.
Meta-Baseline on all datasets Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase (‘Classifier-Baseline’) followed by an episodic fine-tuning phase (‘Meta-Baseline’). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the final classification layer and applying a cosine-similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline’s pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.
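For reference, the cosine nearest-centroid inference used by Classifier-Baseline and Meta-Baseline can be sketched in a few lines (ours; the temperature value is illustrative, not taken from the paper).

```python
import numpy as np

def cosine_centroid_logits(query_embeddings, centroids, tau=10.0):
    """Cosine-similarity nearest-centroid logits, Meta-Baseline-style.

    query_embeddings: (q, D) embedded queries; centroids: (N, D) class means.
    tau is a softmax temperature (illustrative value).
    """
    q = query_embeddings / np.linalg.norm(query_embeddings, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    return tau * (q @ c.T)  # higher logit = more similar to that centroid
```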
When training on all datasets of Meta-Dataset, the authors obtained strong results using their Classifier-Baseline, which in this case is trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of them).
Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline’s strong pre-trained solution: we froze the embedding weights to that powerful representation and optimized only SCONE’s FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using Meta-Baseline’s nearest-centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, optimizing only the batch normalization parameters while keeping the remainder of the embedding weights frozen (‘Control’). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
Findings The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, which show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance, described in the Appendix, to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of the batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE, but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE. We also observe that EST is competitive in this setting, only slightly worse than SCONE, though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful (‘Control’), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE, these results demonstrate its effectiveness on different shot distributions and with different backbones.
FiLM parameter visualization Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.
Example smoothed shot distribution To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e−6, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice, therefore, we are activating ‘blocks’ of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.
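To make this concrete, the following Python sketch (ours) mirrors the SMOOTH-SHOT procedure of Algorithm 1 in the Appendix and computes the smoothed shot distribution for this hypothetical episode.

```python
import numpy as np

def smooth_shot(k, m, max_shot):
    """Smoothed one-hot encoding of a single class's shot k."""
    k = min(k, max_shot) - 1           # cap, then shift to [0, max_shot - 1]
    s = np.zeros(max_shot)
    s[k] = 1.0
    weight = m
    for dist in range(1, max_shot):    # spread exponentially-decaying mass
        if k - dist >= 0:
            s[k - dist] = weight
        if k + dist < max_shot:
            s[k + dist] = weight
        weight = weight * weight       # m, m^2, m^4, ... as in Algorithm 1
    return s

# Hypothetical 4-way episode with shots 1, 10, 23 and 103.
m, max_shot = 1 - 1e-6, 200
s = sum(smooth_shot(k, m, max_shot) for k in (1, 10, 23, 103))
s /= s.sum()                           # the episode's shot distribution
```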
6 CONCLUSION
In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shot-conditioning mechanism is beneficial both in smaller-scale experiments and in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism for determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?
A APPENDIX
SCONE’S TRAINING ALGORITHM IN MORE DETAIL
For clarity, we provide pseudocode for SCONE’s training algorithm, including our procedure for shot smoothing, in Algorithm 1. We will also release our code upon publication for reproducibility.
Algorithm 1 SCONE training
Input: Distribution of training episodes Ptrain, pre-trained embedding weights θ, pre-trained batch-norm weights γ and β, embedding function f, learning rate η (a float), smoothing coefficient m (a float in the range [0, 1]) and maximum supported shot MAX-SHOT (an int).
Output: Fine-tuned embedding weights θ′ and FiLM parameters F = {γ′, β′}.

procedure SMOOTH-SHOT(s, m, MAX-SHOT)
    if s > MAX-SHOT then
        s ← MAX-SHOT                              ▷ Cap s to the max supported shot
    end if
    s ← s − 1                                     ▷ So that s is in the range [0, MAX-SHOT − 1]
    s̃ ← ONE-HOT(s, DEPTH=MAX-SHOT)                ▷ Init the smoothed shot
    for 0 ≤ j ≤ MAX-SHOT do
        l ← s − j − 1                             ▷ The index j slots to the left of s
        l ← ONE-HOT(l, DEPTH=MAX-SHOT) ∗ m        ▷ Outputs the zero vector if l < 0
        r ← s + j + 1                             ▷ The index j slots to the right of s
        r ← ONE-HOT(r, DEPTH=MAX-SHOT) ∗ m        ▷ Outputs the zero vector if r ≥ MAX-SHOT
        s̃ ← s̃ + l + r
        m ← m²                                    ▷ Adjust the next iteration’s smoothing
    end for
    return s̃
end procedure

θ′ ← θ                                            ▷ Init the embedding weights from the pre-trained embeddings
for 1 ≤ k ≤ MAX-SHOT do                           ▷ Init the FiLM params from the pre-trained batch-norm params
    γ′(k) ← γ
    β′(k) ← β
end for
while validation accuracy is improving do
    Sample a training episode with support set S and query set Q
    Let k1, . . . , kN be the shots of the episode’s classes
    s ← ZEROS(MAX-SHOT)                           ▷ Init the (unnormalized) shot distribution
    for each class i do
        si ← SMOOTH-SHOT(ki, m, MAX-SHOT)         ▷ Smooth the one-hot shot of class i
        s ← s + si
    end for
    s ← s ÷ SUM(s)                                ▷ Normalize to get the episode’s shot distribution
    γ′s ← sᵀγ′                                    ▷ Select the FiLM params for the episode
    β′s ← sᵀβ′
    Let SH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈S      ▷ The embedded support set
    Let QH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈Q      ▷ The embedded query set
    L ← (1/|QH|) Σ(h∗,y∗)∈QH − log p(y∗ | h∗, SH)  ▷ Compute the episode’s loss
    θ′ ← θ′ − η ∂L/∂θ′                            ▷ Update the model via gradient descent
    γ′ ← γ′ − η ∂L/∂γ′
    β′ ← β′ − η ∂L/∂β′
end while
Hypothesis testing We follow the same procedure as in (Triantafillou et al., 2020) to compute ranks for different methods that in turn determine which entries to bold in our tables. Specifically, we perform a 95% confidence interval statistical test on the difference between the mean accuracies of pairs of entries of each row. If for two entries we are not able to reject the null hypothesis that the difference between their means is 0, they will receive the same rank. For example, if model A and model B are tied for the first place according to that test, they will each receive the rank 1.5 (the average of the ranks 1 and 2). If we are able to reject that hypothesis, however, the entry with the larger mean accuracy will receive a higher rank than the other. In each row, we bold the entries that are tied for the highest rank. For convenience, in the last section of the Appendix, we show a more heavily-annotated copy of each table in the paper to make the rank computation procedure more transparent.
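For illustration, here is a small Python sketch (ours, not from the paper’s code) of this tie-aware rank computation; it assumes each method is summarized by a mean accuracy and a 95% confidence half-width, and compares pairs of means with a standard z-test.

```python
import numpy as np

def ranks_with_ties(means, halfwidths):
    """Ranks per the 95%-CI tie-aware scheme: tied entries share the
    average of the ranks they span; a larger mean gets a numerically
    lower (better) rank when the difference is significant."""
    n = len(means)
    ses = np.asarray(halfwidths) / 1.96  # standard errors from 95% CIs
    better = np.zeros(n)  # how many methods are significantly better
    tied = np.ones(n)     # tie-group size for each entry (incl. itself)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            z = (means[j] - means[i]) / np.sqrt(ses[i]**2 + ses[j]**2)
            if z > 1.96:
                better[i] += 1
            elif abs(z) <= 1.96:
                tied[i] += 1
    # Rank = average of the positions occupied by the tie group.
    return better + (tied + 1) / 2

means = [65.2, 64.9, 61.0]
halfwidths = [0.9, 1.0, 0.8]
print(ranks_with_ties(means, halfwidths))  # [1.5, 1.5, 3.0]
```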
ADDITIONAL RESULTS
First, we provide in Figure 5 additional plots to cover more evaluation shot settings than those shown in Figure 2 in the main paper. The setup for this is exactly the same as for Figure 2.
Next, since we observe that the ‘Best k-shot’ baseline performs well in Table 1, which reflects average performance across episodes of varying shots, we further break down its performance across different ranges of shots in Figure 6. We find that while ‘Best k-shot’ indeed performs well for large shots, it actually performs poorly for low shots. This finding strengthens our case against this baseline: not only is it computationally expensive, requiring training multiple different models to pick the one that performs best, but it is also not as consistent as SCONE in its performance across different shot ranges.
Finally, to place our results into context, we display the results of Meta-Baseline with SCONE alongside the performance of recent work, controlling for model capacity, in Table 3. The approaches we compare against are: Classifier-Baseline (Chen et al., 2020), SUR-pf (Dvornik et al., 2020), TaskNorm (Bronskill et al., 2020) and Simple CNAPs (Bateni et al., 2020). In particular, we report the performance of the parametric family of SUR (‘SUR-pf’) instead of full SUR (which has 8x more parameters), in order to make apples-to-apples comparisons with the remaining approaches. We find that the Meta-Baseline method, when combined with SCONE, achieves state-of-the-art on Meta-Dataset in this context, according to the average rank metric.
EXPERIMENTAL DETAILS
We plan to open source our code upon publication, including all experimental details. In the meantime, we outline these details below for completeness.
Architecture We use ResNet-18 as the feature extractor for all of our experiments, following the implementation in (Triantafillou et al., 2020). For the SCONE variants, we add FiLM to all of the batch normalization layers throughout the network.
Image processing For all experiments, we use Meta-Dataset’s input pipeline to obtain images, and we follow the image processing performed in (Chen et al., 2020), which yields images of size 128 × 128. We apply standard data augmentation consisting of horizontal flipping and random cropping, followed by standardization using a commonly-used mean and standard deviation, as in (Chen et al., 2020). For episodic models, data augmentation is applied to both the support and query sets. No data augmentation is used at validation or test time.
Optimization We use ADAM with exponential learning rate decay and a weight decay of 1e−8 to optimize all models in this work. We tune the initial learning rate, the decay rate, and the number of updates between each learning rate decay separately for each model presented in the paper. The initial learning rate values we considered are 0.0005 and 0.001, with a decay factor of 0.8 or 0.9 applied every 1000, 2000 or 3000 steps. We ran a variant for every combination of those values. We also tune the weight decay applied to the FiLM parameters (for SCONE variants) or the batch normalization parameters (for non-SCONE variants), trying the values 1e−8, 1e−6 and 1e−4.
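For concreteness, the sweep described above corresponds to a small grid; the sketch below (ours, not from the released code) enumerates it.

```python
from itertools import product

# Hyperparameter grid described above (values from the paper).
grid = {
    "init_lr": [0.0005, 0.001],
    "lr_decay_factor": [0.8, 0.9],
    "lr_decay_every_steps": [1000, 2000, 3000],
    "film_or_bn_weight_decay": [1e-8, 1e-6, 1e-4],
}

configs = [dict(zip(grid, values)) for values in product(*grid.values())]
print(len(configs))  # 2 * 2 * 3 * 3 = 36 variants
```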
SCONE hyperparameters For the SCONE variants, aside from the above hyperparameters, we additionally tune the smoothing parameter m described in the main paper, which is used both for training and for evaluation. We did not tune the MAX-SHOT hyperparameter mentioned in the main paper, as we found that our initial choices worked reasonably well. Specifically, we set it to 40 for the smaller-scale experiments, where the maximum shot was 40, and to 200 for the large-scale experiments. The latter choice was made heuristically, since shots much larger than 200 are unlikely under the shot distribution induced by Meta-Dataset’s episode generator. For more information on that shot distribution, we refer the reader to the next section.
SCONE smoothing hyperparameter We tuned the value of this hyperparameter to be used both at training and at evaluation time. At training time, we considered values in {0, 0.2, 0.4, 0.6, 0.9} for the Prototypical Network experiments, and we picked the variant that worked best according to the validation performance, which was computed without smoothing. Once the model was trained and all of the remaining hyperparameters were tuned, we performed a final validation round to tune the evaluation-time smoothing to be used in the chosen model. We found it beneficial to use larger values here, for example picking the value 1 − 1e−6 for the Prototypical Network on ImageNet. In the Meta-Baseline codebase, we trained with larger values of smoothing (the best we found was 1 − 1e−10) and did not find it beneficial to additionally smooth at evaluation time.
Model selection For each experiment, we perform early stopping according to the performance on the validation set. For the models that train on a single shot k in the smaller-scale experiments, the validation performance that we monitor for early stopping is the average query set accuracy on k-shot 5-way episodes drawn from the validation set. For the models in the small-scale experiments that train on a distribution of shots, we use the average validation performance over 5-way episodes whose shot is sampled according to the same distribution used for training the respective model. For the larger-scale Meta-Dataset experiments, we draw validation episodes only from the validation set of ImageNet for the experiments that train on ImageNet only, or from the validation sets of all datasets for the experiments that train on all datasets. In both cases, the validation episodes are drawn using Meta-Dataset’s episode generator that yields episodes of variable ways and variable shots with class imbalance. In all cases, the average validation performance is computed over 600 validation episodes and is monitored every 2K training updates. We apply exponential smoothing to the resulting validation "curve" (using the default value of 0.6 in TensorBoard). Then, we choose the update step at which the highest peak of that curve is found and we use the checkpoint corresponding to that update step for testing.
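The checkpoint-selection rule above can be summarized in a few lines of Python (a sketch of ours; it assumes TensorBoard-style exponential smoothing with coefficient 0.6).

```python
def pick_checkpoint(val_accuracies, steps, smoothing=0.6):
    """Return the training step whose smoothed validation accuracy peaks.

    val_accuracies: accuracies measured every 2K updates.
    steps: the corresponding update steps.
    """
    smoothed, last = [], val_accuracies[0]
    for acc in val_accuracies:
        last = smoothing * last + (1 - smoothing) * acc  # TensorBoard-style EMA
        smoothed.append(last)
    best = max(range(len(smoothed)), key=smoothed.__getitem__)
    return steps[best]

# Example: accuracies logged every 2K steps.
accs = [0.52, 0.58, 0.61, 0.60, 0.63, 0.62]
steps = [2000 * (i + 1) for i in range(len(accs))]
print(pick_checkpoint(accs, steps))
```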
DISTRIBUTION OF SHOTS IN META-DATASET EPISODES
For reference, Figure 7 displays histograms of the number of shots produced by Meta-Dataset’s episode sampling algorithm. These are computed by sampling 600 episodes per dataset for each of the training, validation and test splits of Meta-Dataset.
TABLES WITH MORE DETAILED RANKS
In this section, we include a copy of the same tables appearing previously in the paper, but additionally annotated with per-row ranks, to make the rank computation method more transparent. These tables are Table 4, Table 5 and Table 6. | 1. What is the focus and contribution of the paper on few-shot learning?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. What are the weaknesses of the paper, especially regarding the smoothing of the shot distribution and the lack of ablation studies?
4. Do you have any concerns about the method's performance, such as the possibility of finding a single optimal K-shot that works well across different scenarios?
5. What are your suggestions for improving the paper's evaluations and comparisons with other works?
6. Can you explain the UMAP projection and provide more context for understanding Figures 3 and 4?
7. Are there any minor issues or inconsistencies in the presentation of the paper's content? | Review | Review
Summary: This paper proposes a shot-conditioned model that specializes a pre-trained few-shot learning model to a wide spectrum of shots. The proposed approach is simple but effective: during the episodic fine-tuning stage, it trains a network conditioned on the number of shots K of the support set, with FiLM as the conditioning mechanism.
Pros:
This paper is clearly written, well-motivated, and thoroughly evaluated. It is an enjoyable read.
The proposed model and training algorithm are simple but effective.
The experiments are well designed: they first verify that shot-conditioned few-shot learners can achieve relatively good performance on different Ks, and then perform a large-scale evaluation.
Cons:
The section on smoothing the shot distribution is overly simplified. It is very hard to understand Algorithm 1 given the limited details presented in the method section. I would suggest explaining the smooth-shot procedure in detail in the paper.
Lack of ablation studies. There are two key designs in the few-shot learner presented in this paper that do not have detailed ablation study results. The first is whether to use the convex combination of FiLM parameters weighted by the shot distribution s. The second is whether to use smoothing on the shot distribution. How much do these two designs contribute to the model's final performance?
In Figure 2, it seems that the method trained with 5-shot has very good performance in all three scenarios presented. Would it be possible that such a good K can always be found, so that we do not necessarily need this K-conditioned few-shot learner? For instance, just always train with 5-shot and evaluate on different shots.
Instead of looking at the results for a few instances of K, it would be nice to have a comprehensive evaluation over different Ks, for instance evaluating all Ks from 1 to 40 and computing the average accuracy of each model. To reduce the computation overhead, one could also evaluate Ks in {1, 10, 20, 30, 40}.
A comparison with state-of-the-art methods is missing for the Meta-Dataset experiments.
What is a UMAP projection? The content of the experiments is not self-contained in this respect. How should we read Figure 3?
Figure 4 is also not straightforward to understand. What is the value on the y-axis, and what does it mean?
Minor:
Inconsistent formats for the average performances in Table 1. It seems that the results of the first two columns are badly formatted.
ICLR | Title
Learning Flexible Classifiers with Shot-CONditional Episodic (SCONE) Training
Abstract
Early few-shot classification work advocates for episodic training, i.e. training over learning episodes each posing a few-shot classification task. However, the role of this training regime remains poorly understood, and its usefulness is still debated. Standard classification training methods (“pre-training”) followed by episodic finetuning have recently achieved strong results. This work aims to understand the role of this episodic fine-tuning phase through an exploration of the effect of the “shot” setting (number of examples per class) that is used during fine-tuning. We discover that fine-tuning on episodes of a particular shot can specialize the pre-trained model to solving episodes of that shot at the expense of performance on other shots, in agreement with a trade-off recently observed in the context of end-to-end episodic training. To amend this, we propose a shot-conditional form of episodic fine-tuning, inspired from recent work that trains a single model on a distribution of losses. Our investigation shows that this improves overall performance, without suffering disproportionately on any shot. We also examine the usefulness of this approach on the large-scale Meta-Dataset benchmark where test episodes exhibit varying shots and imbalanced classes. We find that our flexible model improves performance in that challenging environment.
1 INTRODUCTION
Few-shot classification is the problem of learning a classifier using only a few examples. Specifically, the aim is to utilize a training dataset towards obtaining a flexible model that has the ability to ‘quickly’ learn about new classes from few examples. Success is evaluated on a number of test episodes, each posing a classification task between previously-unseen test classes. In each such episode, we are given a few examples, or “shots”, of each new class that can be used to adapt this model to the task at hand, and the objective is to correctly classify a held-out set of examples of the new classes.
A simple approach to this problem is to learn a classifier over the training classes, parameterized as a neural network feature extractor followed by a classification layer. While the classification layer is not useful at test time due to the class shift, the embedding weights that are learned during this “pre-training” phase evidently constitute a strong representation that can be used to tackle test tasks when paired with a simple “inference algorithm” (e.g. nearest-neighbour, logistic regression) to make predictions for each example in the test episode given the episode’s small training set. Alternatively, early influential works on few-shot classification (Vinyals et al., 2016) advocate for episodic training, a regime where the training objective is expressed in terms of performance on a number of training episodes of the same structure as the test episodes, but with the classes sampled from the training set. It was hypothesized that this episodic approach captures a more appropriate inductive bias for the problem of few-shot classification and would thus lead to better generalization.
However, there is an ongoing debate about whether episodic training is in fact required for obtaining the best few-shot classification performance. Notably, recent work (Chen et al., 2019; Dhillon et al., 2020) proposed strong “pre-training” baselines that leverage common best practices for supervised training (e.g. normalization schemes, data augmentation) to obtain a powerful representation that works well for this task. Interestingly, other recent work combines the pre-training of a single classifier with episodic fine-tuning by removing the classification head and continuing to train the embedding network using the episodic inference algorithm that will be applied at test time (Triantafillou et al., 2020; Chen et al., 2020). The success of this hybrid approach suggests that perhaps the two regimes
have complementary strengths, but the role of this episodic fine-tuning is poorly understood: what is the nature of the modification it induces into the pre-trained solution? Under which conditions is it required in order to achieve the best performance?
As a step towards answering those questions, we investigate the effect of the shot used during episodic fine-tuning on the resulting model’s performance on test tasks of a range of shots. We are particularly interested in understanding whether the shot of the training episodes constitutes a source of information that the model can leverage to improve its few-shot classification performance on episodes of that shot at test time. Our analysis reveals that indeed a particular functionality that this fine-tuning phase may serve is to specialize a pre-trained model to solving tasks of a particular shot; accomplished by performing the fine-tuning on episodes of that shot. However, perhaps unsurprisingly, we find that specializing to a given shot comes at the expense of hurting performance for other shots, in agreement with (Cao et al., 2020)’s theoretical finding in the context of Prototypical Networks (Snell et al., 2017) where inferior performance was reported when the shot at training time did not match the shot at test time.
Given those trade-offs, how can our newfound understanding of episodic fine-tuning as shot-specialization help us in practice? It is unrealistic to assume that we will always have the same number of labeled examples for every new class we hope to learn at test time, so we are interested in approaches that operate well on tasks of a range of shots. However, it is impractical to fine-tune a separate episodic model for every shot, and intuitively that seems wasteful as we expect that tasks of similar shots should require similar models. Motivated by this, we propose to train a single shot-conditional model for specializing the pre-trained solution to a wide spectrum of shots without suffering trade-offs. This leads to a compact but flexible model that can be conditioned to be made appropriate for the shot appearing in each test episode.
In what follows we provide some background on few-shot classification and episodic models and then introduce our proposed shot-conditioning approach and related work. We then present our experimental analysis on the effect of the shot chosen for episodic fine-tuning, and we observe that our shot-conditional training approach is beneficial for obtaining a general flexible model that does not suffer the trade-offs inherent in naively specializing to any particular shot. Finally, we experiment with our proposed shot-conditional approach in the large-scale Meta-Dataset benchmark for few-shot classification, and demonstrate its effectiveness in that challenging environment.
2 BACKGROUND
Problem definition Few-shot classification aims to classify test examples of unseen classes from a small labeled training set. The standard evaluation procedure involves sampling classification episodes by picking N classes at random from a test set of classes Ctest and sampling two disjoint sets of examples from the N chosen classes: a support set (or training set) of k labeled examples per class, and a query set (or test set) of unlabeled examples, forming N -way, k-shot episodes. The model is allowed to use the support set, in addition to knowledge acquired while training on a disjoint set of classes Ctrain, to make a prediction for examples in the query set, and is evaluated on its query set accuracy averaged over multiple test episodes.
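To make the episode structure concrete, here is a short Python sketch (ours) of a class-balanced N-way, k-shot episode sampler over a labeled pool of examples.

```python
import random
from collections import defaultdict

def sample_episode(pool, num_way=5, num_shot=1, num_query=10):
    """Sample an N-way, k-shot episode from a labeled pool.

    pool: list of (example, class_label) pairs; each sampled class must
    have at least num_shot + num_query examples.
    Returns disjoint support and query sets over num_way sampled classes.
    """
    by_class = defaultdict(list)
    for example, label in pool:
        by_class[label].append(example)
    classes = random.sample(sorted(by_class), num_way)
    support, query = [], []
    for label in classes:
        examples = random.sample(by_class[label], num_shot + num_query)
        support += [(x, label) for x in examples[:num_shot]]
        query += [(x, label) for x in examples[num_shot:]]
    return support, query
```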
Episodic training Early few-shot classification approaches (Vinyals et al., 2016) operate under the assumption that obtaining a model capable of few-shot classification requires training it on (mini-batches of) learning episodes, instead of (mini-batches of) individual examples as in standard supervised learning. These learning episodes are sampled in the same way as described above for test episodes, but with classes sampled from Ctrain this time. In other words, the model is trained to minimize a loss of the form:
\[
\mathbb{E}_{S,Q\sim P^{N,k}_{\mathrm{train}}}\left[\frac{1}{|Q|}\sum_{(x^*,y^*)\in Q} -\log p_\theta(y^*\mid x^*,S)\right] \tag{1}
\]
where S and Q are support and query sets sampled from the distribution P^{N,k}_{train} of N-way, k-shot training episodes induced by Ctrain, and θ represents the model’s parameters. This training regime is often characterized as meta-learning or learning to learn, i.e. learning over many episodes how to learn within an episode (from few labeled examples). Episodic models differ by their “inference algorithm”, i.e. the manner in which pθ(y∗ | x∗, S) is computed to classify query examples based on the support set.
Prototypical Networks Prototypical Networks (Snell et al., 2017) is a simple but effective episodic model which constructs a prototype φc for each class c in an episode as
\[
\phi_c = \frac{1}{|S_c|}\sum_{x\in S_c} f_\theta(x), \tag{2}
\]
where f is an embedding function parametrized by θ and Sc represents the set of support examples belonging to class c, and classifies a given query example as
\[
p(y^* = c \mid x^*, S) = \frac{\exp(-\lVert x^* - \phi_c\rVert_2^2)}{\sum_{c'}\exp(-\lVert x^* - \phi_{c'}\rVert_2^2)}. \tag{3}
\]
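Equations 1–3 translate directly into a few lines of NumPy; the sketch below (ours) assumes the embeddings have already been computed by fθ.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, num_classes):
    """Eq. 2: class prototypes as mean embeddings of the support set."""
    return np.stack([
        support_embeddings[support_labels == c].mean(axis=0)
        for c in range(num_classes)
    ])

def classify(query_embeddings, protos):
    """Eq. 3: softmax over negative squared Euclidean distances."""
    # dists[i, c] = ||q_i - phi_c||^2
    dists = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -dists
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=1, keepdims=True)  # p(y* = c | x*, S)

def episode_loss(query_embeddings, query_labels, protos):
    """Inner term of Eq. 1: mean query cross-entropy for one episode."""
    probs = classify(query_embeddings, protos)
    return -np.log(probs[np.arange(len(query_labels)), query_labels]).mean()
```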
3 SHOT CONDITIONAL EPISODIC (SCONE) TRAINING
In this section we introduce Shot CONditional Episodic (SCONE) training for the purpose of specializing a strong pre-trained model to solving few-shot classification tasks of a range of different shots, without suffering disproportionately for any shot.
Training objective Training episodically involves minimizing the objective shown in Equation 1. We first sample an episode from P^{N,k}_{train} and compute a prediction pθ(y∗ | x∗, S) for each query example x∗. We then compute the cross-entropy loss on the query set using those predictions and perform a parameter update by backpropagating its gradient with respect to θ into the inference algorithm. In this work we concern ourselves with models that use an embedding function fθ to obtain a representation for the support and query examples of each episode, on top of which the inference algorithm is applied. In Prototypical Networks, for instance, fθ contains all of the model’s learnable parameters.
SCONE trains on episodes of varying shots and conditions the model on each episode’s shot distribution (Figure 1) by minimizing
\[
\mathbb{E}_{k\sim P_k}\,\mathbb{E}_{S,Q\sim P^{N,k}_{\mathrm{train}}}\left[\frac{1}{|Q|}\sum_{(x^*,y^*)\in Q} -\log p_{\theta_k}(y^*\mid x^*,S)\right], \tag{4}
\]
where Pk is the distribution over shots at training time and θk depends on an episode’s sampled shots. In the Appendix, we include an algorithm box outlining SCONE fine-tuning.
Conditioning mechanism Rather than learning a separate set of model parameters for each shot setting, we modulate a subset of its parameters using FiLM (Perez et al., 2018), a simple conditioning mechanism which performs an affine feature-wise transformation of its input x based on conditioning information k (in our case, the episode’s number of shots):
\[
\mathrm{FiLM}(x) = \gamma(k) \odot x + \beta(k). \tag{5}
\]
The dependency of γ and β on k is handled by maintaining distinct values for each shot setting and selecting the appropriate γ and β based on an episode’s shot. Equivalently, we can think of our approach as a compact representation of many shot-specific feature extractors which share all but their FiLM layer parameters.
More concretely, we maintain a set of FiLM parameters for each shot in the [1, MAX-SHOT] range (where MAX-SHOT is a hyperparameter) and let all shot settings greater than or equal to MAX-SHOT share the same FiLM parametrization. As is often the case in practice, instead of inserting FiLM layers into the network’s architecture, we modulate the scaling and shifting parameter values of existing batch normalization layers (Dumoulin et al., 2017; De Vries et al., 2017). When performing episodic fine-tuning, we initialize all sets of FiLM parameters to those learned during pre-training (i.e. the learned batch normalization scaling and shifting coefficients). These different sets of FiLM parameters are then free to deviate from each other as a result of fine-tuning. We found it beneficial to penalize the L2-norm of β (regularizing the offset towards 0) and the L2-norm of γ − 1 (regularizing the scaling towards 1). For this purpose, we introduce a hyperparameter that controls the strength of this FiLM weight decay.
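A minimal sketch of this conditioning mechanism follows (ours; it represents the per-shot FiLM parameters as rows of two matrices that modulate already-normalized activations, and it also covers the convex combination used for the imbalanced episodes described next).

```python
import numpy as np

class ShotConditionalFiLM:
    """Per-shot FiLM parameters applied after a normalization layer.

    Maintains one (gamma, beta) row per shot in [1, MAX-SHOT]; an episode
    selects its parameters as a convex combination s^T F of these rows,
    where s is the episode's (smoothed, normalized) shot distribution.
    """

    def __init__(self, max_shot, init_gamma, init_beta):
        # Initialize every row from the pre-trained batch-norm parameters.
        self.gammas = np.tile(init_gamma, (max_shot, 1))  # (MAX-SHOT, C)
        self.betas = np.tile(init_beta, (max_shot, 1))

    def __call__(self, features, shot_distribution):
        """features: (batch, C) normalized activations;
        shot_distribution: (MAX-SHOT,) convex weights over shots."""
        gamma = shot_distribution @ self.gammas  # convex combination, (C,)
        beta = shot_distribution @ self.betas
        return gamma * features + beta           # FiLM(x) = gamma ⊙ x + beta
```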
Handling class-imbalanced episodes SCONE can also be used on imbalanced episodes, where different classes have different shots. In that case, instead of selecting a single set of FiLM parameters, we compute the FiLM parameters for an episode as the convex combination of the FiLM parameters associated with all shots found in the episode, where the weights of that combination are determined based on the frequency with which each shot appears in the episode.
Concretely, the episode’s “shot distribution” s (a vector of length MAX-SHOT) is obtained by averaging the one-hot representations of the shots of the classes appearing in an episode. In the special case of a class-balanced episode, the resulting average will be exactly a one-hot vector. This shot distribution is then used for the purpose of selecting the episode’s FiLM parameters. This can be thought of as an embedding lookup sᵀF in a matrix F of FiLM parameters using a shot distribution s.
Smoothing the shot distribution We expect similar shot values to require similar FiLM parameters, which we incorporate as an inductive bias by smoothing the shot distribution. We outline our SMOOTHSHOT procedure in the Appendix in Algorithm 1, which receives the shot s of a class (an integer), and a smoothing hyperparameter m (a float in [0, 1]) and returns the smoothed shot for that class, which is a vector of length MAX-SHOT. Essentially, the result of smoothing is that the returned vector representation of s is not strictly one-hot with only the position corresponding to the observed shot s being ‘on’. Instead, some entries surrounding that position are also non-zero. Specifically, the entries that are directly adjacent to s receive the value m, the entries two spots away from s the value m2, and so on, with entries further away from s receiving exponentially-decaying values.
4 RELATED WORK
Few-shot classification A plethora of models have been recently proposed for few-shot classification, and we refer the reader to (Hospedales et al., 2020) for a broad survey. Before episodic training was introduced, few-shot classifiers often relied on metric learning (Koch et al., 2015; Triantafillou et al., 2017). This theme persisted in early episodic models like Matching Networks (Vinyals et al., 2016) and Prototypical Networks (Snell et al., 2017) where classification is made via nearest-neighbour comparisons in the embedding space. Matching Networks apply a soft k-NN algorithm where the label of a query example is predicted to be the weighted average of the (one-hot) support labels with the weights determined by the similarity of that query to each support example.
Gradient-based episodic models are another popular family of approaches following the influential MAML paper (Finn et al., 2017). To create a classifier for each given episode, this approach fine-tunes the embedding weights along with a linear classifier head using gradient descent on the support set.
Intuitively, this results in learning an embedding space that serves as a useful starting point from which a few steps of gradient descent suffice to adapt the model to each episode’s classification task. Proto-MAML (Triantafillou et al., 2020) is a simple extension that initializes the linear classifier for each episode from the prototypes of the classes appearing in that episode.
Recently, the field has shifted towards studying few-shot classification in more realistic environments like tiered-ImageNet (Ren et al., 2018) and Meta-Dataset (Triantafillou et al., 2020), which has encouraged research into newly-introduced challenges, such as accounting for multiple diverse datasets. Along these lines, Requeima et al. (2019); Bateni et al. (2019) proposed novel task conditioning approaches, Saikia et al. (2020) introduced an improved hyperparameter tuning approach, and Dvornik et al. (2020) proposed a method for selecting an appropriate set of features for each test episode out of a universal feature representation.
Understanding episodic learning Our work belongs to a recent line of work attempting to understand the differences between episodic and non-episodic learning. Goldblum et al. (2020) attempts to understand episodic learning from the perspective of how classes cluster in feature-space (for models that learn a final classification layer on top of a feature extractor) as well as from the perspective of local minima clusters (for gradient-based meta-learners). Huang et al. (2020); Chao et al. (2020) draw parallels between learning episodes and supervised learning examples, Bronskill et al. (2020) discusses batch normalization in episodic learning, drawing parallels from its use in non-episodic learning and Chen et al. (2020) contrasts episodic and non-episodic learning in their ability to generalize to new examples of previously seen classes or new examples of unseen classes. Finally, Cao et al. (2020) theoretically investigates the role of the shot in Prototypical Networks to explain the observed performance drop when there is a mismatch between the shots at training and test time. Instead, we empirically study the effect of the shot chosen during episodic fine-tuning of a pre-trained solution, in a larger-scale and more diverse environment.
Feature-wise conditioning Feature-wise transformations such as FiLM (Perez et al., 2018) are used as a conditioning mechanism in a variety of problem settings; see Dumoulin et al. (2018) for a survey on the topic. (Shu et al., 2019) devise a loss re-weighting scheme that conditions on the loss at each time-step, which is a scalar, thus bearing similarity to our approach when conditioning on a scalar shot setting. In few-shot classification, (Sun et al., 2019) use feature-wise transformations as a means of transfer to new tasks. (Oreshkin et al., 2018; Requeima et al., 2019; Bateni et al., 2019) use FiLM to condition metric learners’ backbones on the support set, while (Dvornik et al., 2020) uses it as a way to represent many pre-trained classifiers using a shared parametrization. FiLM has also been used successfully for class-incremental learning Liu et al. (2020) and semi-supervised few-shot learning (Li et al., 2019). Notably, TADAM (Oreshkin et al., 2018), CNAPs (Requeima et al., 2019) and Simple-CNAPs (Bateni et al., 2019) also use task conditioning, but they use the mean of the support set for this and thus the ‘shot’ information is discarded. The purpose of our conditioning mechanism is instead to make the backbone shot-aware. The idea of shot-conditional learners is inspired by recent work that investigates loss-conditional training using feature-wise transformations (Dosovitskiy & Djolonga, 2020; Babaeizadeh & Ghiasi, 2020).
5 EXPERIMENTS
5.1 EXPLORING THE ROLE OF ‘SHOTS’ DURING EPISODIC FINE-TUNING
In this subsection, we examine the effect of the ‘shot’ that is used during the episodic fine-tuning phase, and in particular how it impacts the resulting model’s ability to solve test episodes of different shots. We consider either using a fixed shot k throughout the fine-tuning phase, or fine-tuning on episodes of a distribution of shots. In the latter case, we explore both standard fine-tuning as well as SCONE fine-tuning that equips the model with the shot-conditioning mechanism described in the previous section. We also compare against EST (Cao et al., 2020).
Experimental setup We ran this round of experiments on ImageNet using the class splits proposed in Meta-Dataset. First, we pre-trained a standard classifier on the set of training classes of ImageNet. We then removed the topmost classification layer, leaving us with a pre-trained backbone that we used as the initialization for the subsequent episodic fine-tuning round. We ran the following
variants of episodic fine-tuning: exclusively on 1-shot episodes (‘Fine-tune on 1-shot’), exclusively on 5-shot episodes (‘Fine-tune on 5-shot’), on episodes whose shot is drawn uniformly from the range [1, 40] (‘Fine-tune on all shots’), and on episodes with that same shot distribution but using SCONE (‘SCONE Fine-tune on all shots’), which additionally equips the backbone with the shot conditioning mechanism described in the previous section. We also consider ‘Fine-tune on best k-shot’, an additional baseline that fine-tunes exclusively on the shot k that is found to work best on average on the validation set (on the range of shots 1 − 40). For this, we trained models for k = 1, 5, 10, 15, 20, 30, 40 and found the best to be k = 15.
As mentioned in the previous section, when applying SCONE training, we penalize the L2 norm of the FiLM parameters. For a fair comparison with the other models, we applied the same regularization to the batch normalization parameters of all models during the episodic fine-tuning phase, and we found this to be generally helpful. We tuned the strength of this regularization separately for each model and picked the variant that worked best on the validation set, which we report in the Appendix. We set SCONE’s MAX-SHOT hyperparameter to 40 for this experiment.
We also compare to EST (Cao et al., 2020) which is a theoretically-grounded method for building shot resiliency in Gaussian classifiers. This involves applying a linear transformation on top of the learned embeddings that aims to strike a good balance between maximizing the inter-class variance and minimizing the intra-class variance. In practice, that trade-off is controlled via a hyperparameter ρ. We applied EST on top of the embeddings of ‘Fine-tune on all shots’ and we tuned ρ and the hyperparameter d controlling the projection dimensionality very extensively. The values that we found worked best (selected on the validation set of ImageNet on the range of shots 1-40) are substantially different than those used in the original EST paper: d = 480 and ρ = 5e − 8 (versus the original d = 120 and ρ = 1e− 3). We believe that this discrepancy may be due to our deeper backbones and larger range of shots. The EST configuration that worked best for us yields a minimal reduction in the embedding dimensionality, and primarily favours maximizing the inter-class variance, with the term that minimizes the intra-class variance having minimal effect.
In all cases, we fix the ‘way’ to 5. We use Prototypical Networks as the episodic model and we perform early stopping and model selection on the validation set of classes, where the validation performance of a variant is computed on episodes of the same (distribution of) shot(s) that it is trained on. All models are tested on a held-out test set of classes that is not seen during pre-training nor episodic fine-tuning, on 5-way episodes of different shot settings.
Findings We observe from Figure 2 that fine-tuning on a fixed shot yields the best results on test episodes of that shot. For example, 1-shot accuracies show that ‘Fine-tune on 1-shot’ surpasses the performance of all other variants on 1-shot test episodes, with the analogous findings in 1-shot and 5-shot accuracies for 5-shot and 40-shot, respectively. Therefore, a particular functionality that the episodic fine-tuning phase may serve is to specialize the pre-trained model for performing well on tasks of a particular shot. However, as illustrated in all sub-plots of Figure 2, this shot specialization comes at the cost of severely reduced performance on tasks of very different shots. For instance, the model that is specialized for 40-shot tasks (‘Fine-tune on 40-shot’) performs very poorly on 1-shot test tasks and vice-versa. We also note that the ‘Fine-tune on best k-shot’ model does not suffice to perform well in all settings either, since k = 15 there, it performs really poorly on 1-shot for instance.
In practice, it may be desirable to perform well on more than a single shot setting at test time, without having to fine-tune multiple separate shot-specialized models. A reasonable approach to that is
episodically fine-tuning on a range of shots, to obtain a general model. Indeed, Figure 2 shows that ‘Fine-tune on all shots’ does not perform too poorly on any shot but, perhaps unsurprisingly, in any given setting, it falls short of the performance of the corresponding shot-specialized model.
Finally, we observe that SCONE fine-tuning outperforms its shot-unaware counterpart in all settings (‘SCONE Fine-tune on all shots’ vs ‘Fine-tune on all shots’). This constitutes evidence that SCONE fine-tuning indeed leads to a more flexible model that can adapt to the shot of each episode via its conditioning mechanism, without suffering the trade-offs inherent in naively specializing a model exclusively to any particular shot. We can view a SCONE model as a very compact way of representing multiple shot-specialized models, where the information required for that specialization resides in the light-weight FiLM parameters. SCONE also outperforms the EST approach in this setting which also strives for shot resiliency, but does so by encouraging invariance to the shot setting rather than shot awareness as in SCONE .
5.2 LARGE-SCALE EXPERIMENTS ON META-DATASET
In what follows, we apply SCONE to the diverse and challenging Meta-Dataset benchmark for few-shot classification (Triantafillou et al., 2020). Meta-Dataset is comprised of ten distinct image datasets, including natural images, handwritten characters and sketches. It also defines a generative process for episodes that varies the way and shot across episodes, and within a particular episode varies the shot for different classes, introducing imbalance. The range of shots induced by this episode generator is also larger than what we considered in the previous section. It is a long-tailed distribution under which small and medium shots are more likely but it is possible to also encounter very large shots (e.g. >400), though this would happen very infrequently. We include histograms of the shot distributions of Meta-Dataset’s training, validation and test episodes in the Appendix. These experiments aim to investigate whether SCONE is effective on this broader shot distribution and imbalanced episodes.
Prototypical Network on ImageNet For our first set of experiments on Meta-Dataset, we explore different strategies of episodic fine-tuning of the pre-trained classifier’s embedding weights using Prototypical Networks. For this, we use Meta-Dataset’s sampling algorithm to draw training episodes of varying shots and ways from ImageNet. We compare standard episodic fine-tuning (‘Standard’) to SCONE episodic fine-tuning (‘SCONE ’). Since SCONE uses L2-regularization on the sets of FiLM parameters, for a fair comparison we include a variant of standard episodic fine-tuning with L2-regularization on the batch normalization parameters (‘L2 BN’). We also include an ablation of our method that does not use any smoothing of the shot distribution. Finally, we compare to EST as well where we computed the EST transformation on the ‘L2 BN’ instead of the ‘Standard’ Prototypical Network variant, since that worked best. We tuned EST’s hyperparameters very extensively, as
described in Section 5.1, this time model-selecting on the validation sets of all datasets of MetaDataset. The values that worked best in this case are d = 480 and ρ = 5e− 9. As noted in Section 5.1, these are substantially different than those used in the original EST paper, likely due to our deeper backbones and significantly broader range of shots explored. Finally, we ran the same ‘Fine-tune on best k-shot’ baseline described in Section 5.1. In this case we found that the best k was 20.
We evaluate these models on the held-out set of ImageNet classes as well as the remaining 9 datasets. We set SCONE ’s MAX-SHOT to 200. We tune the learning rate and decay schedule separately for each variant and we perform model selection of SCONE ’s hyperparameters using the validation set. All additional details are reported in the Appendix, and we plan to open source our code upon publication.
Meta-Baseline on all datasets Next, we experiment with the recent Meta-Baseline model (Chen et al., 2020). Meta-Baseline also consists of a pre-training phase (‘Classifier-Baseline’) followed by an episodic fine-tuning phase (‘Meta-Baseline’). Classifier-Baseline refers to simply training a classifier on the set of training classes. This variant is evaluated on few-shot episodes by discarding the ultimate classification layer and utilizing a cosine similarity-based nearest-centroid inference algorithm on the learned embeddings. Meta-Baseline then fine-tunes Classifier-Baseline’s pre-trained embeddings on the episodic objective of the aforementioned nearest-centroid algorithm.
When training on all datasets of Meta-Dataset, they obtained strong results using their Classifier-Baseline which is in this case trained in a multi-task setup with separate output heads for the different datasets. They found that episodically fine-tuning that solution on all datasets did not help in general (it improved performance on some datasets but hurt performance on a larger number of datasets).
Inspired by that finding, we experimented with a SCONE training phase on top of Classifier-Baseline’s strong pre-trained solution where we froze the embedding weights to that powerful representation and we optimized only the set of SCONE ’s FiLM parameters for shot conditioning. We performed this fine-tuning on training episodes from all datasets, using MetaBaseline’s nearest centroid method as the episodic model. As a control experiment, we performed the same episodic fine-tuning but without shot-conditioning, where we optimized only the batch normalization parameters, keeping the remainder of the
embedding weights frozen (‘Control’). This control can be thought of as a special case of SCONE where MAX-SHOT is set to 1.
Findings The results of this investigation are shown in Table 1 and Table 2 (as well as their more heavily-annotated counterparts in the Appendix, Tables 4 and 5, that show the per-row rank computation). Following (Triantafillou et al., 2020), we run a test of statistical significance described in the Appendix to determine when to bold an entry. Table 1 shows that SCONE fine-tuning outperforms standard episodic fine-tuning in the context of Prototypical Networks. Interestingly, penalizing the L2-norm of batch normalization parameters during episodic fine-tuning is beneficial even when not using SCONE , but it does not reach the performance obtained by our shot-conditioning. The ablation of SCONE that does not use any smoothing of the shot distribution is also competitive, but performs worse than full SCONE . We also observe that EST is competitive in this setting, only slightly worse than SCONE , though we note that SCONE is a more general approach that is not tied to Gaussian classifiers. Similarly, in the context of Meta-Baseline, Table 2 shows that episodically fine-tuning the batch normalization parameters of the otherwise-frozen embedding is helpful (‘Control’), but using SCONE to learn a separate set of FiLM parameters for each shot yields additional gains in this setting too. Overall, despite the simplicity of SCONE , these results demonstrate its effectiveness on different shot distributions, and in different backbones.
FiLM parameter visualization Finally, as a sanity check, we perform a UMAP projection (McInnes et al., 2018) of the learned FiLM parameters for each shot setting (Figure 3). As expected, similar shot settings tend to learn similar sets of FiLM parameters, which is reflective of the fact that they rely on similar features for classification.
Example smoothed shot distribution To gain an intuition on the effect of our smoothing procedure, we illustrate in Figure 4 the result of smoothing an example shot distribution using m = 1 − 1e − 06, which is the value of the smoothing hyperparameter that we used for our Prototypical Network experiments on Meta-Dataset. For this, we consider a hypothetical 4-way episode where the shots for the four classes are: 1, 10, 23, and 103. We observe that the largest peak is in the range of small values, due to the first three shots of the episode, with the fourth shot causing a second peak around the value 103. As a reminder, this shot distribution defines the weights of the convex combination of FiLM parameters that will be used for the episode. In practice therefore, we are activating ‘blocks’ of FiLM parameters that are relevant for each episode, instead of strictly activating only the FiLM parameters of the observed shots.
6 CONCLUSION
In summary, we present an analysis aiming to understand the role of episodic fine-tuning on top of a pre-trained model for few-shot classification. We discover that this fine-tuning phase can be used to specialize the pre-trained model to episodes of a given shot, leading to strong performance on test episodes of that shot at the expense of inferior performance on other shots. To eliminate that trade-off, we propose a shot-conditional episodic training approach that trains a model on episodes of a range of shots and can be conditioned at test time to modify its behavior appropriately depending on the shot of the given test episode. Our experimental analysis suggests that our proposed shotconditioning mechanism is beneficial both in smaller-scale experiments, as well as in the large-scale and diverse Meta-Dataset benchmark, in the context of two different episodic models. Future work could explore how to incorporate shot-awareness in other few-shot classification models. In addition to the architectural modification of FiLM conditioning on the shot distribution, are there algorithmic adjustments that can yield additional performance gains, such as a mechanism of determining the number of inner-loop updates to perform for gradient-based meta-learners based on the number of available shots?
A APPENDIX
SCONE ’S TRAINING ALGORITHM IN MORE DETAIL
For clarity, we provide pseudocode for SCONE ’s training algorithm including our procedure for shot smoothing in Algorithm 1. We will also release our code upon publication for reproducibility.
Algorithm 1 SCONE training

Input: Distribution of training episodes Ptrain, pre-trained embedding weights θ, pre-trained batch norm weights γ and β, embedding function f, learning rate η (a float), smoothing coefficient m (a float in the range [0, 1]) and maximum supported shot MAX-SHOT (an int).
Output: Fine-tuned embedding weights θ′ and FiLM parameters F = {γ′, β′}.

procedure SMOOTH-SHOT(s, m, MAX-SHOT)
    if s > MAX-SHOT then
        s ← MAX-SHOT                                  ▷ Cap s to the max supported shot
    end if
    s ← s − 1                                         ▷ So that s is in the range [0, MAX-SHOT − 1]
    s̃ ← ONE-HOT(s, DEPTH = MAX-SHOT)                  ▷ Init the smoothed shot
    for 0 ≤ j < MAX-SHOT do
        l ← s − j − 1                                 ▷ The index j slots to the left of s
        l ← ONE-HOT(l, DEPTH = MAX-SHOT) ∗ m          ▷ Outputs the zero vector if l < 0
        r ← s + j + 1                                 ▷ The index j slots to the right of s
        r ← ONE-HOT(r, DEPTH = MAX-SHOT) ∗ m          ▷ Outputs the zero vector if r ≥ MAX-SHOT
        s̃ ← s̃ + l + r
        m ← m²                                        ▷ Adjust the next iteration's smoothing
    end for
    return s̃
end procedure

θ′ ← θ                                                ▷ Init the embedding weights from the pre-trained embeddings
for 1 ≤ k ≤ MAX-SHOT do                               ▷ Init the FiLM params from the pre-trained batch norm params
    γ′(k) ← γ
    β′(k) ← β
end for
while validation accuracy is improving do
    Sample a training episode from Ptrain with support set S and query set Q
    Let k1, . . . , kN be the shots of the episode's classes
    s ← ZEROS(MAX-SHOT)                               ▷ Init the (unnormalized) shot distribution
    for each class i do
        si ← SMOOTH-SHOT(ki, m, MAX-SHOT)             ▷ Smooth the one-hot shot of class i
        s ← s + si
    end for
    s ← s ÷ SUM(s)                                    ▷ Normalize to get the episode's shot distribution
    γ′s ← sᵀγ′                                        ▷ Select the FiLM params for the episode
    β′s ← sᵀβ′
    Let SH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈S         ▷ The embedded support set
    Let QH = {(f(x; θ′, γ′s, β′s), y)}(x,y)∈Q         ▷ The embedded query set
    L ← (1/|QH|) Σ(h∗,y∗)∈QH −log p(y∗ | h∗, SH)      ▷ Compute the episode's loss
    θ′ ← θ′ − η ∂L/∂θ′                                ▷ Update the model via gradient descent
    γ′ ← γ′ − η ∂L/∂γ′
    β′ ← β′ − η ∂L/∂β′
end while
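To make the smoothing procedure concrete, below is a minimal NumPy sketch of SMOOTH-SHOT and of the episode-level selection of FiLM parameters. It is an illustrative re-implementation of Algorithm 1, not the released code; the function and variable names are ours.

```python
import numpy as np

def smooth_shot(k, m, max_shot):
    """Spread the one-hot encoding of a class' shot k over neighboring values."""
    s = min(k, max_shot) - 1            # cap, then map shots 1..MAX-SHOT to 0..MAX-SHOT-1
    smoothed = np.zeros(max_shot)
    smoothed[s] = 1.0                   # initialize with the one-hot shot
    weight = m
    for j in range(max_shot):
        if s - j - 1 >= 0:
            smoothed[s - j - 1] += weight   # the index j slots to the left of s
        if s + j + 1 < max_shot:
            smoothed[s + j + 1] += weight   # the index j slots to the right of s
        weight = weight ** 2            # adjust the next iteration's smoothing
    return smoothed

def episode_film_params(shots, m, max_shot, gamma, beta):
    """Convex combination of per-shot FiLM parameters for one episode.

    gamma and beta have shape (max_shot, num_film_params)."""
    s = sum(smooth_shot(k, m, max_shot) for k in shots)
    s = s / s.sum()                     # the episode's shot distribution
    return s @ gamma, s @ beta
```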
Hypothesis testing We follow the same procedure as in (Triantafillou et al., 2020) to compute ranks for different methods that in turn determine which entries to bold in our tables. Specifically, we perform a 95% confidence interval statistical test on the difference between the mean accuracies of pairs of entries of each row. If for two entries we are not able to reject the null hypothesis that the difference between their means is 0, they will receive the same rank. For example, if model A and model B are tied for the first place according to that test, they will each receive the rank 1.5 (the average of the ranks 1 and 2). If we are able to reject that hypothesis, however, the entry with the larger mean accuracy will receive a higher rank than the other. In each row, we bold the entries that are tied for the highest rank. For convenience, in the last section of the Appendix, we show a more heavily-annotated copy of each table in the paper to make the rank computation procedure more transparent.
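As an illustration of this rank computation, below is a small Python sketch. The significance test is simplified here to checking whether zero lies outside a 95% confidence interval of the difference of means, with the two half-widths added in quadrature under an independence assumption; the exact test statistic used by (Triantafillou et al., 2020) may differ.

```python
import numpy as np

def significantly_different(mean_a, ci_a, mean_b, ci_b):
    # Reject the null hypothesis of equal means when zero lies outside the
    # 95% confidence interval of the difference of means.
    return abs(mean_a - mean_b) > np.sqrt(ci_a ** 2 + ci_b ** 2)

def compute_ranks(means, cis):
    n = len(means)
    ranks = []
    for i in range(n):
        better = sum(means[j] > means[i] and
                     significantly_different(means[i], cis[i], means[j], cis[j])
                     for j in range(n) if j != i)
        tied = sum(not significantly_different(means[i], cis[i], means[j], cis[j])
                   for j in range(n) if j != i)
        ranks.append(better + 1 + tied / 2.0)  # average rank within the tie group
    return ranks

# Two methods tied for first place, a third clearly worse:
print(compute_ranks([70.0, 69.8, 60.0], [0.5, 0.5, 0.5]))  # -> [1.5, 1.5, 3.0]
```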
ADDITIONAL RESULTS
First, we provide in Figure 5 additional plots to cover more evaluation shot settings than those shown in Figure 2 in the main paper. The setup for this is exactly the same as for Figure 2.
Next, since we observe that the ‘Best k-shot’ baseline performs well in Table 1, which reflects average performance across episodes of varying shots, we further break down its performance across different ranges of shots in Figure 6. We find that while ‘Best k-shot’ indeed performs well for large shots, it performs poorly for low shots. This finding strengthens our case against this baseline: not only is it computationally expensive, requiring training multiple different models to pick the one that performs best, but it is also not as consistent as SCONE in its performance across different shot ranges.
Finally, to place our results into context, we display the results of Meta-Baseline with SCONE alongside the performance of recent work, controlling for model capacity, in Table 3. The approaches we compare against are: Classifier-Baseline (Chen et al., 2020), SUR-pf (Dvornik et al., 2020), TaskNorm (Bronskill et al., 2020) and Simple CNAPs (Bateni et al., 2020). In particular, we report the performance of the parametric family of SUR (‘SUR-pf’) instead of full SUR (which has 8x more parameters), in order to make apples-to-apples comparisons with the remaining approaches. We find that the Meta-Baseline method, when combined with SCONE, achieves state-of-the-art on Meta-Dataset in this context, according to the average rank metric.
EXPERIMENTAL DETAILS
We plan to open source our code upon publication, including all experimental details. In the meantime, we outline these details below for completeness.
Architecture We use ResNet-18 as the feature extractor for all of our experiments, following the implementation in (Triantafillou et al., 2020). For the SCONE variants, we add FiLM to all of the batch normalization layers throughout the network.
Image processing For all experiments, we use Meta-Dataset's input pipeline to obtain images, and we follow the image processing performed in (Chen et al., 2020), which yields images of size 128 × 128. We apply standard data augmentation consisting of horizontal flipping and random cropping, followed by standardization using a commonly-used mean and standard deviation, as in (Chen et al., 2020). For episodic models, data augmentation is applied to both the support and query sets. No data augmentation is used at validation or test time.
Optimization We use ADAM with exponential learning rate decay and weight decay of 1e−8 to optimize all models in this work. We tune the initial learning rate, the decay rate, and the number of updates between each learning rate decay separately for each model presented in the paper. The initial learning rate values we considered are 0.0005 and 0.001, with a decay factor of 0.8 or 0.9 applied every 1000, 2000, or 3000 steps. We ran a variant for every combination of those values. We also tune the weight decay applied to the FiLM parameters (for SCONE variants) or the batch normalization parameters (for non-SCONE variants). We tried the values 1e−8, 1e−6, and 1e−4.
SCONE hyperparameters For the SCONE variants, aside from the above hyperparameters, we additionally tune the smoothing parameter m described in the main paper that is used for training and for evaluation. We did not tune the MAX-SHOT hyperparameter mentioned in the main paper as we found that our initial choices worked reasonably. Specifically, we set it to 40 for the smaller-scale experiments where the maximum shot was 40, and to 200 for the large-scale experiments. The latter choice was performed heuristically since shots much larger than 200 are unlikely under the shot distribution induced by Meta-Dataset’s episode generator. For more information on that shot distribution, we refer the reader to the next section.
SCONE smoothing hyperparameter We tuned the value of this hyperparameter that will be used both at training and at evaluation. At training time, we considered values in {0, 0.2, 0.4, 0.6, 0.9} for Prototypical Network experiments, and we picked the variant that worked best according to the validation performance that was computed without smoothing. Once the model was trained and all of the remaining hyperparameters were tuned, we performed a final validation round to tune the evaluation-time smoothing that will be used in the chosen model. We found it beneficial to use larger values here, for example picking the value of 1 − 1e−6 for the Prototypical Network on ImageNet. In the Meta-Baseline codebase, we trained with larger values of smoothing (the best we found was 1 − 1e−10) and did not find it beneficial to additionally smooth at evaluation time.
Model selection For each experiment, we perform early stopping according to the performance on the validation set. For the models that train on a single shot k in the smaller-scale experiments, the validation performance that we monitor for early stopping is the average query set accuracy on k-shot 5-way episodes drawn from the validation set. For the models in the small-scale experiments that train on a distribution of shots, we use the average validation performance over 5-way episodes whose shot is sampled according to the same distribution used for training the respective model. For the larger-scale Meta-Dataset experiments, we draw validation episodes only from the validation set of ImageNet for the experiments that train on ImageNet only, or from the validation sets of all datasets for the experiments that train on all datasets. In both cases, the validation episodes are drawn using Meta-Dataset’s episode generator that yields episodes of variable ways and variable shots with class imbalance. In all cases, the average validation performance is computed over 600 validation episodes and is monitored every 2K training updates. We apply exponential smoothing to the resulting validation "curve" (using the default value of 0.6 in TensorBoard). Then, we choose the update step at which the highest peak of that curve is found and we use the checkpoint corresponding to that update step for testing.
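The following is a small sketch of this model selection procedure, assuming `val_accuracies` holds one average validation accuracy per validation round (i.e., per 2K training updates); the values used here are toy numbers.

```python
def smooth_curve(values, weight=0.6):
    # TensorBoard-style exponential moving average of the validation curve
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed

val_accuracies = [0.61, 0.64, 0.66, 0.65, 0.67, 0.66]  # toy values, one per 2K updates
smoothed = smooth_curve(val_accuracies)
best_idx = max(range(len(smoothed)), key=smoothed.__getitem__)
checkpoint_step = 2000 * (best_idx + 1)                # checkpoint used for testing
```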
DISTRIBUTION OF SHOTS IN META-DATASET EPISODES
For reference, Figure 7 displays histograms of the number of shots produced by Meta-Dataset’s episode sampling algorithm. These are computed by sampling 600 episodes per dataset for each of the training, validation and test splits of Meta-Dataset.
TABLES WITH MORE DETAILED RANKS
In this section, we include a copy of the same tables appearing previously in the paper, but additionally annotated with per-row ranks, to make the rank computation method more transparent. These tables are Table 4, Table 5 and Table 6.

1. What is the main contribution of the paper regarding few-shot learning?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the novelty and significance of the proposed approach?
4. Are there any concerns or doubts regarding the motivation and necessity of studying varying-shot learning?
5. What are the limitations of the experimental setup and comparisons with state-of-the-art methods?

Review
This paper proposed a method of using different numbers of shots of data for few-shot learning, so as to mitigate the negative effect of "different shots". It optimized the FiLM parameters using meta gradient descent during episodic meta-training with different-shot learning tasks. It conducted experiments on a set of "meta-dataset benchmarks".
Pros:
The study of varying-shot task learning could be interesting, as the quantity of data in each task really matters for few-shot learning.
"Conditional on the shot number" is a reasonable idea.
Cons:
The motivation for conducting such a work is not new, and more importantly, the proposed method is too incremental compared to many existing works. (1) An existing work [Cao et al. ICLR’20] (also cited by this submission) studied the same problem both theoretically and empirically and proposed a similar linear transformation method to handle the varying-shot learning issue (please kindly refer to section 3.3 in [Cao et al. ICLR’20]). This submission makes an incremental contribution of meta-learning transformation parameters conditional on shot numbers (using an existing method called FiLM). (2) Regardless of such conditioning, the idea of meta-learning feature transformation parameters (i.e. scaling and shifting parameters) has been well-deployed in existing few-shot learning papers [Sun et al. CVPR’19] (not cited in this submission) and [Oreshkin et al., 2018; Requeima et al., 2019; Bateni et al., 2019]. Moreover, as I can see from other papers by the same authors of these related works, this transformation seems to work well in many recent and similar tasks such as class-incremental learning (different numbers of samples for old or new classes) [Liu et al. CVPR’20] and semi-supervised few-shot learning tasks [Li et al. NeurIPS’19]. (3) Conditioning on a single scalar (k in this submission) is also not new in meta-learning; please kindly check [Shu et al. NeurIPS’19]. While I admit that this “conditioning” is somehow the main contribution proposed in this submission for the few-shot setting, I don’t think it is significant.
The experiments lack comparisons to the sota few-shot learning methods (in the same setting of using varying-shot tasks in episodic training). This is a very clear weakness of this paper. The paper does not prove that the proposed conditional meta-training achieves better varying-shot learning performance than the sota few-shot learning methods. Table 1 makes me feel that this paper proposes a new BN method.
I want to talk more about varying shots. Actually, for different classes (with different levels of hardness or diversity among the classes), the model may need different numbers of samples to learn a good representation for each of them (e.g. the common sense is that a model needs more samples for the non-rigid class "person" than for the rigid class "spoon"). Given the high randomness of the classes sampled in any meta-learning task, it is not clear how many shots a task really needs (for all of its classes). So, I am a bit doubtful that it is necessary to study the problem of varying-shot learning.
[Sun et al. CVPR’19] Meta-Transfer Learning for Few-Shot Learning
[Liu et al. CVPR’20] Mnemonics Training: Multi-Class Incremental Learning without Forgetting (details about feature transformation are given in the supplementary as the transformation method is not the contribution in this CVPR’20 paper)
[Li et al. NeurIPS’19] Learning to Self-Train for Semi-Supervised Few-Shot Classification (details about feature transformation are given in the supplementary as the transformation method is not the contribution in this NeurIPS’19 paper)
[Shu et al. NeurIPS’19] Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting (please check fig1(c) and its explanations)
ICLR

Title
Scaling Laws For Deep Learning Based Image Reconstruction
Abstract
Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image perform very well for a variety of linear inverse problems. Current methods are only trained on a few hundred or thousand images, as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Interpolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
1 INTRODUCTION
Deep neural networks trained to map a noisy measurement of an image or a noisy image to a clean image give state-of-the-art (SOTA) performance for image reconstruction problems. Examples are image denoising (Burger et al., 2012; Zhang et al., 2017; Brooks et al., 2019; Liang et al., 2021), super-resolution (Dong et al., 2016; Ledig et al., 2017; Liang et al., 2021), and compressive sensing for computed tomography (CT) (Jin et al., 2017), and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018; Sriram et al., 2020; Muckley et al., 2021; Fabian & Soltanolkotabi, 2022).
The performance of a neural network for imaging is determined by the network architecture and optimization, the size of the network, and the size and quality of the training set.
Significant work has been invested in architecture development. For example, in the field of accelerated MRI, networks started out as convolutional neural networks (CNN) (Wang et al., 2016), which are now often used as building blocks in un-rolled variational networks (Hammernik et al., 2018; Sriram et al., 2020). Most recently, transformers have been adapted to image reconstruction (Lin & Heckel, 2022; Huang et al., 2022; Fabian & Soltanolkotabi, 2022).
However, it is not clear how substantial the latest improvements through architecture design are compared to potential improvements expected by scaling the training set and network size. Contrary to natural language processing (NLP) models and modern image classifiers that are trained on billions of examples, networks for image reconstruction are only trained on hundreds to thousands of example images. For example, the training set of SwinIR (Liang et al., 2021), the current SOTA for image denoising, contains only 10k images, and the popular benchmark dataset for accelerated MRI consists only of 35k images (Zbontar et al., 2018).
In this work, we study whether neural networks for image reconstruction only require moderate amounts of data to reach their peak performance, or whether major boosts are expected from increasing
the training set size. To partially address this question, we focus on three problems: image denoising, reconstruction from few and noisy measurements (compressive sensing) in the context of accelerated MRI, and super-resolution. We pick Gaussian denoising for its practical importance and since it can serve as a building block to solve more general image reconstruction problems well (Venkatakrishnan et al., 2013). We pick MR reconstruction because it is an important instance of a compressive sensing problem, and many problems can be formulated as compressive sensing problems, for example super-resolution and in-painting. In addition, for MR reconstruction the question on how much data is needed is particularly important, since it is expensive to collect medical data.
For the three problems we identify scaling laws that describe the reconstruction quality as a function of the training set size, while simultaneously scaling network sizes. Such scaling laws have been established for NLP and classification tasks, as discussed below, but not for image reconstruction.
The experiments are conducted with a U-Net (Ronneberger et al., 2015) and the SOTA SwinIR (Liang et al., 2021), a transformer architecture. We primarily consider the U-Net since it is widely used for image reconstruction and acts as a building block in SOTA models for image denoising (Brooks et al., 2019; Gurrola-Ramos et al., 2021; Zhang et al., 2021; 2022) and accelerated MRI (Zbontar et al., 2018; Sriram et al., 2020). We also present results for denoising with the SwinIR. The SwinIR outperforms the U-Net, but we find that its scaling with the number of training examples does not differ notably. Our contributions are as follows:
• Empirical scaling laws for denoising. We train U-Nets of sizes 0.1M to 46.5M parameters with training set sizes from 100 to 100k images from the ImageNet dataset (Russakovsky et al., 2015) for Gaussian denoising with 20.17dB Peak-Signal-to-Noise ratio (PSNR). While for the largest training set sizes and network sizes we consider, performance continues to increase, the rate of performance increase for training set sizes beyond a few thousand images slows to a level indicating that even training on millions of images only yields a marginal benefit, see Fig. 1(a).
We also train SOTA SwinIRs of sizes 4M to 129M parameters on the same range of training set sizes, see Fig. 2(a). Although the SwinIR performs better than the U-Net, its scaling behavior is essentially equivalent, and again beyond a few thousand images, only marginal benefits are expected. Scaling up its training set and network size did, however, give benefits: the largest model we trained yields new SOTA results on four common test sets, improving by 0.05 to 0.22dB.
• Empirical scaling laws for compressive sensing. We train U-Nets of sizes from 2M to 500M parameters for 4x accelerated MRI with training set sizes from 50 to 50k images from the fastMRI dataset (Knoll et al., 2020b). We again find that beyond a dataset size of about 2.5k, the rate of improvement as a function of the training set size considerably slows, see Fig. 1(c). This indicates that while models are expected to improve by increasing the training set size, we expect that training on millions of images does not significantly improve performance.
• Empirical scaling laws for super-resolution. We also briefly study super-resolution, and similarly as for denoising and compressive sensing find a slowing of the scaling law at moderate dataset sizes. See appendix C.
• Understanding scaling laws for denoising theoretically. Our empirical results indicate that the denoising performance of a neural network trained end-to-end doesn’t increase as a function of training examples beyond a certain point. This is expected since once we reach a noise specific error floor, more data is not beneficial. We make this intuition precise for a linear estimator learned with early stopped gradient descent to denoise data drawn from a d-dimensional linear subspace. We show that the reconstruction error is upper bounded by d/N plus a noise-dependent error floor level. Once the error induced by learning the signal model, d/N , is small relative to the error floor, more training examples N are not beneficial.
Together, our empirical results show that neural networks for denoising, compressive sensing, and super-resolution applied in typical setups (i.e, Gaussian denoising with 20.17dB PSNR, and multi-coil accelerated MRI with 4x acceleration) already operate in a regime where the scaling laws for the training set size are slowing significantly and thus even very large increases of the training data are not expected to improve performance substantially. Even relatively modest model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
2 RELATED WORK
Scaling laws for prediction problems. Under the umbrella of statistical learning theory, convergence rates of $1/N$ or $1/\sqrt{N}$ have been established for a range of relatively simple models and distributions; see e.g. Wainwright (2019) for an overview.
For deep neural networks used in practice, a recent line of work has empirically characterized the performance as a function of training set size and/or network size for classification and NLP (Hestness et al., 2017; Rosenfeld et al., 2019; Kaplan et al., 2020; Bahri et al., 2021; Zhai et al., 2021; Ghorbani et al., 2021; Bansal et al., 2022). In those domains, the scaling laws persist even for very large datasets, as described in more detail below. In contrast, for image reconstruction, we find that the power-law behavior already slows considerably at relatively small numbers of training examples.
Rosenfeld et al. (2019) find power-law scaling of performance with training set and network size across models and datasets for language modeling and classification. However, because the work fixes either training set or network size, while scaling the other, the scaling laws span only moderate ranges before saturating at a level determined by the fixed quantity and not the problem specific error floor. The papers Kaplan et al. (2020); Bahri et al. (2021); Zhai et al. (2021); Ghorbani et al. (2021); Bansal et al. (2022) including ours scale the dataset size and model size simultaneously, resulting in an improved predictive power of the obtained scaling laws.
Hestness et al. (2017) study models for language and image classification, and attribute deviations from a power-law curve to a lack of fine-tuning the hyperparameters of very large networks. For transformer language models Kaplan et al. (2020) find no deviation from a power-law for up to a training set and network size of 1B images and parameters. Further, Zhai et al. (2021) find the performance for Vision Transformers (Dosovitskiy et al., 2020) for few-shot image classification to deviate from a power-law curve only at extreme model sizes of 0.3-1B parameters and 3B images.
The role of training set size in inverse problems. For image reconstruction and inverse problems in general, we are not aware of work studying scaling laws in a principled manner, covering different
problems and model architectures. But, when proposing a new method several works study performance as a function of training set and/or network size. However, those studies typically only scale the parameter of interest while fixing all other parameters, unlike our work, which scales dataset size and network size together, which is important for identifying scaling laws. Below, we review the training set sizes, and if available their scaling properties, used by the recent SOTA in image denoising and accelerated MRI.
Zhang et al. (2017) report that for DnCNN, a standard CNN, using more than 400 distinct images with data augmentation only yields negligible improvements. Chen et al. (2021) pre-train an image processing transformer (IPT) of 115.5M network parameters on ImageNet (1.1M distinct images), and report the performance after fine-tuning to a specific task as a function of the size of the pre-training dataset. IPT's performance for denoising is surpassed by the latest SOTA in the form of the CNN-based DRUnet (Chen et al., 2021), the transformer-based SwinIR (Liang et al., 2021) and the Swin-Conv-Unet (Zhang et al., 2022), a combination of the two. Those models have significantly fewer network parameters and were trained on a training set consisting of only ∼10k images, leaving the role of training set size in image denoising open.
The number of available training images for accelerated MRI is limited to few publicly available datasets. Hence, current SOTA (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022) are trained on the largest datasets available (the fastMRI dataset), consisting of 35k images for knee and 70k for brain reconstruction (Zbontar et al., 2018; Muckley et al., 2021).
Adaptations of the MLP-Mixer (Tolstikhin et al., 2021) and the Vision Transformer (Dosovitskiy et al., 2020) to MRI (Mansour et al., 2022; Lin & Heckel, 2022) ablate their performance as a function of the training set size and find that non-CNN based approaches require more data to achieve a similar performance, but potentially also benefit more from even larger datasets. However, those experiments are run with fixed model sizes and only 3-4 realizations of training set size thus it is unclear when performance saturates.
3 EMPIRICAL SCALING LAWS FOR DENOISING
In this section, we consider the problem of estimating an image x from a noisy observation y = x+z, where z is additive Gaussian noise z ∼ N (0, σ2zI). Neural networks trained end-to-end to map a noisy observation to a clean image perform best, and outperform classical approaches like BM3D (Dabov et al., 2007) and non-local means (Buades et al., 2005) that are not based on training data.
Datasets. To enable studying learned denoisers over a wide range of training set sizes we work with the ImageNet dataset (Russakovsky et al., 2015). Its training set contains 1.3M images of 1000 different classes. We reserve 20 random classes for validation and testing. We design 10 training set subsets SN of sizes N ∈ [100, 100000] (see Fig. 1(a)) and with Si ⊆ Sj for i ≤ j. To make the distributions of images between the subsets as homogeneous as possible, the number of images from
different classes within a subset differ by at most one. We generate noisy images by adding Gaussian noise with standard deviation σz = 25 (pixels in the range from 0 to 255), i.e. 20.17dB in PSNR.
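As a concrete reference for this noise model, the following NumPy sketch adds the noise and computes the PSNR of the noisy input; for σz = 25 the expected input PSNR is 20 log10(255/25) ≈ 20.17dB.

```python
import numpy as np

def add_gaussian_noise(x, sigma=25.0, rng=np.random.default_rng(0)):
    # x: clean image with pixel values in [0, 255]
    return x + rng.normal(0.0, sigma, size=x.shape)

def psnr(reference, estimate, data_range=255.0):
    mse = np.mean((reference - estimate) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

x = np.full((256, 256), 128.0)          # toy constant image
print(psnr(x, add_gaussian_noise(x)))   # close to 20 * log10(255 / 25) = 20.17dB
```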
Model variants and training. We train a CNN-based U-Net, a detailed description of the model architecture is in Appx. A.1. We also train the Swin-Transformer (Liu et al., 2021) based SwinIR model (Liang et al., 2021) that achieves SOTA for a variety of image reconstruction tasks, including denoising. This model is interesting since transformers scale well with the number of training examples for other domains, e.g., for image classification pre-training (Dosovitskiy et al., 2020).
For U-Net and SwinIR we vary the number of network parameters as depicted in Fig. 1 (b) and Fig. 2 (b) respectively. Appx. A.1 and A.3 contain detailed descriptions on how the models are trained and the network size is adapted.
Results and discussion. Fig. 1(a) and Fig. 2(a) show the reconstruction performances of the best U-Net and SwinIR respectively over all considered network sizes. Our main findings are as follows.
A training set size of around 100 is already sufficient to train a decent image denoiser. Beyond 100, we can fit a linear power law to the performance of the U-Net with a scaling coefficient α = 0.0048 that approximately holds up to training set sizes of about 6k images. Beyond 6k, we can fit a second linear power law with significantly smaller scaling coefficient α = 0.0019.
Similarly, for the SwinIR we can fit a power law with coefficient α = 0.0050 in the regime of little training data and a power law with significantly smaller coefficient α = 0.0017 in the regime of moderate training data, thus the two architectures scale essentially equivalently.
While denoising benefits from more training examples, the drop in scaling coefficient indicates that scaling from thousands to millions of images is only expected to yield relatively marginal performance gains. While the gains are small and we expect gains from further scaling to be even smaller, our largest SwinIR trained on 100k images yields a new SOTA for Gaussian color image denoising. On four common test sets it achieves an improvement between 0.05 to 0.22dB, see Appx. B. Visualizations of reconstructions from models along the curves in Fig. 1(a) and Fig. 2(a) can be found in Fig. 4, Appx. A.
Comparing the U-Net scaling to SwinIR scaling (Fig. 2(a)) we see that the effect of large training sets on the performance of the U-Net can not make up for the improved modeling error that stems from the superior network architecture of the SwinIR. In fact, the simulated scaling coefficient α = 0.0019 predicts a required training set size of about 150M images for U-Net to achieve the best performance of the SwinIR, and that is an optimistic prediction since it assumes that the scaling does not slow down further for another 3 orders of magnitude. Thus, model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
An interesting question is whether different noise levels lead to different data requirements, as indicated by a different scaling behavior. We investigate the performance of U-Net for a smaller noise standard deviation σz = 15 in Appx. D.2. While the results are still preliminary, the qualitative scaling behavior is similar; the differences are that the scaling law pertaining to smaller noise is slightly steeper, and overall the performance improves by about 1.3dB in PSNR.
Finally, note that for our results we choose to re-sample the noise per training image in every training epoch, since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is more similar to a setup in which the noise statistics are unknown like in real-world noise removal. See Appx. D.1 for additional results where the noise is fixed. We found that compared to re-sampling the noise the performance of a U-Net drops by 0.3dB for small training set sizes. As the training set size increases the performance approaches the one of re-sampling the noise resulting in a steeper scaling law that flattens at slightly larger training set sizes.
Robustness of our finding of slowing scaling laws. Our main finding for denoising is that the scaling laws slow at relatively small amounts of training data points (i.e., the scaling coefficient becomes very small). We next argue why we think that this is a robust finding.
In principle it could be that the networks are not sufficiently large for the performance to improve further as the dataset size increases. However, for the U-Net and training set sizes of 10k and 30k the
performance increase already slows, and for those training set sizes, increasing the network size does not improve performance, as shown in Fig. 1(b). For smaller network sizes the curves in Fig. 1(b) first increase and then decrease indicating that we found network sizes close to the optimum.
The parameter scaling of the SwinIR in Fig. 2(b) indicates that for training set sizes of 10k and 100k the performance still benefits from larger networks. Hence, the power law for those training set sizes in Fig. 2(a) can become slightly steeper once we further increase the network size. However, for the slope of the flat power law in (a) to reach the slope of the steep power law, the parameter scaling in (b) for 100k training images would need to hold for another 2 orders of magnitude, i.e. about 12B instead of the currently used 129M network parameters, which is very unlikely considering that the current network sizes already suffice to saturate the performance for small/moderate training set sizes.
Finally, it could be that with higher quality or higher diversity of the data, the denoising performance would further improve. In our experiments, we increase the training set sizes by adding more images from the same classes of ImageNet, and we continue to test on different classes. Thus, it could be that adding training examples from a different, more diverse data source leads to less of a slowing of the scaling law in Fig. 1(a). We test this hypothesis by scaling our training set with 3k images to 10k images not by adding more images from ImageNet but images from the datasets that are used to train the original SwinIR, i.e., we add all images from DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017) and BSD500 (Arbeláez et al., 2011) and add the remaining images from WED (Ma et al., 2017). Keeping all other hyperparameters fixed the U-Net obtains for this dataset a PSNR of 32.239dB, which is slightly worse than the 32.252dB it achieves with our training set with 10k images only from ImageNet. Hence, we reject the hypothesis that the drop in scaling coefficients can be explained with how the training sets are designed in our experiment.
4 EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING
We consider compressive sensing (CS) to achieve 4x accelerated MRI. MRI is an important medical imaging technique for its non-invasiveness and high accuracy. Due to the physics of the measurement process, MRI is inherently slow. Accelerated MRI aims at significantly reducing the scan time by undersampling the measurements. In addition, MRI scanners rely on parallel imaging, which uses multiple receiver coils to simultaneously collect different measurements of the same object.
This leads to the following reconstruction problem. We are given measurements yi ∈ Cm of the image x ∈ Cn as yi = MFSix+ noisei for i = 1, ..., C, and our goal is to reconstruct the image from those measurements. Here, C is the number of receiver coils, the diagonal matrices Si ∈ Cn×n are the sensitivity maps modelling the signal strength perceived by the i-th coil, F ∈ Cn×n is the discrete Fourier transform (DFT), and M ∈ Cm×n is a binary mask implementing the undersampling. Classical CS approaches (Lustig et al., 2008) first estimate the sensitivity maps with a method such as ESPIRiT (Uecker et al., 2014), and second estimate the unknown image by solving a regularized optimization problem, such as total-variation norm minimization. Recently, deep learning based methods have been shown to significantly outperform classical CS methods due to their ability to learn more complex and accurate signal models from data (Knoll et al., 2020a; Muckley et al., 2021).
Datasets. To explore the performance of learning based MR reconstruction over a wide range of training set sizes, we use the fastMRI multi-coil brain dataset (Knoll et al., 2020b), which is the largest publicly available dataset for MRI. The dataset consists of images of different contrasts. To ensure that the statistics of the dataset are as homogeneous as possible, we take the subset of the dataset corresponding to a single contrast resulting in 50k training images. We design training set subsets SN of size N ∈ [50, 50000] (see Fig. 1 (c)) with Si ⊆ Sj for i ≤ j. For more information on selection, division and subsampling of the dataset see Appx. A.2.
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with 4 blocks per encoder/decoder. The network is trained end-to-end to map a coarse reconstruction x̂CR to the ground truth image x. The coarse reconstruction is obtained as

$$\hat{x}_{\mathrm{CR}} = \Big(\sum_{i=1}^{C} |F^{-1} y_{i,\mathrm{ZF}}|^2\Big)^{1/2},$$

where $F^{-1}$ is the inverse DFT, and $y_{i,\mathrm{ZF}}$ are the undersampled measurements of the i-th coil, where missing entries are filled with zeros. We vary the number of network parameters as depicted in Fig. 1 (d). Appx. A.2 contains detailed descriptions on how the model is trained and the network size is adapted.
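For reference, a NumPy sketch of this coarse root-sum-of-squares reconstruction is given below; it ignores the fftshift conventions used in the fastMRI codebase.

```python
import numpy as np

def coarse_reconstruction(y_zero_filled):
    """Root-sum-of-squares of the per-coil zero-filled inverse DFTs.

    y_zero_filled: complex array of shape (C, H, W) holding the undersampled
    k-space of each of the C coils, with unmeasured entries set to zero."""
    coil_images = np.fft.ifft2(y_zero_filled, axes=(-2, -1))
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```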
Results and discussion. For each training set size Fig. 1 (c) shows the reconstruction performance in structural similarity (SSIM) of the best model over all simulated network sizes. There are 2 main findings. First, we can fit a linear power law with a scaling coefficient α = 0.0072 that holds up to training set sizes of about 2.5k images. Second, for training set sizes starting from 5k we can fit a second linear power law but with significantly smaller scaling coefficient α = 0.0026.
Similar to denoising in Section 3 we conclude that while accelerated MRI slightly benefits from more training examples, the drop in scaling coefficient indicates that it is unlikely that even training set sizes of the order of hundreds of millions of images result in substantial gains. Visualizations of reconstructions from models along the curve in Fig. 1 (c) can be found in Fig. 5, Appx. A.2.
Robustness of our finding of slowing scaling laws. Next, we argue why the drop in scaling coefficient for accelerated MRI is a robust finding.
In Fig. 1 (d) we demonstrate that the network size does not bottleneck the performance of our largest training sets. For each training set size the performance as a function of the number of network parameters is relatively flat. Even small training sets benefit from large networks before their performance slightly decays for very large networks. Hence, we expect that for large training sets the performance as a function of network parameters would not increase significantly before decaying.
Finally, is there a different type of training data that would improve the scaling coefficient? We don’t think so, since all experiments are for the very specific task of reconstructing brain images of one particular contrast, which means that the examples that we add to increase the size of the training sets is already the data most suitable to solve this task.
5 UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY
We study a simple linear denoiser that is trained end-to-end to reconstruct a clean image from a noisy observation, in order to understand how the error of learned estimators for inverse problems is expected to behave as a function of the number of training examples.
We define a joint distribution over a signal and the corresponding measurement (x,y) as follows. Consider a d < n dimensional subspace of Rn parameterized by the orthonormal basis U ∈ Rn×d. We draw a signal approximately uniformly from the subspace x = Uc, where c ∼ N (0, I), and an associated noisy measurement as y = x+ z, where z ∼ N (0, σ2zI) is Gaussian noise. The subspace is unknown, but we are given a training set {(x1,y1), . . . , (xN ,yN )}, consisting of examples drawn iid from the joint distribution over (x,y).
We assume that the signal lies in a low dimensional subspace. Assuming that data lies in a lowdimensional subspace or more general, a union of low-dimensional subspaces, is common, for example it underlies the denoising of natural images via wavelet thresholding (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). Mohan et al. (2020) found that even deep learning based denoisers implicitly perform a projection onto an adaptively-selected low-dimensional subspace that captures the features of natural images.
We consider a linear estimator of the form $f_W(y) = Wy$, and measure performance in terms of the expected mean-squared reconstruction error, defined as $R(W) = \frac{1}{d}\,\mathbb{E}\left[\|Wy - x\|_2^2\right]$, where the expectation is over the joint distribution of $(x, y)$.
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk R) is given by $W^* = \frac{1}{1+\sigma_z^2}\, UU^T$. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(W^*) = \sigma_z^2/(1 + \sigma_z^2)$.
An estimator based on subspace estimation. The optimal estimator $W^*$ requires knowledge of the unknown subspace. We can learn the subspace from the noisy data by performing principal component analysis on $Y = [y_1, \ldots, y_N]$. Specifically, we estimate the subspace as the d leading singular vectors of the empirical covariance matrix $YY^T \in \mathbb{R}^{n\times n}$, denoted by $\hat{U} \in \mathbb{R}^{n\times d}$. If the measurement noise is zero (i.e., $\sigma_z^2 = 0$), $\hat{U}$ is an orthonormal basis for the d-dimensional subspace $U$ provided we observe at least d linearly independent signals, which occurs with probability one if we draw data according to the model defined above and if the number of training examples obeys $N \ge d$. There is a vast literature on PCA; see Vershynin (2011; 2018); Rudelson & Vershynin (2010); Tropp (2012) for tools from high-dimensional probability to analyze the PCA estimate.
Now consider the estimator $W_{\mathrm{PCA}} = \frac{1}{1+\sigma_z^2}\,\hat{U}\hat{U}^T$ based on N noisy training points. The estimator assumes knowledge of the noise variance, but it is not difficult to estimate it relatively accurately. The following result, proven in Appx. F, characterizes the associated risk.

Theorem 1. Suppose that the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N$. For a numerical constant c, with probability at least $1 - n^{-10} - 3e^{-d} + e^{-n}$, the risk of the PCA-estimate is bounded by

$$R(W_{\mathrm{PCA}}) \le R(W^*) + c\,(d + n\sigma_z^2)\log(n)/N.$$

Thus, as long as the number of training examples N is sufficiently large relative to $(d + n\sigma_z^2)$, the risk of the PCA-estimator is close to the risk of the optimal estimator.
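The behavior described by Theorem 1 is easy to reproduce numerically. The following sketch simulates the subspace model and the PCA-based estimator for arbitrary illustrative problem sizes and compares the empirical risk to the optimal risk $\sigma_z^2/(1+\sigma_z^2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_z, N = 100, 10, 0.4, 1000   # arbitrary illustrative sizes

# Random d-dimensional subspace and noisy training data y = Uc + z
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
X = U @ rng.standard_normal((d, N))
Y = X + sigma_z * rng.standard_normal((n, N))

# PCA: d leading left singular vectors of Y (eigenvectors of Y Y^T)
U_hat = np.linalg.svd(Y, full_matrices=False)[0][:, :d]
W_pca = U_hat @ U_hat.T / (1 + sigma_z ** 2)

# Empirical risk R(W) = E||Wy - x||^2 / d on fresh samples vs. the optimal risk
X_t = U @ rng.standard_normal((d, 10000))
Y_t = X_t + sigma_z * rng.standard_normal((n, 10000))
risk = np.mean(np.sum((W_pca @ Y_t - X_t) ** 2, axis=0)) / d
print(risk, sigma_z ** 2 / (1 + sigma_z ** 2))
```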
Estimator learned end-to-end. We now consider an estimator learned end-to-end, by applying gradient descent to the empirical risk $L(W) = \sum_{i=1}^{N} \|Wy_i - x_i\|_2^2$, and we regularize via early stopping the gradient descent iterations. The risk of the estimate after k iterations of gradient descent, $W^k$, is bounded by the next result. This estimator mimics the supervised training we consider throughout this section, with the difference that here we consider a simple linear model, whereas in the previous sections we trained a neural network.

Theorem 2. Let $W^k$ be the matrix obtained by applying k iterations of gradient descent with stepsize $\eta$ starting at $W^0 = 0$ to the loss $L(W)$. Consider the regime where the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N \le \xi d/\sigma_z^2$ for an arbitrary $\xi$ and $N \log(N) \le n$. For an appropriate choice of the stepsize $\eta$ there exists an optimal early stopping time $k_{\mathrm{opt}}$ at which the risk of the estimator $f_W(y) = W^{k_{\mathrm{opt}}} y$ is upper-bounded with probability at least $1 - 2e^{-N/8} - 2e^{-N/18} - 5n^{-9} - 5e^{-d} - 2e^{-n} - e^{-N} - 2e^{-n/2}$ by

$$R(W^{k_{\mathrm{opt}}}) \le (8 + 2\xi)\, R(W^*) + c \left(1 + \frac{n\sigma_z^2}{d}\right) \log n \,\frac{(d + n\sigma_z^2)\log(n)}{N} + c\,\xi \sqrt{\frac{(d + \sigma_z^2 n)\log n}{N}},$$

where c is a numerical constant.
The proof is provided in Appx. G. Similar to Thm. 1, the first term in the bound corresponds to the noise-dependent error floor, and the two remaining terms decrease in $(d + n\sigma_z^2)/N$ and represent the error induced by learning the estimator with a finite number of training examples. Once this error is small relative to the error floor, more training examples do not improve performance. See Appx. E for a more detailed discussion of the estimator and the assumptions of Thm. 2.
Discussion. In Fig. 3, we plot the risk as a function of the training set size for the early-stopped empirical risk minimization (ERM) and the PCA-based estimator. Starting at a training set size of N = 100, we see that the risk minus the optimal risk follows approximately a power law, i.e., $\log(R(W) - R(W^*)) \approx -\alpha \log(N)$, as suggested by Thms. 1 and 2. In practice, however, we do not know the risk of the optimal estimator, $R(W^*)$. Therefore, in the second row of Fig. 3 and throughout our empirical results we plot the risk $\log(R(W))$ as a function of the number of training examples $\log(N)$ and distinguish different regions by fitting approximate scaling coefficients $\alpha$ to $\log(R(W)) \approx -\alpha \log(N)$. In our theoretical example, we can identify the three regions coined in Hestness et al. (2017): the small data region, in which the training set size does not suffice to learn a well-performing mapping; the power-law region, in which the performance decays approximately as $N^{-\alpha}$ with $\alpha > 0$; and a problem-specific irreducible error region, which is $R(W^*)$ here. Note that the power-law coefficients in the risk plot of $R(W)$ are smaller than those obtained when plotting $R(W) - R(W^*)$, and are only approximate scaling coefficients. Already in this highly simplified setup of an inverse problem the true scaling coefficients heavily depend on model parameters such as the noise variance, as can be seen in Fig. 3. The scaling coefficients further vary with the signal dimension d and ambient dimension n (see Appx. E.1).
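Continuing the simulation sketch from above, the early-stopped estimator of Theorem 2 can be mimicked by running gradient descent on the empirical risk and recording the best risk over iterations; for simplicity the stopping time is chosen with the test risk here, whereas in practice one would use a held-out validation set.

```python
# Gradient descent on L(W) = sum_i ||W y_i - x_i||^2, reusing n, d, X, Y,
# X_t, Y_t from the sketch above; the factor 2 of the gradient is absorbed
# into the stepsize.
eta = 0.5 / np.linalg.norm(Y, 2) ** 2     # stepsize below 1/||Y||^2 for stability
W = np.zeros((n, n))
risks = []
for k in range(500):
    W -= eta * (W @ Y - X) @ Y.T          # one gradient descent step
    risks.append(np.mean(np.sum((W @ Y_t - X_t) ** 2, axis=0)) / d)
print(min(risks))                         # risk at the (oracle) early stopping time
```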
6 CONCLUSION
We found that the performance improvement of deep learning based image reconstruction as a function of the number of training examples slows already at moderate training set sizes, indicating that only marginal gains are expected beyond a few thousand examples.
Limitations. This finding is based on studying three different reconstruction problems (denoising, super-resolution, and compressive sensing), two architectures (U-net and SwinIR), and for each setup we extensively optimized hyperparameters and carefully scaled the networks. Scaling gave new state-of-the-art results on four denoising datasets, which provides some confidence in the setup.
Moreover, our statements necessarily pertain to the architectures and metrics we study. It is possible that other architectures or another scaling of architectures can yield larger improvements when scaling architectures. It is also widely acknowledged that image quality metrics (such as SSIM and PSNR) do not fully capture the perceived image quality by humans, and it could be that more training data yields improvements in image quality that are not captured well by SSIM and PSNR.
Most importantly, our findings pertain to standard in-distribution evaluation, i.e., the test images are from the same distribution as the training images. To achieve robustness against testing on images out of the training distribution recent work indicates a positive effect of larger and more diverse training sets (Miller et al., 2021; Darestani et al., 2022; Nguyen et al., 2022). Thus it is possible that the out-of-distribution performance of image reconstruction methods improves when trained on a larger and a more diverse dataset, even though the in-distribution performance does not improve further.
Future research. We focus on supervised image reconstruction methods. For image denoising and other image reconstruction problems, self-supervised approaches also perform very well (Laine et al., 2019; Wang et al., 2022; Zhou et al., 2022), even though supervised methods perform best if clean images are available. It would be very interesting to investigate the scaling laws for self-supervised methods; perhaps more training examples are required to achieve the performance of a supervised setup. We leave this for future work.
REPRODUCIBILITY
The repository at https://github.com/MLI-lab/Scaling_Laws_For_Deep_ Learning_Based_Image_Reconstruction contains the code to reproduce all results in the main body of this paper.
ACKNOWLEDGMENTS
The authors acknowledge support by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the DAAD, and the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
A DETAILS OF THE EXPERIMENTAL SETUPS
A.1 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a U-Net presented in Fig.1 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 1 (a). In these examples the improvement in perceived image quality from increasing the training set size from 100 to 1000 is larger than from increasing from 1000 to 10000 or from 10000 to 100000. This correlates with our quantitative findings in Fig. 1 (a).
Next, we describe the experimental details. We train U-Nets with two blocks in the encoder and decoder part respectively and skip connections between blocks. Each block consists of two convolutional layers with LeakyReLU activation and instance normalization (Ulyanov et al., 2017) after every layer, where the number of channels is doubled (halved) after every block in the encoder (decoder). The downsampling in the encoder is implemented as average pooling and the upsampling in the decoder as transposed convolutions. As proposed by Zhang et al. (2017) we train a residual denoiser that learns to predict y − x instead of directly predicting x, which improves performance. We scale the network size by increasing the width of the network by scaling the number of channels per (transposed) convolutional layer. For denoising we trained U-Nets of 7 different sizes. We vary the number of channels in the first layer in {16, 32, 64, 128, 192, 256, 320}, which corresponds to {0.1, 0.5, 1.9, 7.4, 16.7, 30.0, 46.5} million parameters. We do not vary the depth, since this would change the dimension of the informational bottleneck in the U-Net and thus change the model family itself.
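A minimal PyTorch sketch of this residual training objective is shown below; `model` stands for the U-Net described above, and the noise is re-sampled in every call, matching the training setup used here.

```python
import torch

def denoising_step(model, x_clean, sigma=25.0):
    """One residual-learning step: the network predicts the noise y - x."""
    noise = sigma * torch.randn_like(x_clean)     # noise re-sampled every epoch
    y = x_clean + noise
    pred_noise = model(y)
    loss = torch.mean((pred_noise - noise) ** 2)  # MSE on the residual
    return loss, y - pred_noise                   # loss and denoised estimate
```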
The exact training set sizes we consider are {0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60, 100} thousand images from ImageNet. We center crop the training images to 256 × 256 pixels. We also tried a smaller patch size of 128× 128 pixels, but larger patches showed to have a better performance in the regime of large training set sizes, which is why the results presented in the main body are for patch size 256× 256. For a comparison between the scaling behavior of the two different patch sizes see Fig. 9, Appx. D.3.
We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. For validation and testing we use 80 and 300 images respectively taken from 20 random classes that are not used for training.
We use mean-squared-error loss and Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999. For moderate training set sizes up to 3000 images we find that heuristically adjusting the initial learning rate with the help of an automated learning rate annealing performs well. To this end, we start with a learning rate of 10−4 and increase after every epoch by a factor of 2 until the validation loss does not improve for 3 consecutive epochs. We then load the model checkpoint from the learning rate that was still performing well and continue with that learning rate. We observe that this scheme typically picks an initial learning rate reduced by factor 2 for every increase in the number of channels C. For training set sizes larger than 3000 we directly apply this rule to pick an initial learning rate without annealing as we found this to give slightly better results. In particular, for number of channels C = {128, 192, 256, 320} we used initial learning rates η = {0.0032, 0.0016, 0.0008, 0.0004}. For small training sets up to 1000 images we found that batch size of 1 works best. For larger training sets we use a batch size of 10 and found that further increasing the batch size does not improve performance.
We do not put a limit on the amount of compute for training. We use an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 10 epochs or 6 epochs for training set sizes starting from 6000 images. Once the learning rate drops to 10−5 we observe near to no gains in validation loss and stop the training after 10 additional epochs.
For training set sizes up to 1000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1800 GPU hours for the experiments in Fig. 1(a),(b).
A.2 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for compressive sensing MRI presented in Fig.1 (c),(d) and Section 4.
In addition, Fig. 5 shows examples of reconstructions from different models along the performance curve in Fig. 1 (c). In these examples the improvement in perceived image quality from increasing the training set size from 500 to 2500 is larger than from increasing from 2500 to 10000 or from 10000 to 50000. This correlates with our quantitative findings in Fig. 1 (c).
The first row in Fig. 5 shows an example in which all models, including the one trained on the largest training set, fail to recover a fine detail. This is a known problem in accelerated MRI and has been documented in both editions of the fastMRI challenge (Knoll et al., 2020a; Muckley et al., 2021), in which, for all methods, examples could be found in which fine details have not been recovered. However, the question remains whether more training data or better models would help, or whether the details are simply not recoverable since the information is lost due to the large undersampling factors considered in the fastMRI challenges and in this work.
Next, we describe the experimental details. For compressed sensing in the context of accelerated MRI we trained U-Nets of 14 different sizes. We vary the number of channels in the first layer in {16, 32, 48, 64, 96, 112, 128, 144, 160, 176, 192, 208, 224, 256}, which corresponds to {2, 8, 18, 31, 70, 95, 124, 157, 193, 234, 279, 327, 380, 496} million network parameters. The exact training set sizes we consider are {0.05, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50} thousand AXT2 weighted images from fastMRI multi-coil brain dataset (Zbontar et al., 2018), where AXT2 corresponds to all images of one type of contrast. We focused on images only from this type to make the statistics of our datasets as homogeneous as possible. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. We use 4732 and 730 additional images for testing and validation.
We consider an acceleration factor of 4, meaning that we only measure 25% of the information. We obtain the 4-times undersampled measurements by masking the fully sampled measurement y with an equispaced mask with an 8% center fraction, meaning that in the center of y we take all measurements and take the remaining ones at equispaced intervals. A sketch of such a mask is given after this paragraph.
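The sketch below follows the fastMRI convention up to rounding; it is valid as long as center_fraction · acceleration < 1.

```python
import numpy as np

def equispaced_mask(num_cols, acceleration=4, center_fraction=0.08):
    """Binary column mask: a fully-sampled low-frequency band plus
    equispaced columns reaching roughly 1/acceleration overall sampling."""
    mask = np.zeros(num_cols, dtype=bool)
    num_center = round(num_cols * center_fraction)
    pad = (num_cols - num_center) // 2
    mask[pad:pad + num_center] = True            # fully sampled center of k-space
    spacing = round(acceleration * (1 - center_fraction)
                    / (1 - center_fraction * acceleration))
    mask[::spacing] = True                       # equispaced columns elsewhere
    return mask

print(equispaced_mask(320).mean())               # close to 1 / 4
```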
We use structural similarity (SSIM) loss and RMSprop optimizer with α = 0.99 as this is the default in the fastMRI repository (Zbontar et al., 2018) and we found no improvement by replacing it with Adam. We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that starts at a learning rate of 10−3 and decays by a factor 0.1 if the validation SSIM has not improved by at least 10−4 for 5 epochs. Once the learning rate drops to 10−6 we stop the training after 10 additional epochs. Only for the largest training set sizes 25k,50k we found that an additional drop to 10−7 resulted in further performance gains. We use a batch size of 1.
For training set sizes up to 5k we train three models with random seeds and pick the best. For larger training set sizes we only run one seed, since the variance between runs decreased.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 4250 GPU hours for the experiments in Fig. 1 (c),(d).
A.3 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH THE SWINIR
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a SwinIR presented in Fig.2 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 2 (a). Despite the best SwinIR clearly outperforming the best U-Net in terms of PSNR,
it is difficult for the naked eye to notice large differences in the quality of the reconstructions. This indicates that for Gaussian denoising both models already operate in a regime, where improvements to be made are only marginal.
Next, we describe the experimental details. To obtain Fig. 2 we train the SwinIR for color denoising from Liang et al. (2021) on the same training sets mentioned in Appx. A.1 with {0.1, 0.3, 1, 10, 100} thousand images from ImageNet. Instead of center cropping the training images to 256 × 256 we have to crop to 128 × 128 pixels, as larger input patches would make it computationally infeasible for us to train large versions of the SwinIR. The largest SwinIR alone took over 2 months to train on 4 NVIDIA A40 GPUs.
We train 4 different network sizes with {3.7, 11.5, 41.8, 128.9} million parameters. We denote the four network sizes as small(S)/middle(M)/large(L)/huge(H).
The training details and network configurations are as follows. The default SwinIR for denoising (Liang et al., 2021) was proposed for a training set size of about 10k images and 11.5M network parameters, and was trained with a batch size of 8 for T = 1280 epochs, where the learning rate is halved after [0.5T, 0.75T, 0.875T, 0.9375T] epochs. We keep the learning rate schedule but adjust the maximal number of epochs T according to the training set size. Table 1 shows the batch size and maximal number of epochs for every experiment in Fig. 2. We did not optimize over the choice of the batch size but picked it as prescribed by the availability of computational resources.
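A schedule of this form can be expressed with PyTorch's MultiStepLR; a sketch with a placeholder model and optimizer settings (see Liang et al. (2021) for the exact SwinIR configuration):

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)   # stand-in for the SwinIR
T = 1280                                      # maximal number of epochs
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[int(0.5 * T), int(0.75 * T), int(0.875 * T), int(0.9375 * T)],
    gamma=0.5)                                # halve the learning rate

for epoch in range(T):
    # ... one training epoch ...
    scheduler.step()
```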
See Liang et al. (2021) for a detailed description of the SwinIR network architecture. We vary the network size by adjusting the number of residual Swin Transformer blocks, the number of Swin Transformers per block, the number of attention heads per Swin Transformer, the number of channels in the input embedding and the width of the fully connected layers in a Swin Transformer. Table 2 contains a summary of the settings. When scaling up the network size, we invested in the parameters that seemed to be most promising in the ablation studies in Liang et al. (2021).
The experiments were conducted on four NVIDIA A40 and four NVIDIA RTX A6000 GPUs. We used about 13000 GPU hours for the transformer experiments in Fig. 2. For training the models in parallel on multiple GPUs we utilize the torch.distributed package with the gloo backend instead of the faster nccl backend, which was unfortunately not available on our hardware at that time.
B BENCHMARKING OUR MODELS FOR GAUSSIAN IMAGE DENOISING
In this Section, we evaluate the models we trained for image denoising in Section 3 on four common test sets from the literature and show that the largest SwinIR trained on the largest dataset achieves a new SOTA for all four test sets and the considered noise level.
In Fig. 2 in the main body we compared the performance of the U-Nets and the SwinIRs trained on subsets of ImageNet for image denoising. The models are evaluated on a test set sampled from ImageNet. We observed that while SwinIRs significantly outperform U-Nets, the performance gain from increasing the training set size slows already at moderate training set sizes for both architectures alike. However, there is still a moderate performance gain from scaling the models, and thus we expect the largest SwinIR trained on the largest dataset to outperform the original SwinIR from Liang et al. (2021). Our results in this section show that this is indeed the case.
In Table 3 we evaluate on the standard test sets for Gaussian color image denoising: CBSD68 (Martin et al., 2001), Kodak24 (Franzen, 1999), McMaster (Zhang et al., 2011) and Urban100 (Huang et al., 2015). We observe a significant performance difference between the best U-Net (46.5M parameters, 100k training images) and the transformer-based methods. As expected, our largest SwinIR trained on the largest dataset, SwinIR 100/H, outperforms the original SwinIR, but also the SCUnet (Zhang et al., 2022), a later SOTA model that has been demonstrated to outperform the original SwinIR.
We also depict the gains for the SwinIR from scaling up the network size alone, and then from scaling up both network size and training set size. While this led to a new SOTA for Gaussian image denoising, note that, on the downside, training the SwinIR 10/M, which is comparable to the original SwinIR, took about 2 weeks on 4 NVIDIA A40 GPUs, while training the SwinIR 100/H took over 2 months.
C EMPIRICAL SCALING LAWS FOR IMAGE SUPER-RESOLUTION WITH A U-NET
In this Section, we consider the problem of super-resolution, i.e., estimating a high-resolution image from a low-resolution version of the image. This can be viewed as a compressive sensing problem, since we can view super-resolution as reconstructing a signal $x \in \mathbb{R}^n$ from a downsampled version $y = Ax$, where the matrix $A \in \mathbb{R}^{m \times n}$ implements a downsampling operation like bicubic downsampling, or blurring followed by downsampling. As first shown in the pioneering work of Dong et al. (2014), data-driven neural networks trained end-to-end outperform classical model-based approaches (Gu et al., 2012; Michaeli & Irani, 2013; Timofte et al., 2013).
Dong et al. (2014) report that for super-resolution with a simple three-layer CNN, the gains from very large training sets do not seem to be as impressive as in high-level vision problems like image classification. Liang et al. (2021) plot the super-resolution performance of the SOTA SwinIR model as a function of the training set size for up to 3600 training examples at a fixed network size. When plotting their results on a logarithmic scale, we observe that the performance improvement follows a power law. It is unclear, however, whether this power law slows beyond this relatively small number of images.
In this Section, we obtain scaling laws for super-resolution with a U-Net over a wide range of training set and network sizes, similarly to denoising and compressive sensing in the main body.
Datasets. We use the same training, validation and test sets as described in Appx. A.1 with training set sizes N ∈ [100, 100k] images from ImageNet. In addition, we add two larger training sets of size 300k and 600k. Instead of center cropping the training images to 256 × 256 we follow the super-resolution experiments in Liang et al. (2021) and train on images cropped to 128 × 128 pixels. We consider super-resolution by a factor of 2, so the low-resolution images have size 64 × 64. The low-resolution images are obtained with the bicubic downsampling function of Python's PIL.Image package.
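A minimal sketch of how such a low-/high-resolution training pair can be produced with PIL (the function name is ours, and border handling for images smaller than the crop is omitted):

```python
from PIL import Image

def make_lr_hr_pair(img, hr_size=128, factor=2):
    """Center-crop to hr_size x hr_size and bicubically downsample."""
    left = (img.width - hr_size) // 2
    top = (img.height - hr_size) // 2
    hr = img.crop((left, top, left + hr_size, top + hr_size))
    lr = hr.resize((hr_size // factor, hr_size // factor), Image.BICUBIC)
    return lr, hr

img = Image.new("RGB", (256, 256))   # stand-in for a training image
lr, hr = make_lr_hr_pair(img)
print(lr.size, hr.size)              # (64, 64) (128, 128)
```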
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with only one block per encoder/decoder as this resulted in slightly better results than with two blocks. The network is trained end-to-end to map a coarse reconstruction, obtained through bicubic upsampling, to the residual between the coarse reconstruction and the high-resolution ground truth image.
We vary the number of the channels in the first layer in {8, 16, 32, 64, 128, 192, 256, 320, 448, 564}, which corresponds to {0.007, 0.025, 0.10, 0.40, 1.6, 3.6, 6.4, 10.0, 20.0, 31.2} million network parameters. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples.
We use the ℓ1-loss and the Adam optimizer with its default settings. For all experiments we find a good initial learning rate with the same annealing strategy as described in Appx. A.1. However, instead of picking the largest learning rate for which the validation loss does not diverge, we pick the second largest, which leads to slightly more stable results. We start the annealing with a learning rate of $10^{-5}$. In the few cases where our heuristic leads to a degenerate training curve, typically due to picking a significantly too small or too large learning rate, starting the annealing with a smaller learning rate of $10^{-6}$ resolves the problem.
We do not put a limit on the amount of compute invested into training. To this end, we deploy an automated learning rate decay that reduces the learning rate by a factor of 0.5 if the validation PSNR has not improved by at least 0.001 for 8 epochs. We stop the training once the validation loss has not improved for two consecutive learning rates. For training sets of up to 10000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10.
For training set sizes up to 10000 we train 3 random seeds and pick the best run. Since the variance between runs, compared to the performance gain between different training set sizes, decreases with increasing training set size, we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1500 GPU hours for the experiments in Fig. 6.
Results and discussion. For each training set size, Fig. 6(a) shows the reconstruction performance in PSNR of the best model over all simulated network sizes in Fig. 6(b). Since the curves per training set size in Fig. 6(b) are relatively flat, further scaling up the network size is not expected to significantly improve the performance on the studied training sets. Here are the two main findings:
A linear power law with a scaling coefficient α = 0.0075 holds roughly up to training set sizes of about 30k images; for training set sizes from 60k onward this slows to a linear power law with a significantly smaller scaling coefficient α = 0.0029. Under this slowed scaling law, a training set size of 1.6B images would be required to increase performance by another 1dB (assuming the relation $32.05\,N^{0.0029}$ persists, which is likely to slow down even further).
While slowing already at a few tens of thousands of training images, the scaling laws for super-resolution do not slow as early as those for denoising and compressive sensing (see Fig. 1). This could be partially due to training on image patches of size 128 × 128 as opposed to size 256 × 256 used for denoising.
D ADDITIONAL EMPIRICAL SCALING LAWS FOR GAUSSIAN DENOISING
In this Section, we extend our results for Gaussian denoising with a U-Net from Section 3 with two additional setups. Section D.1 considers fixing the noise sampled in each training epoch and Section D.2 investigates reducing the noise level from σz = 25 to σz = 15.
D.1 EMPIRICAL SCALING LAWS FOR DENOISING WITH FIXED NOISE
Our main results for denoising with a U-Net trained end-to-end, discussed in Section 3, follow a setup in which the noise per training example is re-sampled in every training epoch. We choose this setup since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a denoising setup in which the noise statistics are unknown, as is the case in some real-world noise removal problems. In such a problem, we would be given pairs of noisy and clean images and could not synthesize new noisy images from the clean ones.
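The difference between the two setups is only where the noise is sampled in the data pipeline; a minimal PyTorch-style sketch of both variants (the class and its per-index seeding convention are our own illustration):

```python
import torch
from torch.utils.data import Dataset

class DenoisingPairs(Dataset):
    """Yields (noisy, clean) pairs from a tensor of clean images."""
    def __init__(self, clean, sigma=25.0, fix_noise=False):
        self.clean, self.sigma, self.fix_noise = clean, sigma, fix_noise

    def __len__(self):
        return len(self.clean)

    def __getitem__(self, i):
        x = self.clean[i]
        if self.fix_noise:
            # same noise realization in every epoch: seed by example index
            g = torch.Generator().manual_seed(i)
            z = torch.randn(x.shape, generator=g)
        else:
            z = torch.randn_like(x)  # fresh noise on every access
        return x + self.sigma * z, x

data = DenoisingPairs(torch.rand(10, 3, 64, 64) * 255, fix_noise=True)
noisy, clean = data[0]
```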
In this section, we follow the same experimental setup as in Appx. A.1 to simulate the performance of a U-Net for denoising with fixed noise for up to 100k training images (see Fig. 7). Compared to re-sampling the noise, we observe a drop in performance of about 0.3dB for small training set sizes. The performance difference at 10k images is reduced to 0.2dB, resulting in a slightly steeper scaling law for moderate training set sizes. However, at around 10k training images the scaling of the performance of training with fixed noise also starts to flatten, as it approaches the performance of re-sampling the noise during training. This indicates that if the noise statistics are unknown, more data is required to achieve the same performance as when they are known. In both cases, however, the scaling with training set size slows down already at moderate training set sizes.
D.2 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER NOISE LEVEL
Our results for Gaussian denoising in Section 3 are for a fixed noise level of σz = 25. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with smaller noise level of σz = 15, in order to see how the scaling laws change. The results for both noise levels are depicted in Fig. 8.
We observe an improvement of about 2.3dB in PSNR, which is expected since the irreducible error decreases for smaller noise levels. We also observe that the scaling coefficient for the smaller noise level σz = 15 (i.e., α = 0.0026) is slightly steeper than that for the larger noise level σz = 25 (i.e., α = 0.0019). This coincides with the qualitative behavior of the curves for subspace denoising in Fig. 3. Apart from that, the curves are qualitatively similar, in that an initially steep power law is replaced by a slower one at around 6000 training images.
D.3 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER PATCH SIZE
Our results for Gaussian denoising in Section 3 with a U-Net were obtained for a constant training patch size of 256 × 256 pixels across all network and training set sizes. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller patch size of 128 × 128 pixels. The results for both patch sizes are depicted in Fig. 9. We observe that in the regime of large training set sizes, in which we are primarily interested, training on N patches of size 256 × 256 is more beneficial than training on 4N patches of size 128 × 128. We therefore focus on patch size 256 × 256 in the main body of this work.
E UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY - SUPPLEMENTARY RESULTS
In this section, we provide additional details on the statements in Section 5 on understanding scaling laws for denoising theoretically by studying a linear subspace denoising problem theoretically, and provide additional numerical results.
Recall that we consider a linear estimator of the form fW(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error (normalized by the latent signal dimension d):
$$R(W) = \frac{1}{d}\,\mathbb{E}\left[\|Wy - x\|_2^2\right] = \frac{1}{d}\,\|(W - I)U\|_F^2 + \frac{\sigma_z^2}{d}\,\|W\|_F^2. \qquad (1)$$
Above, expectation is over the joint distribution of (x,y), and the second equality follows from using that x = Uc, where c ∼ N (0, I) is Gaussian, and y = x + z, where the noise z ∼ N (0, σ2zI) is Gaussian.
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk defined in equation (1)) is given by $W^* = \frac{1}{1+\sigma_z^2} U U^T$. This follows from taking the gradient of the risk (1), setting it to zero, and solving for W. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(W^*) = \sigma_z^2/(1+\sigma_z^2)$.
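Both the form of the optimal estimator and its risk are easy to verify by simulation; a minimal NumPy sketch with arbitrary example dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_z = 100, 10, 0.5                       # example dimensions

U, _ = np.linalg.qr(rng.standard_normal((n, d)))   # subspace basis
W_opt = U @ U.T / (1 + sigma_z**2)                 # optimal linear estimator

X = U @ rng.standard_normal((d, 20000))            # test signals as columns
Y = X + sigma_z * rng.standard_normal(X.shape)     # noisy observations
risk = np.mean(np.sum((W_opt @ Y - X) ** 2, axis=0)) / d
print(risk, sigma_z**2 / (1 + sigma_z**2))         # both close to 0.2
```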
Early-stopped empirical risk minimization. We consider the estimator that applies gradient descent to the empirical risk
$$L(W) = \sum_{i=1}^{N} \|W y_i - x_i\|_2^2 = \|WY - X\|_F^2, \qquad (2)$$
where X,Y ∈ Rn×N contain the training examples as columns, and early-stops after k iterations for regularization.
We next discuss the early-stopped estimator $W^k$ in more detail. Fig. 10 numerically demonstrates the regularizing effect of early stopping gradient descent, where $W^\infty = XY^\dagger$ is the converged learned estimator (see Appx. G, Eq. (5)). We see that regularization is necessary for this estimator to perform well.
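To illustrate, the following NumPy sketch (dimensions and noise level are arbitrary example values, not the configuration of Fig. 10) compares early-stopped gradient descent with the converged estimator $XY^\dagger$ in the high-dimensional regime N < n; the converged estimator interpolates the noisy training data and performs markedly worse on fresh test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, N, sigma_z = 200, 10, 100, 0.2               # N < n: high-dim regime
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
X = U @ rng.standard_normal((d, N))                # clean training signals
Y = X + sigma_z * rng.standard_normal((n, N))      # noisy training data

Xt = U @ rng.standard_normal((d, 5000))            # fresh test data
Yt = Xt + sigma_z * rng.standard_normal((n, 5000))
risk = lambda W: np.mean(np.sum((W @ Yt - Xt) ** 2, axis=0)) / d

eta = 1.0 / np.linalg.norm(Y, 2) ** 2              # eta = 1 / sigma_{y,1}^2
W = np.zeros((n, n))
for k in range(1, 21):
    W -= eta * (W @ Y - X) @ Y.T                   # one gradient step
    if k in (1, 5, 20):
        print(k, risk(W))
print("converged:", risk(X @ np.linalg.pinv(Y)))   # W_inf = X Y^dagger
```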
We next discuss Theorem 2 and the associated assumptions in more detail. The theorem considers the following regime: (i) the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N$, (ii) $N \le \xi d/\sigma_z^2$ for an arbitrary $\xi$, and (iii) $N\log(N) \le n$. For this regime, the theorem guarantees that the risk of the optimally early-stopped estimator obeys, with high probability,

$$R(W^{k_{\mathrm{opt}}}) \le (8 + 2\xi)\,R(W^*) + c\left(1 + \frac{n\sigma_z^2}{d}\right)\log n\;\frac{(d + n\sigma_z^2)\log(n)}{N} + c\xi\sqrt{\frac{(d + \sigma_z^2 n)\log n}{N}}. \qquad (3)$$

The theorem looks similar to that for the PCA estimate (Theorem 1), in that the risk is a constant away from the optimal risk, with an error term that becomes small as $(d + n\sigma_z^2)/N$ becomes small. However, the error bound does not converge to $R(W^*)$ as the number of training examples, N, converges to infinity. This is probably an artifact of our analysis, but it is unclear, at least to us, how to derive a substantially tighter bound. In our analysis (see Appendix G), we balance two errors: one error decreases in k and is associated with the part of the signal projected onto the subspace, and the second error increases with k and is associated with the orthogonal complement of the subspace. We choose the early-stopping time to optimally balance those two terms, which yields the stated bound (3).
Now with regards to the assumptions: Assumption (iii), $N\log(N) \le n$, means we are in the high-dimensional regime; we think this is somewhat closer to reality (for example, for denoising a 512×512 image this would require the number of training examples to be smaller than 250k), but we can derive an analogous bound for the regime $N\log(N) \ge n$, where the number of training examples is larger than the ambient dimension.
Assumption (i), $(d + n\sigma_z^2)\log(n) \le N$, is relatively mild, as it is necessary for estimating the subspace somewhat accurately; this assumption is also required for the PCA estimate.
Assumption (ii), $N \le \xi d/\sigma_z^2$ for an arbitrary $\xi$, is not restrictive in that $\xi$ can be arbitrarily large; we make this assumption only so that the theorem can be stated in a convenient way. However, assumption (ii) reveals a shortcoming of Theorem 2, which is that we cannot make the bound go to zero as $N \to \infty$, since increasing $\xi$ increases one term in the bound and decreases another one.
E.1 ADDITIONAL NUMERICAL SIMULATIONS
In this Section we provide further numerical simulations for the PCA subspace estimator and the estimator learned with early stopped gradient descent discussed in Section 5, Theorem 1 and Theorem 2. Similarly to Fig. 3, Fig. 11 shows the risks $R(W^{k_{\mathrm{opt}}})$ and $R(W_{\mathrm{PCA}})$ as a function of the number of training examples N for varying values of the signal dimension d and the ambient dimension n, while fixing all other model parameters. In the power-law region we fit linear power laws with negative scaling coefficients α. We observe steeper power laws (larger |α|) for smaller ambient dimensions n and larger signal dimensions d. Also, the scaling coefficients of the learned estimator consistently exceed those of the PCA estimator.
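Risk curves of this kind are straightforward to simulate; the following NumPy sketch (with example parameters, not the exact configuration of Fig. 11) estimates the test risk of the PCA estimator for a few training set sizes:

```python
import numpy as np

def pca_risk(n, d, N, sigma_z, rng, num_test=5000):
    """Test risk of W_PCA = Uhat Uhat^T / (1 + sigma_z^2)."""
    U, _ = np.linalg.qr(rng.standard_normal((n, d)))
    Y = U @ rng.standard_normal((d, N)) + sigma_z * rng.standard_normal((n, N))
    # d leading eigenvectors of Y Y^T = d leading left singular vectors of Y
    Uhat = np.linalg.svd(Y, full_matrices=False)[0][:, :d]
    W = Uhat @ Uhat.T / (1 + sigma_z**2)
    Xt = U @ rng.standard_normal((d, num_test))
    Yt = Xt + sigma_z * rng.standard_normal((n, num_test))
    return np.mean(np.sum((W @ Yt - Xt) ** 2, axis=0)) / d

rng = np.random.default_rng(3)
for N in (100, 1000, 10000):
    print(N, pca_risk(n=100, d=10, N=N, sigma_z=0.5, rng=rng))
```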
F PROOF FOR THEOREM 1: RISK BOUND FOR PCA SUBSPACE ESTIMATION
We provide a bound on the risk of the estimator $f(y) = W_{\mathrm{PCA}}\,y$ with $W_{\mathrm{PCA}} = \tau \hat U \hat U^T$ and $\tau = \frac{1}{1+\sigma_z^2}$. Recall that $\hat U \in \mathbb{R}^{n \times d}$ contains the singular vectors corresponding to the d leading singular values of $YY^T$. We define $\hat U_\perp \in \mathbb{R}^{n \times (n-d)}$ as the orthogonal complement of $\hat U$. Starting from the risk expression given in equation (1) we obtain
$$\begin{aligned}
R(W_{\mathrm{PCA}}) &= \frac{1}{d}\left\|(\tau \hat U \hat U^T - I)U\right\|_F^2 + \frac{\tau^2\sigma_z^2}{d}\left\|\hat U \hat U^T\right\|_F^2 \\
&= \frac{1}{d}\left\|\left((\tau - 1)\hat U \hat U^T + \hat U_\perp \hat U_\perp^T\right)U\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{(\tau - 1)^2}{d}\left\|\hat U^T U\right\|_F^2 + \frac{1}{d}\left\|\hat U_\perp^T U\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{(\tau - 1)^2}{d}\left(d - \left\|\hat U_\perp^T U\right\|_F^2\right) + \frac{1}{d}\left\|\hat U_\perp^T U\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \left(1 - (\tau - 1)^2\right)\frac{1}{d}\left\|\hat U_\perp^T U\right\|_F^2 + (\tau - 1)^2 + \sigma_z^2\tau^2
= \frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{1}{d}\left\|\hat U_\perp^T U\right\|_F^2 + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\overset{(i)}{\le} \frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\left\|\hat U_\perp^T U\right\|^2 + \frac{\sigma_z^2}{1+\sigma_z^2}
\overset{(ii)}{\le} c\,\frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{(d + n\sigma_z^2)\log(2n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\le c\,\frac{(d + n\sigma_z^2)\log(n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2}. \qquad (4)
\end{aligned}$$
Here, inequality (i) uses $\|\hat U_\perp^T U\|_F^2 \le d\,\|\hat U_\perp^T U\|^2$, which holds since $\hat U_\perp^T U$ has rank at most d. Inequality (ii) follows from Section H.1, equation (34), and holds in the regime $(d + n\sigma_z^2)\log(n) \le N$, for some constant c and with probability at least $1 - n^{-10} - 3e^{-d} + e^{-n}$. This concludes the proof.
G PROOF OF THEOREM 2: RISK BOUND FOR EARLY STOPPED EMPIRICAL RISK MINIMIZATION
This section contains the proof of Theorem 2. The theorem characterizes the performance of the learned, linear estimator Wk that is obtained by applying k iterations of gradient descent with stepsize η starting at W0 = 0 to the loss L(W) in equation (2). We start by deriving a closed form expression for the estimator Wk. The gradient of the loss is
$$\nabla_W L(W) = (WY - X)Y^T,$$

and thus the iterations of gradient descent are

$$W^{k+1} = W^k - \eta(W^k Y - X)Y^T = W^k(I - \eta Y Y^T) + \eta X Y^T.$$

Let $Y = U_y \Sigma_y V_y^T \in \mathbb{R}^{n \times N}$ and $X = U_x \Sigma_x V_x^T \in \mathbb{R}^{n \times N}$ be the singular value decompositions of Y and X, respectively, and assume that the singular values are non-zero and in descending order, i.e., $\sigma_{y,1} \ge \sigma_{y,2} \ge \ldots$. With $W^0 = 0$ and $k \ge 1$ we have

$$\begin{aligned}
W^k &= \eta X Y^T \sum_{\ell=0}^{k-1}(I - \eta Y Y^T)^\ell
= \eta X V_y \Sigma_y U_y^T \left( \sum_{\ell=0}^{k-1}(I - \eta U_y \Sigma_y^2 U_y^T)^\ell \right) \\
&= \eta X V_y \Sigma_y \,\mathrm{diag}\!\left( \sum_{\ell=0}^{k-1}(1 - \eta\sigma_{y,i}^2)^\ell \right) U_y^T
= X V_y D_k U_y^T,
\end{aligned}$$

where we defined $D_k \in \mathbb{R}^{N \times N}$ as a diagonal matrix with i-th diagonal entry given by $(1 - (1 - \eta\sigma_{y,i}^2)^k)/\sigma_{y,i}$, and where we used the geometric series to obtain

$$\sum_{\ell=0}^{k-1}(1 - \eta\sigma_{y,i}^2)^\ell = \frac{1 - (1 - \eta\sigma_{y,i}^2)^k}{\eta\sigma_{y,i}^2}.$$

Note that for $k \to \infty$, choosing $\eta$ such that $1 - \eta\sigma_{y,i}^2 < 1$ for all i, we get

$$W^\infty = X V_y \Sigma_y^{-1} U_y^T = X Y^\dagger. \qquad (5)$$
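The closed-form expression for $W^k$ can be checked numerically against explicit gradient descent; a small NumPy sketch with arbitrary example dimensions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, N, k, eta = 30, 20, 7, 1e-3                     # example dimensions
X = rng.standard_normal((n, N))
Y = X + 0.5 * rng.standard_normal((n, N))

W = np.zeros((n, n))                               # k explicit gradient steps
for _ in range(k):
    W -= eta * (W @ Y - X) @ Y.T

Uy, sy, Vyt = np.linalg.svd(Y, full_matrices=False)
Dk = np.diag((1 - (1 - eta * sy**2) ** k) / sy)    # diagonal of D_k
W_closed = X @ Vyt.T @ Dk @ Uy.T                   # X V_y D_k U_y^T
print(np.abs(W - W_closed).max())                  # ~1e-15: the forms agree
```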
Evaluating the risk from equation (1) at the estimator $W^k$ gives

$$R(W^k) = \frac{1}{d}\left\|(X V_y D_k U_y^T - I)U\right\|_F^2 + \frac{\sigma_z^2}{d}\left\|X V_y D_k U_y^T\right\|_F^2. \qquad (6)$$

To shorten notation we define

$$\gamma := \frac{(d + n\sigma_z^2)\log(n)}{N}, \qquad (7)$$

$$\psi := \frac{n\sigma_z^2\log(n)}{N}. \qquad (8)$$
We next provide bounds for the two terms on the right-hand-side of equation (6), proven later in this section.
Bound on the first term in equation (6): In Section G.1 we show that, provided $N\log(N) \le n$, $9d \le N$ and $(d + n\sigma_z^2)\log(n) \le N$, for some constant c and with probability at least $1 - 2e^{-N/18} - 3n^{-10} - 3e^{-d} - e^{-n} - e^{-N} - 2e^{-n/2}$, the following bound holds:

$$\begin{aligned}
\frac{1}{d}\left\|(X V_y D_k U_y^T - I)U\right\|_F^2 &\le \frac{c}{d}\left(\sigma_z^2 n + \gamma\sigma_{x,\max}^2\right)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2 + \frac{c}{d}\sum_{i=1}^{d}(1 - \eta\sigma_{y,i}^2)^{2k} \\
&\quad + \frac{c}{d}\,\psi\gamma\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2 + c\gamma. \qquad (9)
\end{aligned}$$
Bound on the second term in equation (6): In Section G.2 we show

$$\frac{\sigma_z^2}{d}\left\|X V_y D_k U_y^T\right\|_F^2 \le \frac{\sigma_z^2}{d}\sum_{i=1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2. \qquad (10)$$
With equations (9) and (10) in place, and by splitting up the sum in (10), we can bound the right-hand side of equation (6) as

$$\begin{aligned}
R(W^k) &\le \frac{1}{d}\left(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\right)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2 + \frac{c}{d}\sum_{i=1}^{d}(1 - \eta\sigma_{y,i}^2)^{2k} \\
&\quad + c\gamma + \frac{c}{d}\left(\psi\gamma + \sigma_z^2\right)\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2. \qquad (11)
\end{aligned}$$
Since the singular values of Y are in descending order, $\sigma_{y,1} \ge \sigma_{y,2} \ge \ldots$, this is, for any iteration k, bounded as

$$\begin{aligned}
R(W^k) &\le \left(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\right)\frac{1}{\sigma_{y,d}^2} + c(1 - \eta\sigma_{y,d}^2)^{2k} \\
&\quad + c\gamma + \frac{c}{d}\left(\psi\gamma + \sigma_z^2\right)\sigma_{x,\max}^2\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2. \qquad (12)
\end{aligned}$$

This is a good bound if we are at an iteration k sufficiently large so that $(1 - \eta\sigma_{y,1}^2)^k$ is small.
In (12), the term $(1 - \eta\sigma_{y,d}^2)^{2k}$ is decreasing in the number of gradient descent steps k and also decreasing in the stepsize $\eta$, as long as $\eta\sigma_{y,i}^2 \le 1$ for $i = 1, \ldots, d$. Further, the terms $(1 - (1 - \eta\sigma_{y,i}^2)^k)^2$ are increasing in k and also increasing in $\eta$. The first term corresponds to the signal, which we want to fit sufficiently well, whereas the second corresponds to the noise, of which we want to fit as little as possible. Hence, there exist optimal choices for k and $\eta$ that trade off the sum of the two terms.
In our setup the d leading singular values of Y, corresponding to the signal, are large and concentrate around $N(1+\sigma_z^2)$, while the remaining singular values are small and concentrate around $N\sigma_z^2$. Hence, we can apply a single step of gradient descent, $k = 1$, to already fit a large portion of the signal while minimizing the portion of the noise that is fitted. For that we choose the stepsize $\eta$ as large as possible such that $\eta\sigma_{y,i}^2 \le 1$ for $i = 1, \ldots, d$ still holds. Next, suppose the following events hold:

$$E_1 = \{\sigma_{y,d+1}^2 \le N(\sigma_z^2 + \epsilon(1 + \sigma_z^2))\} \qquad (13)$$
$$E_2 = \{\sigma_{y,d}^2 \ge N(1 + \sigma_z^2)(1 - \epsilon)\} \qquad (14)$$
$$E_3 = \{\sigma_{x,\max}^2 \le 4N\} \qquad (15)$$
$$E_4 = \{\sigma_{y,1}^2 \le N(1 + \epsilon)(1 + \sigma_z^2)\}. \qquad (16)$$
In Section H.4, we show that

$$\mathbb{P}[E_3] \ge 1 - 2e^{-N/8}. \qquad (17)$$

In Section H.4, we also show that, provided that $N \ge 3C\epsilon^{-2}(d + \sigma_z^2 n)\log n$ for some constant C and $\epsilon \in (0, 1)$,

$$\mathbb{P}[E_1], \mathbb{P}[E_2], \mathbb{P}[E_4] \ge 1 - e^{-d} - e^{-n} - n^{-9}. \qquad (18)$$
We next bound the terms in equation (12). As discussed, we set $k = 1$ and the stepsize as large as possible, i.e., $\eta = 1/\left(N(1+\epsilon)(1+\sigma_z^2)\right) \le 1/\sigma_{y,1}^2$, which holds on the event $E_4$. Finally, on the event $E_2$ we obtain

$$c(1 - \eta\sigma_{y,d}^2)^{2k} \le c\left(1 - \eta N(1+\sigma_z^2)(1-\epsilon)\right)^2 = c\left(1 - \frac{1-\epsilon}{1+\epsilon}\right)^2 \le c\epsilon^2.$$
Next we bound the sum in equation (12). Towards this goal, we upper bound each term in the sum by its linear approximation at the origin. We compute the derivative at the origin as

$$\lim_{q \to 0} \frac{\partial}{\partial q}\, \frac{1}{q}\left(1 - (1 - \eta q)^k\right)^2 = (\eta k)^2.$$

Thus, we have

$$\sum_{i=d+1}^{N} \frac{1}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2 \le \sum_{i=d+1}^{N} k^2\eta^2\sigma_{y,i}^2 \le N k^2\eta^2\sigma_{y,d+1}^2 \le \frac{\sigma_z^2 + \epsilon + \epsilon\sigma_z^2}{(1+\epsilon)^2(1+\sigma_z^2)^2} \le c(\sigma_z^2 + 2\epsilon),$$

on the event $E_1$ and for $\sigma_z^2 \le 1$, $k = 1$ and $\eta = 1/\left(N(1+\epsilon)(1+\sigma_z^2)\right)$. Putting this together, on the events $E_2, E_3$ we get the bound

$$\begin{aligned}
R(W^k) &\le 8\,\frac{\sigma_z^2}{1+\sigma_z^2} + c\gamma + c(1 - \eta\sigma_{y,d}^2)^{2k} + \frac{c}{d}\left(\psi\gamma + \sigma_z^2\right)N\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\left(1 - (1 - \eta\sigma_{y,i}^2)^k\right)^2 \\
&\le 8\,\frac{\sigma_z^2}{1+\sigma_z^2} + c\gamma + c\epsilon^2 + \frac{cN}{d}\left(\frac{\sigma_z^2 n\log n}{N}\,\frac{(d + n\sigma_z^2)\log n}{N} + \sigma_z^2\right)(\sigma_z^2 + 2\epsilon) \\
&\le 8R(W^*) + c\,\frac{(d + n\sigma_z^2)\log(n)}{N} + c\epsilon^2 + c\left(\frac{((d + n\sigma_z^2)\log n)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)(\sigma_z^2 + 2\epsilon). \qquad (19)
\end{aligned}$$
The bound holds provided $3C\epsilon^{-2}(d + \sigma_z^2 n)\log n \le N$ and $N\log(N) \le n$, for some constants c, C and with probability at least $1 - 2e^{-N/8} - 2e^{-N/18} - 5n^{-9} - 5e^{-d} - 2e^{-n} - e^{-N} - 2e^{-n/2}$. To maximize the benefit from early stopping, we set $\epsilon$ as small as possible with respect to the condition $3C\epsilon^{-2}(d + \sigma_z^2 n)\log n \le N$:

$$\epsilon = \sqrt{\frac{3C(d + \sigma_z^2 n)\log n}{N}}. \qquad (20)$$

With that, equation (19) becomes

$$\begin{aligned}
R(W^k) &\le 8R(W^*) + c\,\frac{(d + n\sigma_z^2)\log(n)}{N} + c\left(\frac{((d + n\sigma_z^2)\log n)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)\left(\sigma_z^2 + \sqrt{\frac{(d + \sigma_z^2 n)\log n}{N}}\right) \\
&\le (8 + 2\xi)R(W^*) + c\left(1 + \frac{n\sigma_z^2}{d}\right)\log n\;\frac{(d + n\sigma_z^2)\log(n)}{N} + c\xi\sqrt{\frac{(d + \sigma_z^2 n)\log n}{N}},
\end{aligned}$$

where the last inequality uses $N \le \xi d/\sigma_z^2$ and $\sigma_z^2 \le 1$; this is the bound stated in Theorem 2.

1. What is the focus of the paper regarding deep learning-based methods for image reconstruction tasks?
2. What are the strengths and weaknesses of the paper, particularly in terms of experimental deep-learning studies?
3. Do you have any questions regarding the paper's assumptions and ablation studies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility
Summary Of The Paper
The paper studies the scaling of performance of deep learning-based methods for different image reconstruction tasks with respect to the available training set sizes. The tasks investigated are a U-Net and the SwinIR transformer for image denoising (and super-resolution in the appendix), as well as a U-Net for MRI compressive sensing. The experiments are pertinent and convincing. The discussions of the found power laws provide suitable insight. In addition, a theoretical study on linear subspace estimation from noisy data is provided, supporting the empirical observations by explaining/reproducing their general behavior in this simplified setting.
Strengths And Weaknesses
Strengths: The paper is very well written in an easy-to-read language. The contents are novel and interesting to the expected audience of the conference. The main findings are presented very clearly and supported by well-designed, easy-to-digest figures that would allow the reader to grasp the main findings from the figures alone, even without reading the text in depth.
New SOTA results in some of the tasks come as a side product of scaling but are only given as a side remark in the main story.
Weaknesses: As with most experimental deep-learning studies, the findings depend on hyperparameter settings that may have more or less dramatic effects on the results. The implicit assumption that the hyperparameter choices are in all cases in a suitable range cannot be supported without solid ablation studies. These are not given in the current paper, and thus the results presented need to be taken with a grain of salt.
A weakness of the paper is that the experiments on image denoising do not vary noise levels. Thus, in the experiments it remains open how the power laws change with different noise levels or models. This would be quite interesting in practice, as the selected noise level sigma = 25 is unusually high. It remains unclear (at least to this reviewer) how the found tipping points, where the steep power laws flatten considerably, change with the noise level. This is quantitatively investigated in the theoretical part of the paper and shown in Figure 4. There it seems that lower noise levels correspond to smaller needed dataset sizes. For image denoising this may be different, due to the typical signal distribution in Fourier space, where low-frequency structures have higher amplitudes than high-frequency ones and thus high noise levels 'occlude' high-frequency structures more than coarser ones. Lower noise levels would therefore allow recovering larger parts of the image signal, and thus larger datasets may be needed before reaching the noise floor. Additional experiments would be needed to clarify this question.
Clarity, Quality, Novelty And Reproducibility
Clear and novel study of high quality but limited coverage. Should be reproducible from the details given in the text and especially the appendix. |
ICLR | Title
Scaling Laws For Deep Learning Based Image Reconstruction
Abstract
Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image perform excellently for a variety of linear inverse problems. Current methods are only trained on a few hundreds or thousands of images, as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution, and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Extrapolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
1 INTRODUCTION
Deep neural networks trained to map a noisy measurement of an image or a noisy image to a clean image give state-of-the-art (SOTA) performance for image reconstruction problems. Examples are image denoising (Burger et al., 2012; Zhang et al., 2017; Brooks et al., 2019; Liang et al., 2021), super-resolution (Dong et al., 2016; Ledig et al., 2017; Liang et al., 2021), and compressive sensing for computed tomography (CT) (Jin et al., 2017), and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018; Sriram et al., 2020; Muckley et al., 2021; Fabian & Soltanolkotabi, 2022).
The performance of a neural network for imaging is determined by the network architecture and optimization, the size of the network, and the size and quality of the training set.
Significant work has been invested in architecture development. For example, in the field of accelerated MRI, networks started out as convolutional neural networks (CNN) (Wang et al., 2016), which are now often used as building blocks in un-rolled variational networks (Hammernik et al., 2018; Sriram et al., 2020). Most recently, transformers have been adapted to image reconstruction (Lin & Heckel, 2022; Huang et al., 2022; Fabian & Soltanolkotabi, 2022).
However, it is not clear how substantial the latest improvements through architecture design are compared to potential improvements expected by scaling the training set and network size. Contrary to natural language processing (NLP) models and modern image classifiers that are trained on billions of examples, networks for image reconstruction are only trained on hundreds to thousands of example images. For example, the training set of SwinIR (Liang et al., 2021), the current SOTA for image denoising, contains only 10k images, and the popular benchmark dataset for accelerated MRI consists only of 35k images (Zbontar et al., 2018).
In this work, we study whether neural networks for image reconstruction only require moderate amounts of data to reach their peak performance, or whether major boosts are expected from increasing
the training set size. To partially address this question, we focus on three problems: image denoising, reconstruction from few and noisy measurements (compressive sensing) in the context of accelerated MRI, and super-resolution. We pick Gaussian denoising for its practical importance and since it can serve as a building block to solve more general image reconstruction problems well (Venkatakrishnan et al., 2013). We pick MR reconstruction because it is an important instance of a compressive sensing problem, and many problems can be formulated as compressive sensing problems, for example super-resolution and in-painting. In addition, for MR reconstruction the question on how much data is needed is particularly important, since it is expensive to collect medical data.
For the three problems we identify scaling laws that describe the reconstruction quality as a function of the training set size, while simultaneously scaling network sizes. Such scaling laws have been established for NLP and classification tasks, as discussed below, but not for image reconstruction.
The experiments are conducted with a U-Net (Ronneberger et al., 2015) and the SOTA SwinIR (Liang et al., 2021), a transformer architecture. We primarily consider the U-Net since it is widely used for image reconstruction and acts as a building block in SOTA models for image denoising (Brooks et al., 2019; Gurrola-Ramos et al., 2021; Zhang et al., 2021; 2022) and accelerated MRI (Zbontar et al., 2018; Sriram et al., 2020). We also present results for denoising with the SwinIR. The SwinIR outperforms the U-Net, but we find that its scaling with the number of training examples does not differ notably. Our contributions are as follows:
• Empirical scaling laws for denoising. We train U-Nets of sizes 0.1M to 46.5M parameters with training set sizes from 100 to 100k images from the ImageNet dataset (Russakovsky et al., 2015) for Gaussian denoising with 20.17dB Peak-Signal-to-Noise ratio (PSNR). While for the largest training set sizes and network sizes we consider, performance continues to increase, the rate of performance increase for training set sizes beyond a few thousand images slows to a level indicating that even training on millions of images only yields a marginal benefit, see Fig. 1(a).
We also train SOTA SwinIRs of sizes 4M to 129M parameters on the same range of training set sizes, see Fig. 2(a). Although it performs better than the U-Net, its scaling behavior is essentially equivalent, and again beyond a few thousand images only marginal benefits are expected. Scaling up its training set and network size did, however, give benefits: the largest model we trained improves over the previous SOTA on four common test sets by 0.05 to 0.22dB.
• Empirical scaling laws for compressive sensing. We train U-Nets of sizes from 2M to 500M parameters for 4x accelerated MRI with training set sizes from 50 to 50k images from the fastMRI dataset (Knoll et al., 2020b). We again find that beyond a dataset size of about 2.5k, the rate of improvement as a function of the training set size slows considerably, see Fig. 1(c). This indicates that while models are expected to improve with increasing training set size, we expect that training on millions of images does not significantly improve performance.
• Empirical scaling laws for super-resolution. We also briefly study super-resolution, and similarly as for denoising and compressive sensing find a slowing of the scaling law at moderate dataset sizes. See appendix C.
• Understanding scaling laws for denoising theoretically. Our empirical results indicate that the denoising performance of a neural network trained end-to-end does not increase as a function of training examples beyond a certain point. This is expected, since once we reach a noise-specific error floor, more data is not beneficial. We make this intuition precise for a linear estimator learned with early stopped gradient descent to denoise data drawn from a d-dimensional linear subspace. We show that the reconstruction error is upper bounded by d/N plus a noise-dependent error floor level. Once the error induced by learning the signal model, d/N, is small relative to the error floor, more training examples N are not beneficial.
Together, our empirical results show that neural networks for denoising, compressive sensing, and super-resolution applied in typical setups (i.e, Gaussian denoising with 20.17dB PSNR, and multi-coil accelerated MRI with 4x acceleration) already operate in a regime where the scaling laws for the training set size are slowing significantly and thus even very large increases of the training data are not expected to improve performance substantially. Even relatively modest model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
2 RELATED WORK
Scaling laws for prediction problems. Under the umbrella of statistical learning theory convergence rates of 1/N or 1/ √ N have been established for a range of relatively simple models and distributions, see e.g. Wainwright (2019) for an overview.
For deep neural networks used in practice, a recent line of work has empirically characterized the performance as a function of training set size and/or network size for classification and NLP (Hestness et al., 2017; Rosenfeld et al., 2019; Kaplan et al., 2020; Bahri et al., 2021; Zhai et al., 2021; Ghorbani et al., 2021; Bansal et al., 2022). In those domains, the scaling laws persist even for very large datasets, as described in more detail below. In contrast, for image reconstruction, we find that the power-law behavior already slows considerably at relatively small numbers of training examples.
Rosenfeld et al. (2019) find power-law scaling of performance with training set and network size across models and datasets for language modeling and classification. However, because the work fixes either training set or network size, while scaling the other, the scaling laws span only moderate ranges before saturating at a level determined by the fixed quantity and not the problem specific error floor. The papers Kaplan et al. (2020); Bahri et al. (2021); Zhai et al. (2021); Ghorbani et al. (2021); Bansal et al. (2022) including ours scale the dataset size and model size simultaneously, resulting in an improved predictive power of the obtained scaling laws.
Hestness et al. (2017) study models for language and image classification, and attribute deviations from a power-law curve to a lack of fine-tuning the hyperparameters of very large networks. For transformer language models Kaplan et al. (2020) find no deviation from a power-law for up to a training set and network size of 1B images and parameters. Further, Zhai et al. (2021) find the performance for Vision Transformers (Dosovitskiy et al., 2020) for few-shot image classification to deviate from a power-law curve only at extreme model sizes of 0.3-1B parameters and 3B images.
The role of training set size in inverse problems. For image reconstruction and inverse problems in general, we are not aware of work studying scaling laws in a principled manner, covering different
problems and model architectures. But, when proposing a new method several works study performance as a function of training set and/or network size. However, those studies typically only scale the parameter of interest while fixing all other parameters, unlike our work, which scales dataset size and network size together, which is important for identifying scaling laws. Below, we review the training set sizes, and if available their scaling properties, used by the recent SOTA in image denoising and accelerated MRI.
Zhang et al. (2017) report that for DnCNN, a standard CNN, using more than 400 distinct images with data augmentation only yields negligible improvements. Chen et al. (2021) pre-train an image processing transformer (IPT) of 115.5M network parameters on ImageNet (1.1M distinct images), and report the performance after fine-tuning to a specific task as a function of the size of the pre-training dataset. IPT's performance for denoising is surpassed by the latest SOTA in the form of the CNN-based DRUnet (Zhang et al., 2021), the transformer-based SwinIR (Liang et al., 2021), and the Swin-Conv-Unet (Zhang et al., 2022), a combination of the two. Those models have significantly fewer network parameters and were trained on a training set consisting of only ∼10k images, leaving the role of the training set size in image denoising open.
The number of available training images for accelerated MRI is limited to few publicly available datasets. Hence, current SOTA (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022) are trained on the largest datasets available (the fastMRI dataset), consisting of 35k images for knee and 70k for brain reconstruction (Zbontar et al., 2018; Muckley et al., 2021).
Adaptations of the MLP-Mixer (Tolstikhin et al., 2021) and the Vision Transformer (Dosovitskiy et al., 2020) to MRI (Mansour et al., 2022; Lin & Heckel, 2022) ablate their performance as a function of the training set size and find that non-CNN based approaches require more data to achieve a similar performance, but potentially also benefit more from even larger datasets. However, those experiments are run with fixed model sizes and only 3-4 realizations of the training set size; thus it is unclear when performance saturates.
3 EMPIRICAL SCALING LAWS FOR DENOISING
In this section, we consider the problem of estimating an image x from a noisy observation y = x+z, where z is additive Gaussian noise z ∼ N (0, σ2zI). Neural networks trained end-to-end to map a noisy observation to a clean image perform best, and outperform classical approaches like BM3D (Dabov et al., 2007) and non-local means (Buades et al., 2005) that are not based on training data.
Datasets. To enable studying learned denoisers over a wide range of training set sizes we work with the ImageNet dataset (Russakovsky et al., 2015). Its training set contains 1.3M images of 1000 different classes. We reserve 20 random classes for validation and testing. We design 10 training set subsets SN of sizes N ∈ [100, 100000] (see Fig. 1(a)) and with Si ⊆ Sj for i ≤ j. To make the distributions of images between the subsets as homogeneous as possible, the number of images from different classes within a subset differs by at most one. We generate noisy images by adding Gaussian noise with standard deviation σz = 25 (for pixel values in the range from 0 to 255), i.e., 20.17dB in PSNR.
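The noise level translates to PSNR as $10\log_{10}(255^2/\sigma_z^2) \approx 20.17$dB; a short NumPy sketch of the noise model and this conversion (the clean image here is a random stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 255, size=(256, 256, 3))   # stand-in clean image
y = x + 25.0 * rng.standard_normal(x.shape)   # noisy observation

mse = np.mean((y - x) ** 2)                   # ~ sigma_z^2 = 625
print(10 * np.log10(255.0**2 / mse))          # ~20.17 dB
```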
Model variants and training. We train a CNN-based U-Net, a detailed description of the model architecture is in Appx. A.1. We also train the Swin-Transformer (Liu et al., 2021) based SwinIR model (Liang et al., 2021) that achieves SOTA for a variety of image reconstruction tasks, including denoising. This model is interesting since transformers scale well with the number of training examples for other domains, e.g., for image classification pre-training (Dosovitskiy et al., 2020).
For U-Net and SwinIR we vary the number of network parameters as depicted in Fig. 1 (b) and Fig. 2 (b) respectively. Appx. A.1 and A.3 contain detailed descriptions on how the models are trained and the network size is adapted.
Results and discussion. Fig. 1(a) and Fig. 2(a) show the reconstruction performances of the best U-Net and SwinIR respectively over all considered network sizes. Our main findings are as follows.
A training set size of around 100 is already sufficient to train a decent image denoiser. Beyond 100, we can fit a linear power law to the performance of the U-Net with a scaling coefficient α = 0.0048 that approximately holds up to training set sizes of about 6k images. Beyond 6k, we can fit a second linear power law with significantly smaller scaling coefficient α = 0.0019.
Similarly, for the SwinIR we can fit a power law with coefficient α = 0.0050 in the regime of little training data and a power law with significantly smaller coefficient α = 0.0017 in the regime of moderate training data, thus the two architectures scale essentially equivalently.
While denoising benefits from more training examples, the drop in scaling coefficient indicates that scaling from thousands to millions of images is only expected to yield relatively marginal performance gains. While the gains are small and we expect gains from further scaling to be even smaller, our largest SwinIR trained on 100k images yields a new SOTA for Gaussian color image denoising. On four common test sets it achieves an improvement between 0.05 to 0.22dB, see Appx. B. Visualizations of reconstructions from models along the curves in Fig. 1(a) and Fig. 2(a) can be found in Fig. 4, Appx. A.
Comparing the U-Net scaling to the SwinIR scaling (Fig. 2(a)), we see that the effect of large training sets on the performance of the U-Net cannot make up for the improved modeling error that stems from the superior network architecture of the SwinIR. In fact, the simulated scaling coefficient α = 0.0019 predicts a required training set size of about 150M images for the U-Net to achieve the best performance of the SwinIR, and that is an optimistic prediction, since it assumes that the scaling does not slow down further for another 3 orders of magnitude. Thus, model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
An interesting question is whether different noise levels lead to different data requirements, as indicated by a different scaling behavior. We investigate the performance of the U-Net for a smaller noise standard deviation σz = 15 in Appx. D.2. While the results are still preliminary, the qualitative scaling behavior is similar; the differences are that the scaling law pertaining to smaller noise is slightly steeper, and overall the performance improves by about 1.3dB in PSNR.
Finally, note that for our results we choose to re-sample the noise per training image in every training epoch, since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is more similar to a setup in which the noise statistics are unknown like in real-world noise removal. See Appx. D.1 for additional results where the noise is fixed. We found that compared to re-sampling the noise the performance of a U-Net drops by 0.3dB for small training set sizes. As the training set size increases the performance approaches the one of re-sampling the noise resulting in a steeper scaling law that flattens at slightly larger training set sizes.
Robustness of our finding of slowing scaling laws. Our main finding for denoising is that the scaling laws slow at relatively small amounts of training data points (i.e., the scaling coefficient becomes very small). We next argue why we think that this is a robust finding.
In principle it could be that the networks are not sufficiently large for the performance to improve further as the dataset size increases. However, for the U-Net and training set sizes of 10k and 30k the
performance increase already slows, and for those training set sizes, increasing the network size does not improve performance, as shown in Fig. 1(b). For smaller network sizes the curves in Fig. 1(b) first increase and then decrease indicating that we found network sizes close to the optimum.
The parameter scaling of the SwinIR in Fig. 2(b) indicates that for training set sizes of 10k and 100k the performance still benefits from larger networks. Hence, the power law for those training set sizes in Fig. 2(a) can become slightly steeper once we further increase the network size. However, for the slope of the flat power law in (a) to reach the slope of the steep power law, the parameter scaling in (b) for 100k training images would need to hold for another 2 orders of magnitude, i.e. about 12B instead of the currently used 129M network parameters, which is very unlikely considering that the current network sizes already suffice to saturate the performance for small/moderate training set sizes.
Finally, it could be that with higher quality or higher diversity of the data, the denoising performance would further improve. In our experiments, we increase the training set sizes by adding more images from the same classes of ImageNet, and we continue to test on different classes. Thus, it could be that adding training examples from a different, more diverse data source leads to less of a slowing of the scaling law in Fig. 1(a). We test this hypothesis by scaling our training set from 3k to 10k images, not by adding more images from ImageNet, but by adding images from the datasets that are used to train the original SwinIR, i.e., we add all images from DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017) and BSD500 (Arbeláez et al., 2011) and add the remaining images from WED (Ma et al., 2017). Keeping all other hyperparameters fixed, the U-Net obtains for this dataset a PSNR of 32.239dB, which is slightly worse than the 32.252dB it achieves with our training set with 10k images only from ImageNet. Hence, we reject the hypothesis that the drop in scaling coefficients can be explained by how the training sets are designed in our experiment.
4 EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING
We consider compressive sensing (CS) to achieve 4x accelerated MRI. MRI is an important medical imaging technique for its non-invasiveness and high accuracy. Due to the physics of the measurement process, MRI is inherently slow. Accelerated MRI aims at significantly reducing the scan time by undersampling the measurements. In addition, MRI scanners rely on parallel imaging, which uses multiple receiver coils to simultaneously collect different measurements of the same object.
This leads to the following reconstruction problem. We are given measurements $y_i \in \mathbb{C}^m$ of the image $x \in \mathbb{C}^n$ as $y_i = MFS_i x + \mathrm{noise}_i$ for $i = 1, \ldots, C$, and our goal is to reconstruct the image from those measurements. Here, C is the number of receiver coils, the diagonal matrices $S_i \in \mathbb{C}^{n \times n}$ are the sensitivity maps modelling the signal strength perceived by the i-th coil, $F \in \mathbb{C}^{n \times n}$ is the discrete Fourier transform (DFT), and $M \in \mathbb{C}^{m \times n}$ is a binary mask implementing the undersampling.
Datasets. To explore the performance of learning based MR reconstruction over a wide range of training set sizes, we use the fastMRI multi-coil brain dataset (Knoll et al., 2020b), which is the largest publicly available dataset for MRI. The dataset consists of images of different contrasts. To ensure that the statistics of the dataset are as homogeneous as possible, we take the subset of the dataset corresponding to a single contrast resulting in 50k training images. We design training set subsets SN of size N ∈ [50, 50000] (see Fig. 1 (c)) with Si ⊆ Sj for i ≤ j. For more information on selection, division and subsampling of the dataset see Appx. A.2.
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with 4 blocks per encoder/decoder. The network is trained end-to-end to map a coarse reconstruction $\hat x_{\mathrm{CR}}$ to the ground truth image x. The coarse reconstruction is obtained as

$$\hat x_{\mathrm{CR}} = \left(\sum_{i=1}^{C} |F^{-1} y_{i,\mathrm{ZF}}|^2\right)^{1/2},$$

where $F^{-1}$ is the inverse DFT, and $y_{i,\mathrm{ZF}}$ are the undersampled measurements of the i-th coil, with missing entries filled with zeros. We vary the number of
network parameters as depicted in Fig. 1 (d). Appx. A.2 contains detailed descriptions on how the model is trained and the network size is adapted.
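A minimal NumPy sketch of this coarse reconstruction (we use the plain, uncentered ifft2 for brevity; the fastMRI pipeline applies a centered transform with fft-shifts):

```python
import numpy as np

def coarse_reconstruction(y_zf):
    """Root-sum-of-squares of the coil-wise zero-filled inverse DFTs.

    y_zf: complex array (C, H, W) of undersampled k-space per coil,
    with unobserved entries set to zero."""
    coil_images = np.fft.ifft2(y_zf, axes=(-2, -1))
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

rng = np.random.default_rng(0)
y_zf = rng.standard_normal((4, 16, 16)) + 1j * rng.standard_normal((4, 16, 16))
print(coarse_reconstruction(y_zf).shape)  # (16, 16)
```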
Results and discussion. For each training set size, Fig. 1(c) shows the reconstruction performance in structural similarity (SSIM) of the best model over all simulated network sizes. There are two main findings. First, we can fit a linear power law with a scaling coefficient α = 0.0072 that holds up to training set sizes of about 2.5k images. Second, for training set sizes starting from 5k we can fit a second linear power law, but with a significantly smaller scaling coefficient α = 0.0026.
Similar to denoising in Section 3, we conclude that while accelerated MRI slightly benefits from more training examples, the drop in scaling coefficient indicates that it is unlikely that even training set sizes on the order of hundreds of millions of images would result in substantial gains. Visualizations of reconstructions from models along the curve in Fig. 1(c) can be found in Fig. 5, Appx. A.2.
Robustness of our finding of slowing scaling laws. Next, we argue why the drop in scaling coefficient for accelerated MRI is a robust finding.
In Fig. 1 (d) we demonstrate that the network size does not bottleneck the performance of our largest training sets. For each training set size the performance as a function of the number of network parameters is relatively flat. Even small training sets benefit from large networks before their performance slightly decays for very large networks. Hence, we expect that for large training sets the performance as a function of network parameters would not increase significantly before decaying.
Finally, is there a different type of training data that would improve the scaling coefficient? We do not think so, since all experiments are for the very specific task of reconstructing brain images of one particular contrast, which means that the examples that we add to increase the size of the training sets are already the data most suitable for solving this task.
5 UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY
We study a simple linear denoiser that is trained end-to-end to reconstruct a clean image from a noisy observation, in order to understand how the error as a function of the number of training examples for inverse problems is expected to look like.
We define a joint distribution over a signal and the corresponding measurement (x,y) as follows. Consider a d < n dimensional subspace of Rn parameterized by the orthonormal basis U ∈ Rn×d. We draw a signal approximately uniformly from the subspace x = Uc, where c ∼ N (0, I), and an associated noisy measurement as y = x+ z, where z ∼ N (0, σ2zI) is Gaussian noise. The subspace is unknown, but we are given a training set {(x1,y1), . . . , (xN ,yN )}, consisting of examples drawn iid from the joint distribution over (x,y).
We assume that the signal lies in a low dimensional subspace. Assuming that data lies in a lowdimensional subspace or more general, a union of low-dimensional subspaces, is common, for example it underlies the denoising of natural images via wavelet thresholding (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). Mohan et al. (2020) found that even deep learning based denoisers implicitly perform a projection onto an adaptively-selected low-dimensional subspace that captures the features of natural images.
We consider a linear estimator of the form $f_W(y) = Wy$ and measure performance in terms of the expected mean-squared reconstruction error, defined as $R(W) = \frac{1}{d}\mathbb{E}\left[\|Wy - x\|_2^2\right]$, where the expectation is over the joint distribution of (x, y).

The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk R) is given by $W^* = \frac{1}{1+\sigma_z^2} U U^T$. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(W^*) = \sigma_z^2/(1+\sigma_z^2)$.
An estimator based on subspace estimation. The optimal estimator $W^*$ requires knowledge of the unknown subspace. We can learn the subspace from the noisy data by performing principal component analysis on $Y = [y_1, \ldots, y_N]$. Specifically, we estimate the subspace as the d leading singular vectors of the empirical covariance matrix $YY^T \in \mathbb{R}^{n \times n}$, denoted by $\hat U \in \mathbb{R}^{n \times d}$. If the measurement noise is zero (i.e., $\sigma_z^2 = 0$), $\hat U$ is an orthonormal basis for the d-dimensional subspace U provided we observe at least d many linearly independent signals, which occurs with probability one if we draw data according to the model defined above, and if the number of training examples obeys $N \ge d$. There is a vast literature on PCA; see Vershynin (2011; 2018); Rudelson & Vershynin (2010); Tropp (2012) for tools from high-dimensional probability to analyze the PCA estimate.
Now consider the estimator W_PCA y with W_PCA = (1/(1 + σ_z^2)) ÛÛ^T, based on N noisy training points. The estimator assumes knowledge of the noise variance, but it is not difficult to estimate it relatively accurately. The following result, proven in Appx. F, characterizes the associated risk. Theorem 1. Suppose that the number of training examples obeys (d + nσ_z^2) log(n) ≤ N. For a numerical constant c, with probability at least 1 − n^{−10} − 3e^{−d} + e^{−n}, the risk of the PCA-estimate is bounded by

R(W_PCA) ≤ R(W*) + c (d + nσ_z^2) log(n)/N.

Thus, as long as the number of training examples N is sufficiently large relative to d + nσ_z^2, the risk of the PCA-estimator is close to the risk of the optimal estimator.
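A short sketch of the PCA estimate in NumPy, continuing the data model snippet above (it reuses the variables U, Y and the illustrative parameters; the risk function implements the closed-form expression derived in Appx. E):

    tau = 1.0 / (1.0 + sigma_z**2)
    W_opt = tau * U @ U.T                          # optimal estimator W*

    # d leading eigenvectors of the empirical covariance YY^T (eigh sorts ascending)
    _, eigvecs = np.linalg.eigh(Y @ Y.T)
    U_hat = eigvecs[:, -d:]
    W_pca = tau * U_hat @ U_hat.T

    def risk(W):
        # R(W) = (1/d)||(W - I)U||_F^2 + (sigma_z^2/d)||W||_F^2, cf. Eq. (1) in Appx. E
        return (np.linalg.norm((W - np.eye(n)) @ U)**2
                + sigma_z**2 * np.linalg.norm(W)**2) / d

    gap = risk(W_pca) - risk(W_opt)                # shrinks roughly like (d + n*sigma_z^2)/N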
Estimator learned end-to-end. We now consider an estimator learned end-to-end by applying gradient descent to the empirical risk L(W) = ∑_{i=1}^N ‖W y_i − x_i‖_2^2, regularized via early stopping of the gradient descent iterations. The risk of the estimate after k iterations of gradient descent, W_k, is bounded by the next result. This estimator mimics the supervised training considered throughout this work, with the difference that here we learn a simple linear model, whereas in the previous sections we trained neural networks. Theorem 2. Let W_k be the matrix obtained by applying k iterations of gradient descent with stepsize η, starting at W_0 = 0, to the loss L(W). Consider the regime where the number of training examples obeys (d + nσ_z^2) log(n) ≤ N ≤ ξd/σ_z^2 for an arbitrary ξ, and N log(N) ≤ n. For an appropriate choice of the stepsize η there exists an optimal early stopping time k_opt at which the risk of the estimator f_W(y) = W_{k_opt} y is upper-bounded, with probability at least 1 − 2e^{−N/8} − 2e^{−N/18} − 5n^{−9} − 5e^{−d} − 2e^{−n} − e^{−N} − 2e^{−n/2}, by

R(W_{k_opt}) ≤ (8 + 2ξ) R(W*) + c (1 + nσ_z^2/d) log(n) · (d + nσ_z^2) log(n)/N + c ξ √((d + σ_z^2 n) log(n)/N),

where c is a numerical constant.
The proof is provided in Appx. G. Similar to Thm. 1, the first term in the bound corresponds to the noise-dependent error floor, and the remaining two terms decrease in (d + nσ_z^2)/N and represent the error induced by learning the estimator from a finite number of training examples. Once this error is small relative to the error floor, more training examples do not improve performance. See Appx. E for a more detailed discussion of the estimator and the assumptions of Thm. 2.
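The early-stopped estimator is equally simple to simulate. The sketch below, continuing the snippets above, runs gradient descent on the empirical risk and records the risk after each iteration; an oracle choice of the stopping time then mimics k_opt:

    eta = 1.0 / (N * (1 + sigma_z**2))             # stepsize on the scale used in the proof
    W = np.zeros((n, n))
    risks = []
    for k in range(100):
        W = W - eta * (W @ Y - X) @ Y.T            # gradient step on L(W) = ||WY - X||_F^2
        risks.append(risk(W))
    k_opt = int(np.argmin(risks))                  # oracle early-stopping time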
Discussion. In Fig. 3, we plot the risk as a function of the training set size for the early-stopped empirical risk minimization (ERM) and the PCA-based estimator. Starting at a training set size of N = 100, we see that the risk minus the optimal risk follows approximately a power law, i.e., log(R(W) − R(W*)) ≈ −α log(N), as suggested by Thms. 1 and 2. In practice, however, we do not know the risk of the optimal estimator, R(W*). Therefore, in the second row of Fig. 3 and throughout our empirical results we plot the risk log(R(W)) as a function of the number of training examples log(N) and distinguish different regions by fitting approximate scaling coefficients α to log(R(W)) ≈ −α log(N). In our theoretical example, we can identify the three regions coined in Hestness et al. (2017): the small data region, in which the training set size does not suffice to learn a well-performing mapping; the power-law region, in which the performance decays approximately as N^{−α} with α > 0; and a problem-specific irreducible error region, which here is R(W*). Note that the power-law coefficients in the plot of R(W) are smaller than those obtained when plotting R(W) − R(W*), and are only approximate scaling coefficients. Already in this highly simplified setup of an inverse problem, the true scaling coefficients depend heavily on model parameters such as the noise variance, as can be seen in Fig. 3. The scaling coefficients further vary with the signal dimension d and the ambient dimension n (see Appx. E.1).
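The fitting procedure itself is a least-squares fit on log-log axes; a minimal sketch with synthetic risks that include an error floor, mimicking the behavior discussed above:

    import numpy as np

    Ns = np.array([1e2, 3e2, 1e3, 3e3, 1e4, 3e4, 1e5])
    Rs = 0.5 * Ns**-0.3 + 0.05                     # synthetic risk with error floor 0.05

    slope, _ = np.polyfit(np.log(Ns), np.log(Rs), 1)
    alpha = -slope                                 # approximate coefficient, smaller than 0.3
    slope_floor, _ = np.polyfit(np.log(Ns), np.log(Rs - 0.05), 1)  # recovers 0.3 exactly

As in Fig. 3, subtracting the (here known) error floor before fitting yields a larger exponent than fitting the raw risk.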
6 CONCLUSION
We found that the performance improvement of deep learning based image reconstruction as a function of the number of training examples slows already at moderate training set sizes, indicating that only marginal gains are expected beyond a few thousand examples.
Limitations. This finding is based on studying three different reconstruction problems (denoising, super-resolution, and compressive sensing), two architectures (U-net and SwinIR), and for each setup we extensively optimized hyperparameters and carefully scaled the networks. Scaling gave new state-of-the-art results on four denoising datasets, which provides some confidence in the setup.
Moreover, our statements necessarily pertain to the architectures and metrics we study. It is possible that other architectures, or a different way of scaling the architectures, yield larger improvements with training set size. It is also widely acknowledged that image quality metrics (such as SSIM and PSNR) do not fully capture the image quality perceived by humans, and it could be that more training data yields improvements in image quality that are not captured well by SSIM and PSNR.
Most importantly, our findings pertain to standard in-distribution evaluation, i.e., the test images are from the same distribution as the training images. To achieve robustness against testing on images out of the training distribution recent work indicates a positive effect of larger and more diverse training sets (Miller et al., 2021; Darestani et al., 2022; Nguyen et al., 2022). Thus it is possible that the out-of-distribution performance of image reconstruction methods improves when trained on a larger and a more diverse dataset, even though the in-distribution performance does not improve further.
Future research. We focus on supervised image reconstruction methods. For image denoising and other image reconstruction problems, self-supervised approaches also perform very well (Laine et al., 2019; Wang et al., 2022; Zhou et al., 2022), even though supervised methods perform best if clean images are available. It would be very interesting to investigate the scaling laws for self-supervised methods; perhaps more training examples are required to achieve the performance of a supervised setup. We leave this for future work.
REPRODUCIBILITY
The repository at https://github.com/MLI-lab/Scaling_Laws_For_Deep_Learning_Based_Image_Reconstruction contains the code to reproduce all results in the main body of this paper.
ACKNOWLEDGMENTS
The authors acknowledge support by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the DAAD, and the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
A DETAILS OF THE EXPERIMENTAL SETUPS
A.1 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a U-Net presented in Fig. 1(a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 1 (a). In these examples the improvement in perceived image quality from increasing the training set size from 100 to 1000 is larger than from increasing from 1000 to 10000 or from 10000 to 100000. This correlates with our quantitative findings in Fig. 1 (a).
Next, we describe the experimental details. We train U-Nets with two blocks in the encoder and decoder part respectively and skip connections between blocks. Each block consists of two convolutional layers with LeakyReLU activation and instance normalization (Ulyanov et al., 2017) after every layer, where the number of channels is doubled (halved) after every block in the encoder (decoder). The downsampling in the encoder is implemented as average pooling and the upsampling in the decoder as transposed convolutions. As proposed by Zhang et al. (2017) we train a residual denoiser that learns to predict y − x instead of directly predicting x, which improves performance. We scale the network size by increasing the width of the network by scaling the number of channels per (transposed) convolutional layer. For denoising we trained U-Nets of 7 different sizes. We vary the number of channels in the first layer in {16, 32, 64, 128, 192, 256, 320}, which corresponds to {0.1, 0.5, 1.9, 7.4, 16.7, 30.0, 46.5} million parameters. We do not vary the depth, since this would change the dimension of the informational bottleneck in the U-Net and thus change the model family itself.
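A sketch of one such encoder/decoder block in PyTorch (the exact layer ordering and initialization of our networks may differ in details; this only illustrates the structure described above):

    import torch.nn as nn

    class Block(nn.Module):
        # two conv layers, each followed by LeakyReLU and instance normalization
        def __init__(self, c_in, c_out):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(c_in, c_out, 3, padding=1),
                nn.LeakyReLU(), nn.InstanceNorm2d(c_out),
                nn.Conv2d(c_out, c_out, 3, padding=1),
                nn.LeakyReLU(), nn.InstanceNorm2d(c_out),
            )
        def forward(self, x):
            return self.net(x)

    # residual denoising: the network predicts y - x, so the estimate is y - net(y)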
The exact training set sizes we consider are {0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60, 100} thousand images from ImageNet. We center crop the training images to 256 × 256 pixels. We also tried a smaller patch size of 128× 128 pixels, but larger patches showed to have a better performance in the regime of large training set sizes, which is why the results presented in the main body are for patch size 256× 256. For a comparison between the scaling behavior of the two different patch sizes see Fig. 9, Appx. D.3.
We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. For validation and testing we use 80 and 300 images respectively taken from 20 random classes that are not used for training.
We use mean-squared-error loss and the Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999. For moderate training set sizes of up to 3000 images we find that heuristically adjusting the initial learning rate with the help of an automated learning rate annealing performs well. To this end, we start with a learning rate of 10^-4 and increase it after every epoch by a factor of 2 until the validation loss does not improve for 3 consecutive epochs. We then load the model checkpoint from the learning rate that was still performing well and continue with that learning rate. We observe that this scheme typically picks an initial learning rate reduced by a factor of 2 for every increase in the number of channels C. For training set sizes larger than 3000 we directly apply this rule to pick an initial learning rate without annealing, as we found this to give slightly better results. In particular, for numbers of channels C = {128, 192, 256, 320} we used initial learning rates η = {0.0032, 0.0016, 0.0008, 0.0004}. For small training sets of up to 1000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10 and found that further increasing the batch size does not improve performance.
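In pseudocode, the annealing heuristic described above looks as follows (train_one_epoch, save_checkpoint and load_checkpoint are hypothetical helpers standing in for our training loop):

    lr, best_val, stall = 1e-4, float("inf"), 0
    while stall < 3:
        val_loss = train_one_epoch(model, lr)      # hypothetical training routine
        if val_loss < best_val:
            best_val, stall = val_loss, 0
            save_checkpoint(model, lr)             # remember the last well-performing lr
        else:
            stall += 1
        lr *= 2                                    # double the lr after every epoch
    model, lr = load_checkpoint()                  # continue from the last good lr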
We do not put a limit on the amount of compute for training. We use an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 10 epochs, or 6 epochs for training set sizes starting from 6000 images. Once the learning rate drops to 10^-5 we observe next to no gains in validation loss and stop the training after 10 additional epochs.
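This decay rule can be approximated with PyTorch's built-in scheduler (a sketch; our actual implementation may differ in bookkeeping details):

    import torch

    model = torch.nn.Conv2d(3, 3, 3)               # placeholder for the U-Net
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="max", factor=0.5, patience=10,
        threshold=0.001, threshold_mode="abs")     # halve lr if validation PSNR stalls
    # per epoch: scheduler.step(val_psnr); stop ~10 epochs after lr reaches 1e-5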
For training set sizes up to 1000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1800 GPU hours for the experiments in Fig. 1(a),(b).
A.2 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for compressive sensing MRI presented in Fig. 1(c),(d) and Section 4.
In addition, Fig. 5 shows examples of reconstructions from different models along the performance curve in Fig. 1 (c). In these examples the improvement in perceived image quality from increasing the training set size from 500 to 2500 is larger than from increasing from 2500 to 10000 or from 10000 to 50000. This correlates with our quantitative findings in Fig. 1 (c).
The first row in Fig. 5 shows an example in which all models, including the one trained on the largest training set, fail to recover a fine detail. This is a known problem in accelerated MRI and has been documented in both editions of the fastMRI challenge (Knoll et al., 2020a; Muckley et al., 2021), in which, for all methods, examples could be found where fine details were not recovered. However, the question remains whether more training data or better models would help, or whether the details are simply not recoverable because the information is lost due to the large undersampling factors considered in the fastMRI challenges and in this work.
Next, we describe the experimental details. For compressed sensing in the context of accelerated MRI we trained U-Nets of 14 different sizes. We vary the number of channels in the first layer in {16, 32, 48, 64, 96, 112, 128, 144, 160, 176, 192, 208, 224, 256}, which corresponds to {2, 8, 18, 31, 70, 95, 124, 157, 193, 234, 279, 327, 380, 496} million network parameters. The exact training set sizes we consider are {0.05, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50} thousand AXT2 weighted images from fastMRI multi-coil brain dataset (Zbontar et al., 2018), where AXT2 corresponds to all images of one type of contrast. We focused on images only from this type to make the statistics of our datasets as homogeneous as possible. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. We use 4732 and 730 additional images for testing and validation.
We consider an acceleration factor of 4, meaning that we only measure 25% of the information. We obtain the 4 times undersampled measurements by masking the fully sampled measurement y with an equispaced mask with 8% center fractions, meaning that in the center of y we take all the measurements and take the remaining ones at equispaced intervals.
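A sketch of such a mask in NumPy (in the spirit of the fastMRI equispaced masks, not the exact library implementation; with the center columns added, slightly more than 25% of the columns are kept):

    import numpy as np

    def equispaced_mask(n_cols, acceleration=4, center_fraction=0.08, offset=0):
        mask = np.zeros(n_cols, dtype=bool)
        n_center = int(round(n_cols * center_fraction))
        pad = (n_cols - n_center) // 2
        mask[pad:pad + n_center] = True            # fully sampled center (8%)
        mask[offset::acceleration] = True          # equispaced columns elsewhere
        return mask

    mask = equispaced_mask(368)                    # 1D column mask for a 368-wide k-space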
We use structural similarity (SSIM) loss and the RMSprop optimizer with α = 0.99, as this is the default in the fastMRI repository (Zbontar et al., 2018), and we found no improvement by replacing it with Adam. We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that starts at a learning rate of 10^-3 and decays by a factor of 0.1 if the validation SSIM has not improved by at least 10^-4 for 5 epochs. Once the learning rate drops to 10^-6 we stop the training after 10 additional epochs. Only for the largest training set sizes of 25k and 50k we found that an additional drop to 10^-7 resulted in further performance gains. We use a batch size of 1.
For training set sizes up to 5k we train three models with random seeds and pick the best. For larger training set sizes we only run one seed, since the variance between runs decreased.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 4250 GPU hours for the experiments in Fig. 1 (c),(d).
A.3 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH THE SWINIR
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a SwinIR presented in Fig. 2(a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 2(a). Despite the best SwinIR clearly outperforming the best U-Net in terms of PSNR, it is difficult for the naked eye to notice large differences in the quality of the reconstructions. This indicates that for Gaussian denoising both models already operate in a regime where the remaining improvements to be made are only marginal.
Next, we describe the experimental details. To obtain Fig. 2 we train the SwinIR for color denoising from Liang et al. (2021) on the same training sets mentioned in Appx. A.1 with {0.1, 0.3, 1, 10, 100} thousand images from ImageNet. Instead of center cropping the training images to 256×256 we have to crop to 128×128 pixels, as larger input patches would make it computationally infeasible for us to train large versions of the SwinIR. The largest SwinIR alone took over 2 months to train on 4 NVIDIA A40 GPUs.
We train 4 different network sizes with {3.7, 11.5, 41.8, 128.9} million parameters. We denote the four network sizes as small(S)/middle(M)/large(L)/huge(H).
The training details and network configurations are as follows. The default SwinIR for denoising (Liang et al., 2021) was proposed for a training set size of about 10k images, 11.5M network parameters and was trained with a batch size of 8 for T =1280 epochs, where the learning rate is halved at [0.5T ,0.75T , 0.875T , 0.9375T ] epochs. We keep the learning rate schedule but adjust the maximal number of epochs T according to the training set size. Table 1 shows batch size and maximal number of epochs for every experiment in Fig. 2. We did not optimize over the choice of the batch size but picked the batch size as prescribed by the availability of computational resources.
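The schedule corresponds to a standard multi-step decay; a sketch with PyTorch (the initial learning rate shown is an assumption based on the SwinIR defaults):

    import torch

    T = 1280                                       # maximal number of epochs
    model = torch.nn.Conv2d(3, 3, 3)               # placeholder for the SwinIR
    optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
    milestones = [int(f * T) for f in (0.5, 0.75, 0.875, 0.9375)]
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.5)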
See Liang et al. (2021) for a detailed description of the SwinIR network architecture. We vary the network size by adjusting the number of residual Swin Transformer blocks, the number of Swin Transformers per block, the number of attention heads per Swin Transformer, the number of channels in the input embedding and the width of the fully connected layers in a Swin Transformer. Table 2 contains a summary of the settings. When scaling up the network size, we invested in the parameters that seemed to be most promising in the ablation studies in Liang et al. (2021).
The experiments were conducted on four NVIDIA A40 and four NVIDIA RTX A6000 GPUs. We used about 13000 GPU hours for the transformer experiments in Fig. 2. For training the models in parallel on multiple GPUs we utilize the torch.distributed package with the gloo backend, instead of the faster nccl backend, which was unfortunately not available on our hardware at that time.
B BENCHMARKING OUR MODELS FOR GAUSSIAN IMAGE DENOISING
In this Section, we evaluate the models we trained for image denoising in Section 3 on four common test sets from the literature and show that the largest SwinIR trained on the largest dataset achieves a new SOTA for all four test sets and the considered noise level.
In Fig. 2 in the main body we compared the performance of the U-Nets and the SwinIRs trained on subsets of ImageNet for image denoising. The models are evaluated on a test set sampled from ImageNet. We observed that while SwinIRs significantly outperform U-Nets, the performance gain from increasing the training set size slows already at moderate training set sizes for both architectures equally. However, there is still a moderate performance gain from scaling the models, and thus we expect the largest SwinIR trained on the largest dataset to outperform the original SwinIR from Liang et al. (2021). Our results in this section show that this is indeed the case.
In Table 3 we evaluate on the standard test sets for Gaussian color image denoising CBSD68 (Martin et al., 2001), Kodak24 (Franzen, 1999), McMaster (Zhang et al., 2011) and Urban100 (Huang et al., 2015). We observe a significant performance difference between the best U-Net (46.5M parameters, 100k training images) and the other transformer based methods. As expected, our largest SwinIR trained on the largest dataset SwinIR 100/H outperforms the original SwinIR, but also the SCUnet (Zhang et al., 2022) a later SOTA model that has been demonstrated to outperform the original SwinIR.
We also depict the gains for the SwinIR from just scaling up the network size and then from scaling up both network size and training set size. While this led to a new SOTA for Gaussian image denoising, note that, on the downside, training the SwinIR 10/M, which is comparable to the original SwinIR, took about 2 weeks on 4 NVIDIA A40 GPUs, while training the SwinIR 100/H took over 2 months.
C EMPIRICAL SCALING LAWS FOR IMAGE SUPER-RESOLUTION WITH A U-NET
In this Section, we consider the problem of super-resolution, i.e., estimating a high-resolution image from a low-resolution version of the image. This can be viewed as a compressive sensing problem, since we can view super-resolution as reconstructing a signal x ∈ R^n from a downsampled version y = Ax, where the matrix A ∈ R^{m×n} implements a downsampling operation like bicubic downsampling, or blurring followed by downsampling. As first shown in the pioneering work of Dong et al. (2014), data-driven neural networks trained end-to-end outperform classical model-based approaches (Gu et al., 2012; Michaeli & Irani, 2013; Timofte et al., 2013).
Dong et al. (2014) reports that for super-resolution with a simple three-layer CNN, the gains from very large training sets do not seem to be as impressive as in high-level vision problems like image classification. Liang et al. (2021) plot the super-resolution performance of the SOTA SwinIR model as a function of the training set size up to 3600 training examples for a fixed network size. When plotting their results on a logarithmic scale, we observe that the performance improvement follows a power-law. It is unclear, however, whether this power law slows beyond this relatively small number of images.
In this Section, we obtain scaling laws for super-resolution with a U-Net over a wide range of training set and network sizes, similar to those for denoising and compressive sensing in the main body.
Datasets. We use the same training, validation and test sets as described in Appx. A.1 with training set sizes N ∈ [100, 100k] images from ImageNet. On top we add two larger training sets of size 300k and 600k. Instead of center cropping the training images to 256× 256 we follow the super-resolution experiments in Liang et al. (2021) and train on images cropped to 128 × 128 pixels. We consider super-resolution of factor 2 so the low-resolution images have size 64 × 64. The low-resolution images are obtained with the bicubic downsampling function of Python’s PIL.Image package.
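The low-resolution inputs can be generated as in the following sketch (the file name is a placeholder):

    from PIL import Image

    img = Image.open("crop.png")                   # hypothetical 128x128 training crop
    lr_img = img.resize((img.width // 2, img.height // 2), Image.BICUBIC)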
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with only one block per encoder/decoder as this resulted in slightly better results than with two blocks. The network is trained end-to-end to map a coarse reconstruction, obtained through bicubic upsampling, to the residual between the coarse reconstruction and the high-resolution ground truth image.
We vary the number of channels in the first layer in {8, 16, 32, 64, 128, 192, 256, 320, 448, 564}, which corresponds to {0.007, 0.025, 0.10, 0.40, 1.6, 3.6, 6.4, 10.0, 20.0, 31.2} million network parameters. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples.
We use the ℓ1-loss and the Adam optimizer with its default settings. For all experiments we find a good initial learning rate with the same annealing strategy as described in Appx. A.1. However, instead of picking the largest learning rate for which the validation loss does not diverge, we pick the second largest, which leads to slightly more stable results. We start the annealing with a learning rate of 10^-5. In the few cases where our heuristic leads to a degenerate training curve, typically due to picking a significantly too small or too large learning rate, starting the annealing with a smaller learning rate of 10^-6 resolves the problem.
We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 8 epochs. We stop the training once the validation loss has not improved for two consecutive learning rates. For training sets of up to 10000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10.
For training set sizes up to 10000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1500 GPU hours for the experiments in Fig. 6.
Results and discussion. For each training set size, Fig. 6(a) shows the reconstruction performance in PSNR of the best model over all simulated network sizes in Fig. 6(b). Since the curves per training set size in Fig. 6(b) are relatively flat, further scaling up the network size is not expected to significantly improve the performance on the studied training sets. Here are the two main findings:
A linear power law with a scaling coefficient α = 0.0075 holds roughly up to training set sizes of about 30k images, and for training set sizes starting from 60k this slows to a linear power law with a significantly smaller scaling coefficient α = 0.0029. With this slowed scaling law, a training set size of 1.6B images would be required to increase performance by another 1dB (assuming the relation 32.05·N^0.0029 persists, which is likely to slow down even further).
While slowing already at a few tens of thousands of training images, the scaling laws for super-resolution do not slow as early as those for denoising and compressive sensing (see Fig. 1). This could partially be due to training on image patches of size 128×128 as opposed to the size 256×256 used for denoising.
D ADDITIONAL EMPIRICAL SCALING LAWS FOR GAUSSIAN DENOISING
In this Section, we extend our results for Gaussian denoising with a U-Net from Section 3 with two additional setups. Section D.1 considers fixing the noise sampled in each training epoch and Section D.2 investigates reducing the noise level from σz = 25 to σz = 15.
D.1 EMPIRICAL SCALING LAWS FOR DENOISING WITH FIXED NOISE
Our main results for denoising with a U-Net trained end-to-end, discussed in Section 3, follow a setup in which the noise per training example is re-sampled in every training epoch. We choose this setup since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a denoising setup in which the noise statistics are unknown, which is the case in some real-world noise removal problems. In such a problem, we would be given pairs of noisy and clean images, and could not synthesize new noisy images from the clean images.
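The difference between the two setups amounts to where the noise is drawn in the data pipeline; a PyTorch sketch (clean_images is a placeholder for a tensor of training crops with pixel values in [0, 255]):

    import torch

    class NoisyDataset(torch.utils.data.Dataset):
        def __init__(self, clean_images, sigma=25.0, fixed=False):
            self.x, self.sigma, self.fixed = clean_images, sigma, fixed
        def __len__(self):
            return len(self.x)
        def __getitem__(self, i):
            x = self.x[i]
            if self.fixed:                         # one fixed noise realization per image
                g = torch.Generator().manual_seed(i)
                z = torch.randn(x.shape, generator=g) * self.sigma
            else:                                  # fresh noise in every epoch
                z = torch.randn_like(x) * self.sigma
            return x + z, x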
In this section, we follow the same experimental setup from Appx. A.1 to simulate the performance of a U-Net for denoising with fixed noise for up to 100k training images (see Fig. 7). Compared to re-sampling the noise, we observe a drop in performance of about 0.3dB for small training set sizes. The performance difference at 10k images is reduced to 0.2dB resulting in a slightly steeper scaling law for moderate training set sizes. However, at around 10k training images the scaling of the performance of training with fixed noise also starts to flatten as it approaches the performance of re-sampling the noise during training. This indicates that if the noise statistics are unknown, more data is required to achieve the same performance as when they are known. However, in both cases the scaling with training set size slows already down at moderate training set sizes.
D.2 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER NOISE LEVEL
Our results for Gaussian denoising in Section 3 are for a fixed noise level of σz = 25. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with smaller noise level of σz = 15, in order to see how the scaling laws change. The results for both noise levels are depicted in Fig. 8.
We observe an improvement of about 2.3dB in PSNR, which is expected since the irreducible error decreases for smaller noise levels. We also observe that the scaling coefficient for the smaller noise level σ_z = 15 (i.e., α = 0.0026) is slightly steeper than that for the larger noise level σ_z = 25 (i.e., α = 0.0019). This coincides with the qualitative behavior of the curves for subspace denoising in Figure 3. Apart from that, the curves are qualitatively similar, in that an initially steep power law is replaced by a slower one at around 6000 training images.
D.3 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER PATCH SIZE
Our results for Gaussian denoising in Section 3 with a U-Net were obtained for a constant training patch size of 256×256 pixels across all network and training set sizes. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller patch size of 128×128 pixels. The results for both patch sizes are depicted in Fig. 9. We observe that in the regime of large training set sizes, which we are primarily interested in, training on N patches of size 256×256 is more beneficial than training on 4N patches of size 128×128. We therefore focus on patch size 256×256 in the main body of this work.
E UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY - SUPPLEMENTARY RESULTS
In this section, we provide additional details on the statements in Section 5 on understanding scaling laws for denoising by studying a linear subspace denoising problem theoretically, and provide additional numerical results.
Recall that we consider a linear estimator of the form f_W(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error (normalized by the latent signal dimension d):

R(W) = (1/d) E[‖Wy − x‖_2^2] = (1/d) ‖(W − I)U‖_F^2 + (σ_z^2/d) ‖W‖_F^2.   (1)

Above, the expectation is over the joint distribution of (x, y), and the second equality follows from using that x = Uc, where c ∼ N(0, I) is Gaussian, and y = x + z, where the noise z ∼ N(0, σ_z^2 I) is Gaussian.
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk defined in equation (1)) is given by W* = (1/(1 + σ_z^2)) UU^T. This follows from taking the gradient of the risk (1), setting it to zero, and solving for W. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is R(W*) = σ_z^2/(1 + σ_z^2).
Early-stopped empirical risk minimization. We consider the estimator that applies gradient descent to the empirical risk

L(W) = ∑_{i=1}^N ‖W y_i − x_i‖_2^2 = ‖WY − X‖_F^2,   (2)

where X, Y ∈ R^{n×N} contain the training examples as columns, and early-stops after k iterations for regularization.
We next discuss the early-stopped estimator Wk in more detail. Fig. 10 numerically demonstrates the regularizing effect of early stopping gradient descent, where W∞ = XY† is the converged learned estimator (see Appx. G, Eq. (5)). We see that regularization is necessary for this estimator to perform well.
We next discuss Theorem 2 and the associated assumptions in more detail. The theorem considers the following regime: (i) the number of training examples obeys (d + nσ_z^2) log(n) ≤ N; (ii) N ≤ ξd/σ_z^2 for an arbitrary ξ; and (iii) N log(N) ≤ n. For this regime, the theorem guarantees that the risk of the optimally early-stopped estimator obeys, with high probability,
R(W_{k_opt}) ≤ (8 + 2ξ) R(W*) + c (1 + nσ_z^2/d) log(n) · (d + nσ_z^2) log(n)/N + c ξ √((d + σ_z^2 n) log(n)/N).   (3)

The theorem looks similar to that for the PCA estimate (Theorem 1), in that the risk is a constant away from the optimal risk, with an error term that becomes small as (d + nσ_z^2)/N becomes small. However, the error bound does not converge to R(W*) as the number of training examples, N, converges to infinity. This is probably an artifact of our analysis, but it is unclear, at least to us, how to derive a substantially tighter bound. In our analysis (see Appendix G), we balance two errors: one error decreases in k and is associated with the part of the signal projected onto the subspace, and the second error increases with k and is associated with the orthogonal complement of the subspace. We choose the early-stopping time to optimally balance those two terms, which yields the stated bound (3).
Now with regards to the assumptions: Assumption (iii), N log(N) ≤ n, means we are in the high-dimensional regime; we think this is somewhat closer to reality (for example, for denoising a 512×512 image this would require the number of training examples to be smaller than about 250k), but we can derive an analogous bound for the regime N log(N) ≥ n, where the number of training examples is larger than the ambient dimension.
Assumption (i), (d + nσ_z^2) log(n) ≤ N, is relatively mild, as it is necessary for being able to estimate the subspace somewhat accurately; this assumption is also required for the PCA estimate.

Assumption (ii), N ≤ ξd/σ_z^2 for an arbitrary ξ, is not restrictive in that ξ can be arbitrarily large; we make this assumption only so that the theorem can be stated in a convenient way. However, assumption (ii) reveals a shortcoming of Theorem 2, which is that we cannot make the bound go to zero as N → ∞, since increasing ξ increases one term in the bound and decreases another one.
E.1 ADDITIONAL NUMERICAL SIMULATIONS
In this Section we provide further numerical simulations for the PCA subspace estimator and the estimator learned with early-stopped gradient descent discussed in Section 5, Theorem 1 and Theorem 2. Similar to Fig. 3, Fig. 11 shows the risks R(W_{k_opt}) and R(W_PCA) as a function of the number of training examples N for varying values of the signal and ambient dimensions d and n, while fixing all other model parameters. In the power-law region we fit linear power laws with negative scaling coefficients α. We observe steeper power laws (larger |α|) for smaller ambient dimensions n and larger signal dimensions d. Also, the scaling coefficients of the learned estimator consistently exceed those of the PCA estimator.
F PROOF FOR THEOREM 1: RISK BOUND FOR PCA SUBSPACE ESTIMATION
We provide a bound on the risk of the estimator f(y) = W_PCA y with W_PCA = τÛÛ^T and τ = 1/(1 + σ_z^2). Recall that Û ∈ R^{n×d} contains the singular vectors corresponding to the d leading singular values of YY^T. We define Û_⊥ ∈ R^{n×(n−d)} as the orthogonal complement of Û. Starting from the risk expression given in equation (1) we obtain
R(W_PCA) = (1/d) ‖(τÛÛ^T − I)U‖_F^2 + (τ^2 σ_z^2/d) ‖ÛÛ^T‖_F^2
= (1/d) ‖((τ − 1)ÛÛ^T + Û_⊥Û_⊥^T)U‖_F^2 + τ^2 σ_z^2
= (1/d)(τ − 1)^2 ‖Û^T U‖_F^2 + (1/d) ‖Û_⊥^T U‖_F^2 + τ^2 σ_z^2
= (1/d)(τ − 1)^2 (d − ‖Û_⊥^T U‖_F^2) + (1/d) ‖Û_⊥^T U‖_F^2 + τ^2 σ_z^2
= (1 − (τ − 1)^2)(1/d) ‖Û_⊥^T U‖_F^2 + (τ − 1)^2 + σ_z^2 τ^2
= ((1 + 2σ_z^2)/(1 + σ_z^2)^2)(1/d) ‖Û_⊥^T U‖_F^2 + σ_z^2/(1 + σ_z^2)
≤_(i) ((1 + 2σ_z^2)/(1 + σ_z^2)^2) ‖Û_⊥^T U‖^2 + σ_z^2/(1 + σ_z^2)
≤_(ii) c ((1 + 2σ_z^2)/(1 + σ_z^2)^2)(d + nσ_z^2) log(2n)/N + σ_z^2/(1 + σ_z^2)
≤ c (d + nσ_z^2) log(n)/N + σ_z^2/(1 + σ_z^2).   (4)
Here, inequality (ii) follows from Section H.1, equation (34), and holds in the regime (d + nσ_z^2) log(n) ≤ N, for some constant c and with probability at least 1 − n^{−10} − 3e^{−d} + e^{−n}. This concludes the proof.
G PROOF OF THEOREM 2: RISK BOUND FOR EARLY STOPPED EMPIRICAL RISK MINIMIZATION
This section contains the proof of Theorem 2. The theorem characterizes the performance of the learned linear estimator W_k that is obtained by applying k iterations of gradient descent with stepsize η, starting at W_0 = 0, to the loss L(W) in equation (2). We start by deriving a closed-form expression for the estimator W_k. The gradient of the loss is

∇_W L(W) = (WY − X) Y^T,

and thus the iterations of gradient descent are

W_{k+1} = W_k − η(W_k Y − X) Y^T = W_k(I − ηYY^T) + ηXY^T.
Let Y = U_y Σ_y V_y^T ∈ R^{n×N} and X = U_x Σ_x V_x^T ∈ R^{n×N} be the singular value decompositions of Y and X, respectively, and assume that the singular values are non-zero and descending, i.e., σ_{y,1} ≥ σ_{y,2} ≥ …. With W_0 = 0 and k ≥ 1 we have

W_k = ηXY^T ∑_{ℓ=0}^{k−1} (I − ηYY^T)^ℓ
= ηX V_y Σ_y U_y^T ( ∑_{ℓ=0}^{k−1} (I − ηU_y Σ_y^2 U_y^T)^ℓ )
= ηX V_y Σ_y diag( ∑_{ℓ=0}^{k−1} (1 − ησ_{y,i}^2)^ℓ ) U_y^T
= X V_y D_k U_y^T,

where we defined D_k ∈ R^{N×N} as a diagonal matrix with i-th diagonal entry given by (1 − (1 − ησ_{y,i}^2)^k)/σ_{y,i}, and where we used the geometric series to obtain

∑_{ℓ=0}^{k−1} (1 − ησ_{y,i}^2)^ℓ = (1 − (1 − ησ_{y,i}^2)^k)/(ησ_{y,i}^2).

Note that for k → ∞, and choosing η such that 1 − ησ_{y,i}^2 < 1 for all i, we get

W_∞ = X V_y Σ_y^{−1} U_y^T = X Y^†.   (5)
Evaluating the risk from equation (1) at the estimator W_k gives

R(W_k) = (1/d) ‖(X V_y D_k U_y^T − I) U‖_F^2 + (σ_z^2/d) ‖X V_y D_k U_y^T‖_F^2.   (6)

To shorten notation we define

γ := (d + nσ_z^2) log(n)/N   (7)
ψ := nσ_z^2 log(n)/N.   (8)
We next provide bounds for the two terms on the right-hand side of equation (6), proven later in this section.

Bound on the first term in equation (6): In Section G.1 we show that, provided N log(N) ≤ n, 9d ≤ N and (d + nσ_z^2) log(n) ≤ N, for some constant c, with probability at least 1 − 2e^{−N/18} − 3n^{−10} − 3e^{−d} − e^{−n} − e^{−N} − 2e^{−n/2}, the following bound holds:

(1/d) ‖(X V_y D_k U_y^T − I) U‖_F^2 ≤ (c/d)(σ_z^2 n + γσ_{x,max}^2) ∑_{i=1}^d (1/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2 + (c/d) ∑_{i=1}^d (1 − ησ_{y,i}^2)^{2k} + (c/d) ψγ ∑_{i=d+1}^N (σ_{x,max}^2/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2 + cγ.   (9)
Bound on the second term in equation (6): In Section G.2 we show

(σ_z^2/d) ‖X V_y D_k U_y^T‖_F^2 ≤ (σ_z^2/d) ∑_{i=1}^N (σ_{x,max}^2/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2.   (10)
With equations (9) and (10) in place, and by splitting up the sum in (10), we can bound the right-hand side of equation (6) as

R(W_k) ≤ (1/d)(cσ_z^2 n + cγσ_{x,max}^2 + σ_z^2 σ_{x,max}^2) ∑_{i=1}^d (1/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2 + (c/d) ∑_{i=1}^d (1 − ησ_{y,i}^2)^{2k} + cγ + (c/d)(ψγ + σ_z^2) ∑_{i=d+1}^N (σ_{x,max}^2/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2.   (11)
Since the singular values of Y are in descending order, σ_{y,1} ≥ σ_{y,2} ≥ …, this is, for any iteration k, bounded as

R(W_k) ≤ (cσ_z^2 n + cγσ_{x,max}^2 + σ_z^2 σ_{x,max}^2)(1/σ_{y,d}^2) + c(1 − ησ_{y,d}^2)^{2k} + cγ + (c/d)(ψγ + σ_z^2) σ_{x,max}^2 ∑_{i=d+1}^N (1/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2.   (12)
This is a good bound once we are at an iteration k sufficiently large so that (1 − ησ_{y,1}^2)^k is small.

In (12), the term (1 − ησ_{y,d}^2)^{2k} is decreasing in the number of gradient descent steps k, and also decreasing in the stepsize η as long as ησ_{y,i}^2 ≤ 1 for i = 1, …, d. Further, the term (1 − (1 − ησ_{y,i}^2)^k)^2 is increasing in k and also increasing in η. The first term corresponds to the signal, which we want to fit sufficiently well, whereas the second term corresponds to the noise, of which we want to fit as little as possible. Hence, there exist optimal choices for k and η that trade off the sum of the two terms.
In our setup, the squares of the d leading singular values of Y, corresponding to the signal, are large and concentrate around N(1 + σ_z^2), while the remaining squared singular values are small and concentrate around Nσ_z^2. Hence, we can apply a single step of gradient descent, k = 1, to already fit a large portion of the signal while minimizing the portion of the noise that is fitted. For that, we choose the stepsize η as large as possible such that ησ_{y,i}^2 ≤ 1 still holds for i = 1, …, d. Next, suppose the following events hold:

E1 = {σ_{y,d+1}^2 ≤ N(σ_z^2 + ϵ(1 + σ_z^2))}   (13)
E2 = {σ_{y,d}^2 ≥ N(1 + σ_z^2)(1 − ϵ)}   (14)
E3 = {σ_{x,max}^2 ≤ 4N}   (15)
E4 = {σ_{y,1}^2 ≤ N(1 + ϵ)(1 + σ_z^2)}.   (16)
In Section H.4, we show that

P[E3] ≥ 1 − 2e^{−N/8}.   (17)

In Section H.4, we also show that, provided N ≥ 3Cϵ^{−2}(d + σ_z^2 n) log n for some constant C and ϵ ∈ (0, 1),

P[E1], P[E2], P[E4] ≥ 1 − e^{−d} − e^{−n} − n^{−9}.   (18)
We next bound the terms in equation (12). As discussed, we set k = 1 and the stepsize as large as possible, i.e., η = 1/(N(1 + ϵ)(1 + σ_z^2)) ≤ 1/σ_{y,1}^2, where the inequality holds on the event E4. Finally, on the event E2 we obtain

c(1 − ησ_{y,d}^2)^{2k} ≤ c(1 − ηN(1 + σ_z^2)(1 − ϵ))^2 = c(1 − (1 − ϵ)/(1 + ϵ))^2 ≤ cϵ^2.
Next we bound the sum in equation (12). Towards this goal, we upper bound each term in the sum by its linear approximation at the origin. We compute the derivative at the origin as

lim_{q→0} ∂/∂q [(1/q)(1 − (1 − ηq)^k)^2] = (ηk)^2.

Thus, we have

∑_{i=d+1}^N (1/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2 ≤ ∑_{i=d+1}^N k^2 η^2 σ_{y,i}^2 ≤ N k^2 η^2 σ_{y,d+1}^2 ≤ (σ_z^2 + ϵ + ϵσ_z^2)/((1 + ϵ)^2 (1 + σ_z^2)^2) ≤ c(σ_z^2 + 2ϵ),

on the event E1 and for σ_z^2 ≤ 1, k = 1 and η = 1/(N(1 + ϵ)(1 + σ_z^2)). Putting this together, on the events E2, E3 we get the bound
R(W_k) ≤ 8σ_z^2/(1 + σ_z^2) + cγ + c(1 − ησ_{y,d}^2)^{2k} + (cN/d)(ψγ + σ_z^2) ∑_{i=d+1}^N (1/σ_{y,i}^2)(1 − (1 − ησ_{y,i}^2)^k)^2
≤ 8σ_z^2/(1 + σ_z^2) + cγ + cϵ^2 + (cN/d)((nσ_z^2 log(n)/N)((d + nσ_z^2) log(n)/N) + σ_z^2)(σ_z^2 + 2ϵ)
≤ 8R(W*) + c(d + nσ_z^2) log(n)/N + cϵ^2 + c(((d + nσ_z^2) log(n))^2/(dN) + σ_z^2 N/d)(σ_z^2 + 2ϵ).   (19)
The bound holds provided 3Cϵ^{−2}(d + σ_z^2 n) log n ≤ N and N log(N) ≤ n, for some constants c, C, and with probability at least 1 − 2e^{−N/8} − 2e^{−N/18} − 5n^{−9} − 5e^{−d} − 2e^{−n} − e^{−N} − 2e^{−n/2}. To maximize the benefit from early stopping we set ϵ as small as possible subject to the condition 3Cϵ^{−2}(d + σ_z^2 n) log n ≤ N:

ϵ = √(3C(d + σ_z^2 n) log n / N).   (20)
With that, equation (19) becomes

R(W_k) ≤ 8R(W*) + c(d + nσ_z^2) log(n)/N + c(((d + nσ_z^2) log(n))^2/(dN) + σ_z^2 N/d)(σ_z^2 + √((d + σ_z^2 n) log n / N))
= 8R(W

1. What is the focus of the paper regarding image reconstruction?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its empirical and theoretical aspects?
3. Do you have any concerns about the literature review and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper conducts an empirical study of how the image reconstruction accuracy changes with the number of training observations for low-level tasks such as image denoising and compressed sensing. For that, multiple models with different numbers of parameters are trained for each training set, and the best test accuracy is considered for the final plot. The plot reveals that the accuracy has an initial zone of high improvement after which a zone with more modest improvement is observed. The paper also obtains the scaling plot for a vision transformer for image denoising. Finally, the paper obtains some theoretical guarantees for a related problem: low-rank linear denoising.
Strengths And Weaknesses
Strengths:
Obtains some empirical results on the best accuracy that could be obtained for different numbers of training examples for two image reconstruction tasks.
Obtains some theoretical guarantees for another image reconstruction task.
Weaknesses:
The scaling of the transformer on the MRI reconstruction is missing
The literature review of the theoretical guarantees for SVD reconstruction is weak. The authors fail to discuss how the results from Theorems 1 and 2 compare to the already existing theoretical results from the literature.
Theorem 2 is a very loose bound that has a rate worse than Theorem 1 because of the 1/sqrt(N) term plus (7+2 xi)R(W*). From Theorem 2 it would result that early stopping is worse than PCA, which is inconsistent with the experiments.
Clarity, Quality, Novelty And Reproducibility
There is novelty in both the empirical scaling plots and the theoretical guarantees, but it is not clear whether Theorem 1 has already been proved in the literature or not, and the authors fail to discuss it. The error bars in Fig 3 are hidden behind the line symbols, which are quite useless.
ICLR | Title
Scaling Laws For Deep Learning Based Image Reconstruction
Abstract
Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image achieve excellent performance for a variety of linear inverse problems. Current methods are only trained on a few hundred or thousand images, as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution, and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Interpolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early-stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
1 INTRODUCTION
Deep neural networks trained to map a noisy measurement of an image or a noisy image to a clean image give state-of-the-art (SOTA) performance for image reconstruction problems. Examples are image denoising (Burger et al., 2012; Zhang et al., 2017; Brooks et al., 2019; Liang et al., 2021), super-resolution (Dong et al., 2016; Ledig et al., 2017; Liang et al., 2021), and compressive sensing for computed tomography (CT) (Jin et al., 2017), and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018; Sriram et al., 2020; Muckley et al., 2021; Fabian & Soltanolkotabi, 2022).
The performance of a neural network for imaging is determined by the network architecture and optimization, the size of the network, and the size and quality of the training set.
Significant work has been invested in architecture development. For example, in the field of accelerated MRI, networks started out as convolutional neural networks (CNN) (Wang et al., 2016), which are now often used as building blocks in un-rolled variational networks (Hammernik et al., 2018; Sriram et al., 2020). Most recently, transformers have been adapted to image reconstruction (Lin & Heckel, 2022; Huang et al., 2022; Fabian & Soltanolkotabi, 2022).
However, it is not clear how substantial the latest improvements through architecture design are compared to potential improvements expected by scaling the training set and network size. Contrary to natural language processing (NLP) models and modern image classifiers that are trained on billions of examples, networks for image reconstruction are only trained on hundreds to thousands of example images. For example, the training set of SwinIR (Liang et al., 2021), the current SOTA for image denoising, contains only 10k images, and the popular benchmark dataset for accelerated MRI consists only of 35k images (Zbontar et al., 2018).
In this work, we study whether neural networks for image reconstruction only require moderate amounts of data to reach their peak performance, or whether major boosts are expected from increasing
the training set size. To partially address this question, we focus on three problems: image denoising, reconstruction from few and noisy measurements (compressive sensing) in the context of accelerated MRI, and super-resolution. We pick Gaussian denoising for its practical importance and since it can serve as a building block to solve more general image reconstruction problems well (Venkatakrishnan et al., 2013). We pick MR reconstruction because it is an important instance of a compressive sensing problem, and many problems can be formulated as compressive sensing problems, for example super-resolution and in-painting. In addition, for MR reconstruction the question on how much data is needed is particularly important, since it is expensive to collect medical data.
For the three problems we identify scaling laws that describe the reconstruction quality as a function of the training set size, while simultaneously scaling network sizes. Such scaling laws have been established for NLP and classification tasks, as discussed below, but not for image reconstruction.
The experiments are conducted with a U-Net (Ronneberger et al., 2015) and the SOTA SwinIR (Liang et al., 2021), a transformer architecture. We primarily consider the U-Net since it is widely used for image reconstruction and acts as a building block in SOTA models for image denoising (Brooks et al., 2019; Gurrola-Ramos et al., 2021; Zhang et al., 2021; 2022) and accelerated MRI (Zbontar et al., 2018; Sriram et al., 2020). We also present results for denoising with the SwinIR. The SwinIR outperforms the U-Net, but we find that its scaling with the number of training examples does not differ notably. Our contributions are as follows:
• Empirical scaling laws for denoising. We train U-Nets of sizes 0.1M to 46.5M parameters with training set sizes from 100 to 100k images from the ImageNet dataset (Russakovsky et al., 2015) for Gaussian denoising with 20.17dB Peak-Signal-to-Noise ratio (PSNR). While for the largest training set sizes and network sizes we consider, performance continues to increase, the rate of performance increase for training set sizes beyond a few thousand images slows to a level indicating that even training on millions of images only yields a marginal benefit, see Fig. 1(a).
We also train SOTA SwinIRs of sizes 4M to 129M parameters on the same range of training set sizes, see Fig. 2(a). Although the SwinIR performs better than the U-Net, its scaling behavior is essentially equivalent, and again only marginal benefits are expected beyond a few thousand images. Scaling up its training set and network size did, however, give benefits: the largest model we trained improves over the previous SOTA on four common test sets by 0.05 to 0.22dB.
• Empirical scaling laws for compressive sensing. We train U-Nets of sizes from 2M to 500M parameters for 4x accelerated MRI with training set sizes from 50 to 50k images from the fastMRI dataset (Knoll et al., 2020b). We again find that beyond a dataset size of about 2.5k, the rate of improvement as a function of the training set size slows considerably, see Fig. 1(c). This indicates that while models are expected to improve by increasing the training set size, we expect that training on millions of images does not significantly improve performance.
• Empirical scaling laws for super-resolution. We also briefly study super-resolution, and similarly as for denoising and compressive sensing find a slowing of the scaling law at moderate dataset sizes. See appendix C.
• Understanding scaling laws for denoising theoretically. Our empirical results indicate that the denoising performance of a neural network trained end-to-end does not increase as a function of the number of training examples beyond a certain point. This is expected: once we reach a noise-specific error floor, more data is not beneficial. We make this intuition precise for a linear estimator learned with early-stopped gradient descent to denoise data drawn from a d-dimensional linear subspace. We show that the reconstruction error is upper bounded by d/N plus a noise-dependent error floor. Once the error induced by learning the signal model, d/N, is small relative to the error floor, more training examples N are not beneficial.
Together, our empirical results show that neural networks for denoising, compressive sensing, and super-resolution applied in typical setups (i.e, Gaussian denoising with 20.17dB PSNR, and multi-coil accelerated MRI with 4x acceleration) already operate in a regime where the scaling laws for the training set size are slowing significantly and thus even very large increases of the training data are not expected to improve performance substantially. Even relatively modest model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
2 RELATED WORK
Scaling laws for prediction problems. Under the umbrella of statistical learning theory, convergence rates of 1/N or 1/√N have been established for a range of relatively simple models and distributions; see e.g. Wainwright (2019) for an overview.
For deep neural networks used in practice, a recent line of work has empirically characterized the performance as a function of training set size and/or network size for classification and NLP (Hestness et al., 2017; Rosenfeld et al., 2019; Kaplan et al., 2020; Bahri et al., 2021; Zhai et al., 2021; Ghorbani et al., 2021; Bansal et al., 2022). In those domains, the scaling laws persist even for very large datasets, as described in more detail below. In contrast, for image reconstruction, we find that the power-law behavior already slows considerably at relatively small numbers of training examples.
Rosenfeld et al. (2019) find power-law scaling of performance with training set and network size across models and datasets for language modeling and classification. However, because that work fixes either the training set or the network size while scaling the other, the scaling laws span only moderate ranges before saturating at a level determined by the fixed quantity rather than the problem-specific error floor. The papers Kaplan et al. (2020); Bahri et al. (2021); Zhai et al. (2021); Ghorbani et al. (2021); Bansal et al. (2022), including ours, scale the dataset size and model size simultaneously, resulting in an improved predictive power of the obtained scaling laws.
Hestness et al. (2017) study models for language and image classification, and attribute deviations from a power-law curve to a lack of fine-tuning the hyperparameters of very large networks. For transformer language models Kaplan et al. (2020) find no deviation from a power-law for up to a training set and network size of 1B images and parameters. Further, Zhai et al. (2021) find the performance for Vision Transformers (Dosovitskiy et al., 2020) for few-shot image classification to deviate from a power-law curve only at extreme model sizes of 0.3-1B parameters and 3B images.
The role of training set size in inverse problems. For image reconstruction and inverse problems in general, we are not aware of work studying scaling laws in a principled manner, covering different
problems and model architectures. But when proposing a new method, several works study performance as a function of training set and/or network size. However, those studies typically only scale the parameter of interest while fixing all other parameters, unlike our work, which scales dataset size and network size together; this is important for identifying scaling laws. Below, we review the training set sizes, and where available their scaling properties, used by the recent SOTA in image denoising and accelerated MRI.
Zhang et al. (2017) report that for DnCNN, a standard CNN, using more than 400 distinct images with data augmentation only yields negligible improvements. Chen et al. (2021) pre-train an image processing transformer (IPT) of 115.5M network parameters on ImageNet (1.1M distinct images), and report the performance after fine-tuning to a specific task as a function of the size of the pre-training dataset. IPT's performance for denoising is surpassed by the latest SOTA in the form of the CNN-based DRUnet (Chen et al., 2021), the transformer-based SwinIR (Liang et al., 2021), and the Swin-Conv-Unet (Zhang et al., 2022), a combination of the two. Those models have significantly fewer network parameters and were trained on a training set consisting of only ∼10k images, leaving the role of the training set size in image denoising open.
The number of available training images for accelerated MRI is limited to few publicly available datasets. Hence, current SOTA (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022) are trained on the largest datasets available (the fastMRI dataset), consisting of 35k images for knee and 70k for brain reconstruction (Zbontar et al., 2018; Muckley et al., 2021).
Adaptations of the MLP-Mixer (Tolstikhin et al., 2021) and the Vision Transformer (Dosovitskiy et al., 2020) to MRI (Mansour et al., 2022; Lin & Heckel, 2022) ablate their performance as a function of the training set size and find that non-CNN based approaches require more data to achieve a similar performance, but potentially also benefit more from even larger datasets. However, those experiments are run with fixed model sizes and only 3-4 realizations of the training set size; thus, it is unclear when performance saturates.
3 EMPIRICAL SCALING LAWS FOR DENOISING
In this section, we consider the problem of estimating an image x from a noisy observation y = x + z, where z ∼ N(0, σ_z^2 I) is additive Gaussian noise. Neural networks trained end-to-end to map a noisy observation to a clean image perform best, and outperform classical approaches like BM3D (Dabov et al., 2007) and non-local means (Buades et al., 2005) that are not based on training data.
Datasets. To enable studying learned denoisers over a wide range of training set sizes we work with the ImageNet dataset (Russakovsky et al., 2015). Its training set contains 1.3M images from 1000 different classes. We reserve 20 random classes for validation and testing. We design 10 training set subsets SN of sizes N ∈ [100, 100000] (see Fig. 1(a)) with Si ⊆ Sj for i ≤ j. To make the distributions of images between the subsets as homogeneous as possible, the number of images from different classes within a subset differs by at most one. We generate noisy images by adding Gaussian noise with standard deviation σ_z = 25 (for pixel values in the range from 0 to 255), which corresponds to noisy images at 20.17dB PSNR.
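To make the setup concrete, noisy observations and their PSNR can be generated as in the following minimal NumPy sketch (the function names are ours, for illustration):

import numpy as np

def add_gaussian_noise(x, sigma_z=25.0, rng=None):
    """Noisy observation y = x + z with z ~ N(0, sigma_z^2 I)."""
    rng = np.random.default_rng(0) if rng is None else rng
    return x + rng.normal(0.0, sigma_z, size=x.shape)

def psnr(x_hat, x, max_val=255.0):
    """Peak signal-to-noise ratio in dB for pixel values in [0, max_val]."""
    mse = np.mean((x_hat - x) ** 2)
    return 10.0 * np.log10(max_val**2 / mse)

x = np.full((256, 256), 128.0)  # placeholder for a clean image in [0, 255]
y = add_gaussian_noise(x)
print(psnr(y, x))               # approx. 20.17 dB for sigma_z = 25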
Model variants and training. We train a CNN-based U-Net; a detailed description of the model architecture is in Appx. A.1. We also train the Swin-Transformer (Liu et al., 2021) based SwinIR model (Liang et al., 2021), which achieves SOTA for a variety of image reconstruction tasks, including denoising. This model is interesting since transformers scale well with the number of training examples in other domains, e.g., image classification pre-training (Dosovitskiy et al., 2020).
For U-Net and SwinIR we vary the number of network parameters as depicted in Fig. 1 (b) and Fig. 2 (b) respectively. Appx. A.1 and A.3 contain detailed descriptions on how the models are trained and the network size is adapted.
Results and discussion. Fig. 1(a) and Fig. 2(a) show the reconstruction performances of the best U-Net and SwinIR respectively over all considered network sizes. Our main findings are as follows.
A training set size of around 100 is already sufficient to train a decent image denoiser. Beyond 100, we can fit a linear power law to the performance of the U-Net with scaling coefficient α = 0.0048, which approximately holds up to training set sizes of about 6k images. Beyond 6k, we can fit a second linear power law with a significantly smaller scaling coefficient α = 0.0019.
Similarly, for the SwinIR we can fit a power law with coefficient α = 0.0050 in the regime of little training data and a power law with a significantly smaller coefficient α = 0.0017 in the regime of moderate training data; thus the two architectures scale essentially equivalently.
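For reference, such scaling coefficients are obtained by fitting a line in log-log space; a minimal sketch with hypothetical (N, PSNR) pairs (the actual values are those plotted in Fig. 1(a) and Fig. 2(a)):

import numpy as np

# Hypothetical measurements for illustration; a power law PSNR = b * N^alpha
# appears as a line in log-log space: log(PSNR) = alpha*log(N) + log(b).
N = np.array([100.0, 300, 600, 1000, 3000, 6000])
psnr_vals = np.array([30.6, 30.8, 30.9, 31.0, 31.2, 31.3])
alpha, log_b = np.polyfit(np.log(N), np.log(psnr_vals), deg=1)
print(alpha)  # scaling coefficient of the fitted power law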
While denoising benefits from more training examples, the drop in scaling coefficient indicates that scaling from thousands to millions of images is expected to yield only relatively marginal performance gains. While the gains are small, and we expect gains from further scaling to be even smaller, our largest SwinIR trained on 100k images yields a new SOTA for Gaussian color image denoising: on four common test sets it achieves an improvement between 0.05 and 0.22dB, see Appx. B. Visualizations of reconstructions from models along the curves in Fig. 1(a) and Fig. 2(a) can be found in Fig. 4, Appx. A.
Comparing the U-Net scaling to the SwinIR scaling (Fig. 2(a)), we see that the effect of large training sets on the performance of the U-Net cannot make up for the smaller modeling error that stems from the superior network architecture of the SwinIR. In fact, the fitted scaling coefficient α = 0.0019 predicts a required training set size of about 150M images for the U-Net to reach the best performance of the SwinIR, and that is an optimistic prediction since it assumes that the scaling does not slow down further over another 3 orders of magnitude. Thus, model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
An interesting question is whether different noise levels lead to different data requirements, as indicated by a different scaling behavior. We investigate the performance of the U-Net for a smaller noise level σ_z = 15 in Appx. D.2. While the results are still preliminary, the qualitative scaling behavior is similar; the differences are that the scaling law pertaining to smaller noise is slightly steeper, and that overall the performance improves by about 1.3dB in PSNR.
Finally, note that for our results we choose to re-sample the noise for each training image in every training epoch, since this makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a setup in which the noise statistics are unknown, as in real-world noise removal. See Appx. D.1 for additional results where the noise is fixed. We found that, compared to re-sampling the noise, the performance of a U-Net drops by 0.3dB for small training set sizes. As the training set size increases, the performance approaches that of re-sampling the noise, resulting in a steeper scaling law that flattens at slightly larger training set sizes.
Robustness of our finding of slowing scaling laws. Our main finding for denoising is that the scaling laws slow down already at relatively small training set sizes (i.e., the scaling coefficient becomes very small). We next argue why we think this is a robust finding.
In principle it could be that the networks are not sufficiently large for the performance to improve further as the dataset size increases. However, for the U-Net and training set sizes of 10k and 30k the performance increase already slows, and for those training set sizes increasing the network size does not improve performance, as shown in Fig. 1(b). For smaller network sizes the curves in Fig. 1(b) first increase and then decrease, indicating that we found network sizes close to the optimum.
The parameter scaling of the SwinIR in Fig. 2(b) indicates that for training set sizes of 10k and 100k the performance still benefits from larger networks. Hence, the power law for those training set sizes in Fig. 2(a) can become slightly steeper once we further increase the network size. However, for the slope of the flat power law in (a) to reach the slope of the steep power law, the parameter scaling in (b) for 100k training images would need to hold for another 2 orders of magnitude, i.e. about 12B instead of the currently used 129M network parameters, which is very unlikely considering that the current network sizes already suffice to saturate the performance for small/moderate training set sizes.
Finally, it could be that with higher quality or higher diversity of the data, the denoising performance would further improve. In our experiments, we increase the training set sizes by adding more images from the same classes of ImageNet, and we continue to test on different classes. Thus, it could be that adding training examples from a different, more diverse data source leads to less of a slowing of the scaling law in Fig. 1(a). We test this hypothesis by scaling our training set from 3k to 10k images not by adding more images from ImageNet, but by adding images from the datasets used to train the original SwinIR: we add all images from DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017) and BSD500 (Arbeláez et al., 2011), and add the remaining images from WED (Ma et al., 2017). Keeping all other hyperparameters fixed, the U-Net obtains a PSNR of 32.239dB on this dataset, which is slightly worse than the 32.252dB it achieves with our training set of 10k images from ImageNet only. Hence, we reject the hypothesis that the drop in scaling coefficients can be explained by how the training sets are designed in our experiment.
4 EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING
We consider compressive sensing (CS) for 4x accelerated MRI. MRI is an important medical imaging technique, valued for its non-invasiveness and high accuracy. Due to the physics of the measurement process, MRI is inherently slow. Accelerated MRI aims at significantly reducing the scan time by undersampling the measurements. In addition, MRI scanners rely on parallel imaging, which uses multiple receiver coils to simultaneously collect different measurements of the same object.
This leads to the following reconstruction problem. We are given measurements y_i ∈ C^m of the image x ∈ C^n as y_i = M F S_i x + noise_i, for i = 1, . . . , C, and our goal is to reconstruct the image from those measurements. Here, C is the number of receiver coils, the diagonal matrices S_i ∈ C^{n×n} are the sensitivity maps modelling the signal strength perceived by the i-th coil, F ∈ C^{n×n} is the discrete Fourier transform (DFT), and M ∈ C^{m×n} is a binary mask implementing the undersampling. Classical CS approaches (Lustig et al., 2008) first estimate the sensitivity maps with a method such as ESPIRiT (Uecker et al., 2014), and then estimate the unknown image by solving a regularized optimization problem, such as total-variation norm minimization. Recently, deep learning based methods have been shown to significantly outperform classical CS methods due to their ability to learn more complex and accurate signal models from data (Knoll et al., 2020a; Muckley et al., 2021).
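For concreteness, the forward model can be simulated as in the following NumPy sketch (our notation, for illustration; in practice the fastMRI dataset provides measured k-space data directly):

import numpy as np

def mri_forward(x, S, mask, noise_std=0.0, rng=None):
    """Multicoil forward model y_i = M F S_i x + noise_i.

    x: (h, w) complex image; S: (C, h, w) coil sensitivities (the diagonals
    of the S_i); mask: (h, w) boolean undersampling pattern selecting m of
    the n = h*w frequencies. Returns a (C, m) array of sampled k-space values.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    kspace = np.fft.fft2(S * x[None], norm="ortho")  # F S_i x for every coil
    y = kspace[:, mask]                              # M keeps sampled entries
    if noise_std > 0:
        y = y + noise_std * (rng.standard_normal(y.shape)
                             + 1j * rng.standard_normal(y.shape))
    return y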
Datasets. To explore the performance of learning-based MRI reconstruction over a wide range of training set sizes, we use the fastMRI multi-coil brain dataset (Knoll et al., 2020b), the largest publicly available dataset for MRI. The dataset consists of images of different contrasts. To ensure that the statistics of the dataset are as homogeneous as possible, we take the subset of the dataset corresponding to a single contrast, resulting in 50k training images. We design training set subsets SN of size N ∈ [50, 50000] (see Fig. 1(c)) with Si ⊆ Sj for i ≤ j. For more information on the selection, division and subsampling of the dataset see Appx. A.2.
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1, but with 4 blocks per encoder/decoder. The network is trained end-to-end to map a coarse reconstruction x̂_CR to the ground truth image x. The coarse reconstruction is obtained as

x̂_CR = ( ∑_{i=1}^{C} |F^{-1} y_{i,ZF}|² )^{1/2},

where F^{-1} is the inverse DFT, and y_{i,ZF} are the undersampled measurements of the i-th coil with missing entries filled with zeros. We vary the number of
network parameters as depicted in Fig. 1 (d). Appx. A.2 contains detailed descriptions on how the model is trained and the network size is adapted.
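A sketch of this coarse reconstruction in NumPy, continuing the notation of the forward-model sketch above (`shape` is the image size (h, w)):

import numpy as np

def coarse_reconstruction(y, mask, shape):
    """Root-sum-of-squares of the inverse DFTs of the zero-filled k-space."""
    kspace_zf = np.zeros((y.shape[0],) + shape, dtype=complex)
    kspace_zf[:, mask] = y                        # missing entries stay zero
    coil_imgs = np.fft.ifft2(kspace_zf, norm="ortho")
    return np.sqrt(np.sum(np.abs(coil_imgs) ** 2, axis=0))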
Results and discussion. For each training set size, Fig. 1(c) shows the reconstruction performance in structural similarity (SSIM) of the best model over all simulated network sizes. There are two main findings. First, we can fit a linear power law with a scaling coefficient α = 0.0072 that holds up to training set sizes of about 2.5k images. Second, for training set sizes starting from 5k we can fit a second linear power law with a significantly smaller scaling coefficient α = 0.0026.
Similar to denoising in Section 3, we conclude that while accelerated MRI slightly benefits from more training examples, the drop in scaling coefficient indicates that it is unlikely that even training set sizes on the order of hundreds of millions of images would result in substantial gains. Visualizations of reconstructions from models along the curve in Fig. 1(c) can be found in Fig. 5, Appx. A.2.
Robustness of our finding of slowing scaling laws. Next, we argue why the drop in scaling coefficient for accelerated MRI is a robust finding.
In Fig. 1(d) we demonstrate that the network size does not bottleneck the performance for our largest training sets. For each training set size, the performance as a function of the number of network parameters is relatively flat. Even small training sets benefit from large networks before their performance slightly decays for very large networks. Hence, we expect that also for large training sets the performance as a function of network parameters would not increase significantly before decaying.
Finally, is there a different type of training data that would improve the scaling coefficient? We do not think so: all experiments are for the very specific task of reconstructing brain images of one particular contrast, which means that the examples we add to increase the size of the training sets are already the data most suitable for this task.
5 UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY
We study a simple linear denoiser that is trained end-to-end to reconstruct a clean image from a noisy observation, in order to understand how the error of learned methods for inverse problems is expected to behave as a function of the number of training examples.
We define a joint distribution over a signal and the corresponding measurement (x, y) as follows. Consider a d < n dimensional subspace of R^n parameterized by the orthonormal basis U ∈ R^{n×d}. We draw a random signal from the subspace as x = Uc, where c ∼ N(0, I), and an associated noisy measurement as y = x + z, where z ∼ N(0, σ_z²I) is Gaussian noise. The subspace is unknown, but we are given a training set {(x_1, y_1), . . . , (x_N, y_N)} consisting of examples drawn i.i.d. from the joint distribution over (x, y).
We assume that the signal lies in a low-dimensional subspace. Assuming that data lies in a low-dimensional subspace or, more generally, in a union of low-dimensional subspaces is common; for example, it underlies the denoising of natural images via wavelet thresholding (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). Mohan et al. (2020) found that even deep learning based denoisers implicitly perform a projection onto an adaptively-selected low-dimensional subspace that captures the features of natural images.
We consider a linear estimator of the form f_W(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error, defined as R(W) = (1/d) E[ ‖Wy − x‖₂² ], where expectation is over the joint distribution of (x, y).
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk R) is given by W* = (1/(1 + σ_z²)) UU^T. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is R(W*) = σ_z²/(1 + σ_z²).
An estimator based on subspace estimation. The optimal estimator W* requires knowledge of the unknown subspace. We can learn the subspace from the noisy data by performing principal component analysis on Y = [y_1, . . . , y_N]. Specifically, we estimate the subspace as the d leading singular vectors of the empirical covariance matrix YY^T ∈ R^{n×n}, denoted by Û ∈ R^{n×d}. If the measurement noise is zero (i.e., σ_z² = 0), Û is an orthonormal basis for the d-dimensional subspace U provided we observe at least d linearly independent signals, which occurs with probability one if we draw data according to the model defined above and the number of training examples obeys N ≥ d. There is a vast literature on PCA; see Vershynin (2011; 2018); Rudelson & Vershynin (2010); Tropp (2012) for tools from high-dimensional probability to analyze the PCA estimate.

Now consider the estimator W_PCA = (1/(1 + σ_z²)) ÛÛ^T based on the N noisy training points. The estimator assumes knowledge of the noise variance, but it is not difficult to estimate it relatively accurately. The following result, proven in Appx. F, characterizes the associated risk.

Theorem 1. Suppose that the number of training examples obeys (d + nσ_z²) log(n) ≤ N. For a numerical constant c, with probability at least 1 − n^{−10} − 3e^{−d} + e^{−n}, the risk of the PCA estimate is bounded by

R(W_PCA) ≤ R(W*) + c (d + nσ_z²) log(n)/N.
Thus, as long as the number of training examples N is sufficiently large relative to (d+ nσ2z), the risk of the PCA-estimator is close to the risk of the optimal estimator.
Estimator learned end-to-end. We now consider an estimator learned end-to-end by applying gradient descent to the empirical risk L(W) = ∑_{i=1}^{N} ‖W y_i − x_i‖₂², where we regularize via early-stopping the gradient descent iterations. The risk of the estimate after k iterations of gradient descent, W_k, is bounded by the next result. This estimator mimics the supervised training considered throughout this paper, with the difference that here the model is linear, whereas in the previous sections we trained deep neural networks.

Theorem 2. Let W_k be the matrix obtained by applying k iterations of gradient descent with stepsize η, starting at W_0 = 0, to the loss L(W). Consider the regime where the number of training examples obeys (d + nσ_z²) log(n) ≤ N ≤ ξd/σ_z² for an arbitrary ξ, and N log(N) ≤ n. For an appropriate choice of the stepsize η there exists an optimal early stopping time k_opt at which the risk of the estimator f_W(y) = W_{k_opt} y is upper-bounded, with probability at least 1 − 2e^{−N/8} − 2e^{−N/18} − 5n^{−9} − 5e^{−d} − 2e^{−n} − e^{−N} − 2e^{−n/2}, by
R(W_{k_opt}) ≤ (8 + 2ξ) R(W*) + c (1 + nσ_z²/d) log(n) · (d + nσ_z²) log(n)/N + cξ √( (d + σ_z²n) log(n)/N ),

where c is a numerical constant.
The proof is provided in Appx. G. Similar to Thm. 1, the first term in the bound corresponds to the noise-dependent error floor, and the remaining two terms decrease in (d + nσ_z²)/N and represent the error induced by learning the estimator from a finite number of training examples. Once this error is small relative to the error floor, more training examples do not improve performance. See Appx. E for a more detailed discussion of the estimator and the assumptions of Thm. 2.
Discussion. In Fig. 3, we plot the risk as a function of the training set size for the early-stopped empirical risk minimization (ERM) and the PCA based estimator. Starting at a training set size of N = 100, we see that the risk minus the optimal risk approximately follows a power law, i.e., log(R(W) − R(W*)) ≈ −α log(N), as suggested by Thms. 1 and 2. In practice, however, we do not know the risk of the optimal estimator, R(W*). Therefore, in the second row of Fig. 3 and throughout our empirical results we plot the risk log(R(W)) as a function of the number of training examples log(N) and distinguish different regions by fitting approximate scaling coefficients α to log(R(W)) ≈ −α log(N). In our theoretical example, we can identify the three regions coined in Hestness et al. (2017): the small data region, in which the training set size does not suffice to learn a well-performing mapping; the power-law region, in which the performance decays approximately as N^{−α} with α > 0; and a problem-specific irreducible error region, which here is R(W*). Note that the power-law coefficients in the risk plot of R(W) are smaller than those obtained when plotting R(W) − R(W*), and are only approximate scaling coefficients. Already in this highly simplified setup of an inverse problem, the true scaling coefficients depend heavily on model parameters such as the noise variance, as can be seen in Fig. 3. The scaling coefficients further vary with the signal dimension d and the ambient dimension n (see Appx. E.1).
6 CONCLUSION
We found that the performance improvement of deep learning based image reconstruction as a function of the number of training examples slows already at moderate training set sizes, indicating that only marginal gains are expected beyond a few thousand examples.
Limitations. This finding is based on studying three different reconstruction problems (denoising, super-resolution, and compressive sensing), two architectures (U-net and SwinIR), and for each setup we extensively optimized hyperparameters and carefully scaled the networks. Scaling gave new state-of-the-art results on four denoising datasets, which provides some confidence in the setup.
Moreover, our statements necessarily pertain to the architectures and metrics we study. It is possible that other architectures, or a different way of scaling architectures, yield larger improvements. It is also widely acknowledged that image quality metrics (such as SSIM and PSNR) do not fully capture the image quality perceived by humans, and it could be that more training data yields improvements in image quality that are not captured well by SSIM and PSNR.
Most importantly, our findings pertain to standard in-distribution evaluation, i.e., the test images are from the same distribution as the training images. To achieve robustness against testing on images out of the training distribution recent work indicates a positive effect of larger and more diverse training sets (Miller et al., 2021; Darestani et al., 2022; Nguyen et al., 2022). Thus it is possible that the out-of-distribution performance of image reconstruction methods improves when trained on a larger and a more diverse dataset, even though the in-distribution performance does not improve further.
Future research. We focus on supervised image reconstruction methods. For image denoising and other image reconstruction problems, self-supervised approaches also perform very well (Laine et al., 2019; Wang et al., 2022; Zhou et al., 2022), even though supervised methods perform best if clean images are available. It would be very interesting to investigate the scaling laws for self-supervised methods; perhaps more training examples are required to achieve the performance of a supervised setup. We leave this for future work.
REPRODUCIBILITY
The repository at https://github.com/MLI-lab/Scaling_Laws_For_Deep_Learning_Based_Image_Reconstruction contains the code to reproduce all results in the main body of this paper.
ACKNOWLEDGMENTS
The authors acknowledge support by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the DAAD, and the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
A DETAILS OF THE EXPERIMENTAL SETUPS
A.1 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a U-Net presented in Fig.1 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 1 (a). In these examples the improvement in perceived image quality from increasing the training set size from 100 to 1000 is larger than from increasing from 1000 to 10000 or from 10000 to 100000. This correlates with our quantitative findings in Fig. 1 (a).
Next, we describe the experimental details. We train U-Nets with two blocks in the encoder and decoder, respectively, and skip connections between blocks. Each block consists of two convolutional layers with LeakyReLU activation and instance normalization (Ulyanov et al., 2017) after every layer, where the number of channels is doubled (halved) after every block in the encoder (decoder). The downsampling in the encoder is implemented as average pooling and the upsampling in the decoder as transposed convolutions. As proposed by Zhang et al. (2017), we train a residual denoiser that learns to predict y − x instead of directly predicting x, which improves performance. We scale the network size by increasing the width of the network, i.e., the number of channels per (transposed) convolutional layer. For denoising we trained U-Nets of 7 different sizes, varying the number of channels in the first layer in {16, 32, 64, 128, 192, 256, 320}, which corresponds to {0.1, 0.5, 1.9, 7.4, 16.7, 30.0, 46.5} million parameters. We do not vary the depth, since this would change the dimension of the informational bottleneck in the U-Net and thus change the model family itself.
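For illustration, one training step of this residual denoiser could look as follows in PyTorch (a minimal sketch, not our exact training loop; `model` stands for the U-Net described above):

import torch
import torch.nn.functional as F

def training_step(model, optimizer, x_clean, sigma_z=25.0):
    """One residual-denoising step: the network predicts the noise y - x."""
    noise = sigma_z * torch.randn_like(x_clean)  # re-sampled in every epoch
    y = x_clean + noise
    loss = F.mse_loss(model(y), noise)           # residual target y - x
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()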
The exact training set sizes we consider are {0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60, 100} thousand images from ImageNet. We center-crop the training images to 256 × 256 pixels. We also tried a smaller patch size of 128 × 128 pixels, but larger patches turned out to perform better in the regime of large training set sizes, which is why the results presented in the main body are for patch size 256 × 256. For a comparison between the scaling behavior of the two patch sizes see Fig. 9, Appx. D.3.
We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. For validation and testing we use 80 and 300 images respectively taken from 20 random classes that are not used for training.
We use the mean-squared-error loss and the Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999. For moderate training set sizes of up to 3000 images we find that heuristically adjusting the initial learning rate with an automated learning rate annealing works well. To this end, we start with a learning rate of 10−4 and increase it after every epoch by a factor of 2 until the validation loss does not improve for 3 consecutive epochs. We then load the model checkpoint from the learning rate that was still performing well and continue with that learning rate. We observe that this scheme typically picks an initial learning rate reduced by a factor of 2 for every increase in the number of channels C. For training set sizes larger than 3000 we directly apply this rule to pick an initial learning rate without annealing, as we found this to give slightly better results. In particular, for numbers of channels C = {128, 192, 256, 320} we used initial learning rates η = {0.0032, 0.0016, 0.0008, 0.0004}. For small training sets of up to 1000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10 and found that further increasing the batch size does not improve performance.
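In pseudocode-like form, the annealing heuristic reads as follows (a sketch; `train_epoch` and `validate` are placeholders for the actual training and validation routines, and checkpointing is abbreviated):

def anneal_initial_lr(train_epoch, validate, lr0=1e-4, patience=3):
    """Double the learning rate each epoch until the validation loss stops
    improving for `patience` consecutive epochs; return the last rate that
    still improved (training then resumes from that checkpoint)."""
    lr, best_lr, best_loss, bad_epochs = lr0, lr0, float("inf"), 0
    while bad_epochs < patience:
        train_epoch(lr)
        loss = validate()
        if loss < best_loss:
            best_loss, best_lr, bad_epochs = loss, lr, 0
        else:
            bad_epochs += 1
        lr *= 2.0
    return best_lr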
We do not put a limit on the amount of compute for training. We use an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 10 epochs (6 epochs for training set sizes starting from 6000 images). Once the learning rate drops to 10−5 we observe nearly no gains in validation loss and stop the training after 10 additional epochs.
For training set sizes up to 1000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1800 GPU hours for the experiments in Fig. 1(a),(b).
A.2 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for compressive sensing MRI presented in Fig.1 (c),(d) and Section 4.
In addition, Fig. 5 shows examples of reconstructions from different models along the performance curve in Fig. 1 (c). In these examples the improvement in perceived image quality from increasing the training set size from 500 to 2500 is larger than from increasing from 2500 to 10000 or from 10000 to 50000. This correlates with our quantitative findings in Fig. 1 (c).
The first row in Fig. 5 shows an example in which all models, including the one trained on the largest training set, fail to recover a fine detail. This is a known problem in accelerated MRI and has been documented in both editions of the fastMRI challenge (Knoll et al., 2020a; Muckley et al., 2021), in which examples could be found for all methods where fine details were not recovered. However, the question remains whether more training data or better models would help, or whether the details are simply not recoverable because the information is lost due to the large undersampling factors considered in the fastMRI challenges and in this work.
Next, we describe the experimental details. For compressed sensing in the context of accelerated MRI we trained U-Nets of 14 different sizes. We vary the number of channels in the first layer in {16, 32, 48, 64, 96, 112, 128, 144, 160, 176, 192, 208, 224, 256}, which corresponds to {2, 8, 18, 31, 70, 95, 124, 157, 193, 234, 279, 327, 380, 496} million network parameters. The exact training set sizes we consider are {0.05, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50} thousand AXT2-weighted images from the fastMRI multi-coil brain dataset (Zbontar et al., 2018), where AXT2 corresponds to all images of one type of contrast. We focused on images of this type only, to make the statistics of our datasets as homogeneous as possible. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. We use 4732 and 730 additional images for testing and validation.
We consider an acceleration factor of 4, meaning that we only measure 25% of the information. We obtain the 4x undersampled measurements by masking the fully sampled measurement y with an equispaced mask with an 8% center fraction, meaning that in the center of y we take all measurements and take the remaining ones at equispaced intervals.
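A simplified sketch of such a mask (the exact fastMRI implementation adjusts the spacing so that the overall rate, including the fully sampled center, matches the target acceleration; this simple version slightly over-samples):

import numpy as np

def equispaced_mask(n_cols, acceleration=4, center_fraction=0.08):
    """1-D column mask: fully sampled center plus equispaced outer columns."""
    mask = np.zeros(n_cols, dtype=bool)
    n_center = round(n_cols * center_fraction)
    pad = (n_cols - n_center) // 2
    mask[pad:pad + n_center] = True     # fully sampled center fraction
    mask[::acceleration] = True         # equispaced columns elsewhere
    return mask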
We use the structural similarity (SSIM) loss and the RMSprop optimizer with α = 0.99, as this is the default in the fastMRI repository (Zbontar et al., 2018), and we found no improvement from replacing it with Adam. We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that starts at a learning rate of 10−3 and decays by a factor of 0.1 if the validation SSIM has not improved by at least 10−4 for 5 epochs. Once the learning rate drops to 10−6 we stop the training after 10 additional epochs. Only for the largest training set sizes, 25k and 50k, we found that an additional drop to 10−7 resulted in further performance gains. We use a batch size of 1.
For training set sizes up to 5k we train three models with random seeds and pick the best. For larger training set sizes we only run one seed, since the variance between runs decreased.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 4250 GPU hours for the experiments in Fig. 1 (c),(d).
A.3 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH THE
SWINIR
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a SwinIR presented in Fig.2 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 2(a). Despite the best SwinIR clearly outperforming the best U-Net in terms of PSNR, it is difficult for the naked eye to notice large differences in the quality of the reconstructions. This indicates that for Gaussian denoising both models already operate in a regime where the remaining improvements are only marginal.
Next, we describe the experimental details. To obtain Fig. 2 we train the SwinIR for color denoising from Liang et al. (2021) on the same training sets mentioned in Appx. A.1, with {0.1, 0.3, 1, 10, 100} thousand images from ImageNet. Instead of center cropping the training images to 256 × 256, we have to crop to 128 × 128 pixels, as larger input patches would make it computationally infeasible for us to train large versions of the SwinIR. The largest SwinIR alone took over 2 months to train on 4 NVIDIA A40 GPUs.
We train 4 different network sizes with {3.7, 11.5, 41.8, 128.9} million parameters. We denote the four network sizes as small(S)/middle(M)/large(L)/huge(H).
The training details and network configurations are as follows. The default SwinIR for denoising (Liang et al., 2021) was proposed for a training set size of about 10k images, 11.5M network parameters and was trained with a batch size of 8 for T =1280 epochs, where the learning rate is halved at [0.5T ,0.75T , 0.875T , 0.9375T ] epochs. We keep the learning rate schedule but adjust the maximal number of epochs T according to the training set size. Table 1 shows batch size and maximal number of epochs for every experiment in Fig. 2. We did not optimize over the choice of the batch size but picked the batch size as prescribed by the availability of computational resources.
See Liang et al. (2021) for a detailed description of the SwinIR network architecture. We vary the network size by adjusting the number of residual Swin Transformer blocks, the number of Swin Transformers per block, the number of attention heads per Swin Transformer, the number of channels in the input embedding and the width of the fully connected layers in a Swin Transformer. Table 2 contains a summary of the settings. When scaling up the network size, we invested in the parameters that seemed to be most promising in the ablation studies in Liang et al. (2021).
The experiments were conducted on four NVIDIA A40 and four NVIDIA RTX A6000 GPUs. We used about 13000 GPU hours for the transformer experiments in Fig. 2. For training the models in parallel on multiple GPUs we utilize the torch.distributed package with the gloo backend instead of the faster nccl backend, which was unfortunately not available on our hardware at that time.
B BENCHMARKING OUR MODELS FOR GAUSSIAN IMAGE DENOISING
In this Section, we evaluate the models we trained for image denoising in Section 3 on four common test sets from the literature and show that the largest SwinIR trained on the largest dataset achieves a new SOTA on all four test sets for the considered noise level.
In Fig. 2 in the main body we compared the performance of the U-Nets and the SwinIRs trained on subsets of ImageNet for image denoising, evaluated on a test set sampled from ImageNet. We observed that while SwinIRs significantly outperform U-Nets, the performance gain from increasing the training set size slows already at moderate training set sizes for both architectures equally. However, there is still a moderate performance gain from scaling the models, and thus we expect the largest SwinIR trained on the largest dataset to outperform the original SwinIR from Liang et al. (2021). Our results in this section show that this is indeed the case.
In Table 3 we evaluate on the standard test sets for Gaussian color image denoising: CBSD68 (Martin et al., 2001), Kodak24 (Franzen, 1999), McMaster (Zhang et al., 2011) and Urban100 (Huang et al., 2015). We observe a significant performance difference between the best U-Net (46.5M parameters, 100k training images) and the transformer based methods. As expected, our largest SwinIR trained on the largest dataset, SwinIR 100/H, outperforms the original SwinIR, but also the SCUNet (Zhang et al., 2022), a later SOTA model that has been demonstrated to outperform the original SwinIR.
We also depict the gains for the SwinIR from just scaling up the network size, and then from scaling up network size and training set size together. While this led to a new SOTA for Gaussian image denoising, note that on the downside training the SwinIR 10/M, which is comparable to the original SwinIR, took about 2 weeks on 4 NVIDIA A40 GPUs, while training the SwinIR 100/H took over 2 months.
C EMPIRICAL SCALING LAWS FOR IMAGE SUPER-RESOLUTION WITH A U-NET
In this Section, we consider the problem of super-resolution, i.e., estimating a high-resolution image from a low-resolution version of the image. This can be viewed as a compressive sensing problem, since super-resolution amounts to reconstructing a signal x ∈ R^n from a downsampled version y = Ax, where the matrix A ∈ R^{m×n} implements a downsampling operation like bicubic downsampling, or blurring followed by downsampling. As first shown in the pioneering work of Dong et al. (2014), data-driven neural networks trained end-to-end outperform classical model-based approaches (Gu et al., 2012; Michaeli & Irani, 2013; Timofte et al., 2013).
Dong et al. (2014) report that for super-resolution with a simple three-layer CNN, the gains from very large training sets do not seem to be as impressive as in high-level vision problems like image classification. Liang et al. (2021) plot the super-resolution performance of the SOTA SwinIR model as a function of the training set size for up to 3600 training examples and a fixed network size. When plotting their results on a logarithmic scale, we observe that the performance improvement follows a power law. It is unclear, however, whether this power law slows beyond this relatively small number of images.
In this Section, we obtain scaling laws for super-resolution with a U-Net over a wide range of training set and network sizes, similar to those for denoising and compressive sensing in the main body.
Datasets. We use the same training, validation and test sets as described in Appx. A.1, with training set sizes N ∈ [100, 100k] images from ImageNet. On top of these, we add two larger training sets of size 300k and 600k. Instead of center cropping the training images to 256 × 256, we follow the super-resolution experiments in Liang et al. (2021) and train on images cropped to 128 × 128 pixels. We consider super-resolution by a factor of 2, so the low-resolution images have size 64 × 64. The low-resolution images are obtained with the bicubic downsampling function of Python's PIL.Image package, as in the sketch below.
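A minimal sketch (the helper name is ours, for illustration):

from PIL import Image

def bicubic_downsample(img: Image.Image, factor: int = 2) -> Image.Image:
    """Bicubic downsampling with PIL, e.g., 128x128 -> 64x64 for factor 2."""
    w, h = img.size
    return img.resize((w // factor, h // factor), resample=Image.BICUBIC)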
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with only one block per encoder/decoder as this resulted in slightly better results than with two blocks. The network is trained end-to-end to map a coarse reconstruction, obtained through bicubic upsampling, to the residual between the coarse reconstruction and the high-resolution ground truth image.
We vary the number of the channels in the first layer in {8, 16, 32, 64, 128, 192, 256, 320, 448, 564}, which corresponds to {0.007, 0.025, 0.10, 0.40, 1.6, 3.6, 6.4, 10.0, 20.0, 31.2} million network parameters. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples.
We use the ℓ1-loss and the Adam optimizer with its default settings. For all experiments we find a good initial learning rate with the same annealing strategy as described in Appx. A.1. However, instead of picking the largest learning rate for which the validation loss does not diverge, we pick the second largest, which leads to slightly more stable results. We start the annealing with a learning rate of 10−5. In the few cases where our heuristic leads to a degenerate training curve, typically due to picking a significantly too small or too large learning rate, starting the annealing with a smaller learning rate of 10−6 resolves the problem.
We do not put a limit on the amount of compute invested into training. To this end, we deploy an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 8 epochs. We stop the training once the validation loss did not improve for two consecutive learning rates. For training sets up to 10000 images we found that batch size of 1 works best. For larger training sets we use a batch size of 10.
For training set sizes up to 10000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1500 GPU hours for the experiments in Fig. 6.
Results and discussion. For each training set size, Fig. 6(a) shows the reconstruction performance in PSNR of the best model over all simulated network sizes in Fig. 6(b). Since the curves per training set size in Fig. 6(b) are relatively flat, further scaling up the network size is not expected to significantly improve performance on the studied training sets. Our two main findings are:
A linear power law with a scaling coefficient α = 0.0075 holds roughly up to training set sizes of about 30k images; for training set sizes starting from 60k this slows to a linear power law with a significantly smaller scaling coefficient α = 0.0029. With this slowed scaling law, a training set size of 1.6B images would be required to increase performance by another 1dB (assuming the relation PSNR = 32.05 · N^0.0029 persists, which is likely to slow down even further).
While slowing already at a few tens of thousands of training images, the scaling laws for super-resolution do not slow as early as those for denoising and compressive sensing (see Fig. 1). This could be partially due to training on image patches of size 128 × 128 as opposed to the size 256 × 256 used for denoising.
D ADDITIONAL EMPIRICAL SCALING LAWS FOR GAUSSIAN DENOISING
In this Section, we extend our results for Gaussian denoising with a U-Net from Section 3 with two additional setups. Section D.1 considers fixing the noise sampled in each training epoch and Section D.2 investigates reducing the noise level from σz = 25 to σz = 15.
D.1 EMPIRICAL SCALING LAWS FOR DENOISING WITH FIXED NOISE
Our main results for denoising with a U-Net trained end-to-end, discussed in Section 3, follow a setup in which the noise per training example is re-sampled in every training epoch. We chose this setup since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a denoising setup in which the noise statistics are unknown, as is the case in some real-world noise removal problems. In such a problem, we would be given pairs of noisy and clean images, and could not synthesize new noisy images from the clean ones.
In this section, we follow the same experimental setup from Appx. A.1 to simulate the performance of a U-Net for denoising with fixed noise for up to 100k training images (see Fig. 7). Compared to re-sampling the noise, we observe a drop in performance of about 0.3dB for small training set sizes. The performance difference at 10k images is reduced to 0.2dB, resulting in a slightly steeper scaling law for moderate training set sizes. However, at around 10k training images the scaling of the performance of training with fixed noise also starts to flatten, as it approaches the performance of re-sampling the noise during training. This indicates that if the noise statistics are unknown, more data is required to achieve the same performance as when they are known. However, in both cases the scaling with training set size slows down already at moderate training set sizes.
D.2 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER NOISE LEVEL
Our results for Gaussian denoising in Section 3 are for a fixed noise level of σz = 25. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with smaller noise level of σz = 15, in order to see how the scaling laws change. The results for both noise levels are depicted in Fig. 8.
We observe an improvement of about 2.3dB in PSNR, which is expected since the irreducible error decreases for smaller noise levels. We also observe that the scaling coefficient for the smaller noise level σ_z = 15 (i.e., α = 0.0026) is slightly steeper than that for the larger noise level σ_z = 25 (i.e., α = 0.0019). This coincides with the qualitative behavior of the curves for subspace denoising in Fig. 3. Apart from that, the curves are qualitatively similar, in that an initially steep power law is replaced by a slower one at around 6000 training images.
D.3 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER PATCH SIZE
Our results for Gaussian denoising in Section 3 with a U-Net were obtained for a constant training patch size of 256 × 256 pixels across all network and training set sizes. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller patch size of 128 × 128 pixels. The results for both patch sizes are depicted in Fig. 9. We observe that in the regime of large training set sizes, which we are primarily interested in, training on N patches of size 256 × 256 is more beneficial than training on 4N patches of size 128 × 128. We therefore focus on patch size 256 × 256 in the main body of this work.
E UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY - SUPPLEMENTARY RESULTS
In this section, we provide additional details on the statements in Section 5, which studies scaling laws for denoising theoretically via a linear subspace denoising problem, and we provide additional numerical results.
Recall that we consider a linear estimator of the form f_W(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error (normalized by the latent signal dimension d):

R(W) = (1/d) E[ ‖Wy − x‖₂² ] = (1/d) ‖(W − I)U‖_F² + (σ_z²/d) ‖W‖_F².  (1)

Above, expectation is over the joint distribution of (x, y), and the second equality follows from x = Uc with c ∼ N(0, I) Gaussian, and y = x + z with Gaussian noise z ∼ N(0, σ_z²I).
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk defined in equation (1)) is given by W* = (1/(1 + σ_z²)) UU^T. This follows from taking the gradient of the risk (1), setting it to zero, and solving for W. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is R(W*) = σ_z²/(1 + σ_z²).
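For completeness, the short computation behind this claim: setting the gradient of (1) to zero gives

∇_W R(W) = (2/d)((W − I)UU^T + σ_z² W) = 0, i.e., W(UU^T + σ_z²I) = UU^T.

Since UU^T + σ_z²I has eigenvalue 1 + σ_z² on the subspace spanned by U and eigenvalue σ_z² on its orthogonal complement, multiplying by its inverse yields W* = (1/(1 + σ_z²)) UU^T. Plugging W* back into (1), and using (W* − I)U = (−σ_z²/(1 + σ_z²)) U together with ‖U‖_F² = ‖UU^T‖_F² = d, gives

R(W*) = σ_z⁴/(1 + σ_z²)² + σ_z²/(1 + σ_z²)² = σ_z²/(1 + σ_z²).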
Early-stopped empirical risk minimization. We consider the estimator that applies gradient descent to the empirical risk
L(W) = ∑_{i=1}^{N} ‖W y_i − x_i‖₂² = ‖WY − X‖_F²,  (2)
where X,Y ∈ Rn×N contain the training examples as columns, and early-stops after k iterations for regularization.
We next discuss the early-stopped estimator Wk in more detail. Fig. 10 numerically demonstrates the regularizing effect of early stopping gradient descent, where W∞ = XY† is the converged learned estimator (see Appx. G, Eq. (5)). We see that regularization is necessary for this estimator to perform well.
We next discuss Theorem 2 and the associated assumptions in more detail. The theorem considers the following regime: (i) the number of training examples obeys (d + nσ_z²) log(n) ≤ N, (ii) N ≤ ξd/σ_z² for an arbitrary ξ, and (iii) N log(N) ≤ n. For this regime, the theorem guarantees that the risk of the optimally early-stopped estimator obeys, with high probability,
R(W_{k_opt}) ≤ (8 + 2ξ) R(W*) + c (1 + nσ_z²/d) log(n) · (d + nσ_z²) log(n)/N + cξ √( (d + σ_z²n) log(n)/N ).  (3)

The theorem looks similar to that for the PCA estimate (Theorem 1), in that the risk is a constant away from the optimal risk, with an error term that becomes small as (d + nσ_z²)/N becomes small. However, the error bound does not converge to R(W*) as the number of training examples N converges to infinity. This is probably an artifact of our analysis, but it is unclear, at least to us, how to derive a substantially tighter bound. In our analysis (see Appx. G), we balance two errors: one error decreases in k and is associated with the part of the signal projected onto the subspace, and the second error increases with k and is associated with the orthogonal complement of the subspace. We choose the early-stopping time to optimally balance those two terms, which yields the stated bound (3).
Now with regards to the assumptions: Assumption (iii), N log(N) ≤ n, means we are in the high-dimensional regime; we think this is somewhat closer to reality (for example, for denoising a 512×512 image this would require the number of training examples to be smaller than about 250k), but we can derive an analogous bound for the regime N log(N) ≥ n, where the number of training examples is larger than the ambient dimension.
Assumption (i), (d + nσ_z²) log(n) ≤ N, is relatively mild, as it is necessary for estimating the subspace somewhat accurately; this assumption is also required for the PCA estimate.
Assumption (ii), N ≤ ξd/σ_z² for an arbitrary ξ, is not restrictive in that ξ can be arbitrarily large; we make this assumption only so that the theorem can be stated in a convenient way. However, assumption (ii) reveals a shortcoming of Theorem 2, which is that we cannot make the bound go to zero as N → ∞, since increasing ξ increases one term in the bound and decreases another.
E.1 ADDITIONAL NUMERICAL SIMULATIONS
In this Section we provide further numerical simulations for the PCA subspace estimator and the estimator learned with early-stopped gradient descent, discussed in Section 5, Theorem 1 and Theorem 2. Similar to Fig. 3, Fig. 11 shows the risks R(W_{k_opt}) and R(W_PCA) as a function of the number of training examples N for varying values of the signal and ambient dimensions d and n, while fixing all other model parameters. In the power-law region we fit linear power laws with negative scaling coefficients α. We observe steeper power laws (larger |α|) for smaller ambient dimensions n and larger signal dimensions d. Also, the scaling coefficients of the learned estimator consistently exceed those of the PCA estimator.
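These simulations can be reproduced in a few lines; a minimal sketch of the setup (small dimensions for illustration; the exact parameters of Fig. 3 and Fig. 11 differ):

import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_z, N = 100, 10, 0.5, 500

# Data model of Section 5: signals x = U c on a random d-dimensional subspace.
U = np.linalg.qr(rng.standard_normal((n, d)))[0]
X = U @ rng.standard_normal((d, N))
Y = X + sigma_z * rng.standard_normal((n, N))

def risk(W):  # risk from Eq. (1)
    return (np.linalg.norm((W - np.eye(n)) @ U) ** 2
            + sigma_z**2 * np.linalg.norm(W, "fro") ** 2) / d

# PCA estimator (Theorem 1): project onto the d leading singular vectors.
U_hat = np.linalg.svd(Y, full_matrices=False)[0][:, :d]
W_pca = U_hat @ U_hat.T / (1 + sigma_z**2)

# Early-stopped gradient descent on L(W) = ||WY - X||_F^2 (Theorem 2).
eta = 1.0 / np.linalg.norm(Y, 2) ** 2   # ensures eta * sigma_{y,1}^2 <= 1
W, risks = np.zeros((n, n)), []
for _ in range(100):
    W = W - eta * (W @ Y - X) @ Y.T
    risks.append(risk(W))

print(min(risks), risk(W_pca), sigma_z**2 / (1 + sigma_z**2))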
F PROOF FOR THEOREM 1: RISK BOUND FOR PCA SUBSPACE ESTIMATION
We provide a bound on the risk of the estimator f(y) = W_PCA y with W_PCA = τ ÛÛ^T and τ = 1/(1 + σ_z²). Recall that Û ∈ R^{n×d} contains the singular vectors corresponding to the d leading singular values of YY^T. We define Û_⊥ ∈ R^{n×(n−d)} as the orthogonal complement of Û. Starting from the risk expression given in equation (1) we obtain

R(W_PCA) = (1/d) ‖(τÛÛ^T − I)U‖_F² + (1/d) τ²σ_z² ‖ÛÛ^T‖_F²
         = (1/d) ‖((τ − 1)ÛÛ^T + Û_⊥Û_⊥^T)U‖_F² + τ²σ_z²
         = (1/d) (τ − 1)² ‖Û^T U‖_F² + (1/d) ‖Û_⊥^T U‖_F² + τ²σ_z²
         = (1/d) (τ − 1)² (d − ‖Û_⊥^T U‖_F²) + (1/d) ‖Û_⊥^T U‖_F² + τ²σ_z²
         = (1 − (τ − 1)²) (1/d) ‖Û_⊥^T U‖_F² + (τ − 1)² + σ_z²τ²
         = ((1 + 2σ_z²)/(1 + σ_z²)²) (1/d) ‖Û_⊥^T U‖_F² + σ_z²/(1 + σ_z²)
     (i) ≤ ((1 + 2σ_z²)/(1 + σ_z²)²) ‖Û_⊥^T U‖² + σ_z²/(1 + σ_z²)
    (ii) ≤ c ((1 + 2σ_z²)/(1 + σ_z²)²) (d + nσ_z²) log(2n)/N + σ_z²/(1 + σ_z²)
         ≤ c (d + nσ_z²) log(n)/N + σ_z²/(1 + σ_z²).  (4)

Here, inequality (i) uses ‖A‖_F² ≤ d‖A‖² for a matrix A with d columns, and inequality (ii) follows from Section H.1, equation (34); it holds in the regime (d + nσ_z²) log(n) ≤ N, for some constant c and with probability at least 1 − n^{−10} − 3e^{−d} + e^{−n}. This concludes the proof.
G PROOF OF THEOREM 2: RISK BOUND FOR EARLY STOPPED EMPIRICAL RISK MINIMIZATION
This section contains the proof of Theorem 2. The theorem characterizes the performance of the learned linear estimator W_k obtained by applying k iterations of gradient descent with stepsize η, starting at W_0 = 0, to the loss L(W) in equation (2). We start by deriving a closed-form expression for the estimator W_k. The gradient of the loss is
∇_W L(W) = (WY − X) Y^T,

and thus the iterations of gradient descent are

W_{k+1} = W_k − η (W_k Y − X) Y^T = W_k (I − η YY^T) + η XY^T.

Let Y = U_y Σ_y V_y^T ∈ R^{n×N} and X = U_x Σ_x V_x^T ∈ R^{n×N} be the singular value decompositions of Y and X, respectively, and assume that the singular values are non-zero and in descending order, i.e., σ_{y,1} ≥ σ_{y,2} ≥ . . .. With W_0 = 0 and k ≥ 1 we have

W_k = η XY^T ∑_{ℓ=0}^{k−1} (I − η YY^T)^ℓ
    = η X V_y Σ_y U_y^T ( ∑_{ℓ=0}^{k−1} (I − η U_y Σ_y² U_y^T)^ℓ )
    = η X V_y Σ_y diag( ∑_{ℓ=0}^{k−1} (1 − ησ_{y,i}²)^ℓ ) U_y^T
    = X V_y D_k U_y^T,

where we defined D_k ∈ R^{N×N} as the diagonal matrix with i-th diagonal entry (1 − (1 − ησ_{y,i}²)^k)/σ_{y,i}, and where we used the geometric series to obtain

∑_{ℓ=0}^{k−1} (1 − ησ_{y,i}²)^ℓ = (1 − (1 − ησ_{y,i}²)^k)/(ησ_{y,i}²).

Note that for k → ∞, choosing η such that |1 − ησ_{y,i}²| < 1 for all i, we get

W_∞ = X V_y Σ_y^{−1} U_y^T = XY^†.  (5)
Evaluating the risk from equation (1) at the estimator Wk gives
R(W_k) = (1/d) ‖(X V_y D_k U_y^T − I)U‖_F² + (σ_z²/d) ‖X V_y D_k U_y^T‖_F².  (6)

To shorten notation we define

γ := (d + nσ_z²) log(n)/N,  (7)
ψ := nσ_z² log(n)/N.  (8)
We next provide bounds for the two terms on the right-hand-side of equation (6), proven later in this section.
Bound on the first term in equation (6): In Section G.1 we show that, provided N log(N) ≤ n, 9d ≤ N and (d + nσ_z²) log(n) ≤ N, for some constant c, with probability at least 1 − 2e^{−N/18} − 3n^{−10} − 3e^{−d} − e^{−n} − e^{−N} − 2e^{−n/2}, the following bound holds:

(1/d) ‖(X V_y D_k U_y^T − I)U‖_F² ≤ (c/d)(σ_z²n + γσ_{x,max}²) ∑_{i=1}^{d} (1/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)² + (c/d) ∑_{i=1}^{d} (1 − ησ_{y,i}²)^{2k} + (c/d) ψγ ∑_{i=d+1}^{N} (σ_{x,max}²/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)² + cγ.  (9)
Bound on the second term in equation (6): In Section G.2 we show

(σ_z²/d) ‖X V_y D_k U_y^T‖_F² ≤ (σ_z²/d) ∑_{i=1}^{N} (σ_{x,max}²/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)².  (10)
With equations (9) and (10) in place, and by splitting up the sum in (10), we can bound the right hand side of equation (6) as

R(W_k) ≤ (1/d)(cσ_z²n + cγσ_{x,max}² + σ_z²σ_{x,max}²) ∑_{i=1}^{d} (1/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)² + (c/d) ∑_{i=1}^{d} (1 − ησ_{y,i}²)^{2k} + cγ + (c/d)(ψγ + σ_z²) ∑_{i=d+1}^{N} (σ_{x,max}²/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)².  (11)
Since the singular values of Y are in descending order, σ_{y,1} ≥ σ_{y,2} ≥ . . ., this is, for any iteration k, bounded as

R(W_k) ≤ (cσ_z²n + cγσ_{x,max}² + σ_z²σ_{x,max}²)(1/σ_{y,d}²) + c(1 − ησ_{y,d}²)^{2k} + cγ + (c/d)(ψγ + σ_z²) σ_{x,max}² ∑_{i=d+1}^{N} (1/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)².  (12)
This is a good bound if we are at an iteration k sufficiently large so that (1 − ησ_{y,1}²)^k is small.
In (12), the term (1 − ησ_{y,d}²)^{2k} is decreasing in the number of gradient descent steps k, and also decreasing in the stepsize η as long as ησ_{y,i}² ≤ 1 for i = 1, . . . , d. Further, the terms (1 − (1 − ησ_{y,i}²)^k)² are increasing in k and also increasing in η. The first term corresponds to the signal, which we want to fit sufficiently well, whereas the second corresponds to the noise, of which we want to fit as little as possible. Hence, there exist optimal choices of k and η that trade off the sum of the two terms.
In our setup the d leading squared singular values of Y, corresponding to the signal, are large and concentrate around N(1 + σ_z²), while the remaining squared singular values are small and concentrate around Nσ_z². Hence, we can apply a single step of gradient descent, k = 1, to already fit a large portion of the signal while minimizing the portion of the noise that is fitted. For that, we choose the stepsize η as large as possible such that ησ_{y,i}² ≤ 1 still holds for i = 1, . . . , d. Next, suppose the following events hold:
E1 = { σ_{y,d+1}² ≤ N(σ_z² + ϵ(1 + σ_z²)) },  (13)
E2 = { σ_{y,d}² ≥ N(1 + σ_z²)(1 − ϵ) },  (14)
E3 = { σ_{x,max}² ≤ 4N },  (15)
E4 = { σ_{y,1}² ≤ N(1 + ϵ)(1 + σ_z²) }.  (16)
In Section H.4, we show that

P[E3] ≥ 1 − 2e^{−N/8}.  (17)

In Section H.4, we also show that, provided that N ≥ 3Cϵ^{−2}(d + σ_z²n) log n, for some constant C and ϵ ∈ (0, 1),

P[E1], P[E2], P[E4] ≥ 1 − e^{−d} − e^{−n} − n^{−9}.  (18)
We next bound the terms in equation (12). As discussed, we set k = 1 and the stepsize as large as possible, i.e., η = 1/(N(1 + ϵ)(1 + σ_z²)) ≤ 1/σ_{y,1}², which holds on the event E4. On the event E2 we obtain

c(1 − ησ_{y,d}²)^{2k} ≤ c(1 − ηN(1 + σ_z²)(1 − ϵ))² = c(1 − (1 − ϵ)/(1 + ϵ))² ≤ cϵ².
Next we bound the sum in equation (12). Towards this goal, we upper bound each term in the sum by its linear approximation at the origin, computing the derivative at the origin as

lim_{q→0} ∂/∂q (1/q)(1 − (1 − ηq)^k)² = (ηk)².

Thus, we have

∑_{i=d+1}^{N} (1/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)² ≤ ∑_{i=d+1}^{N} k²η²σ_{y,i}² ≤ N k²η² σ_{y,d+1}² ≤ (σ_z² + ϵ + ϵσ_z²)/((1 + ϵ)²(1 + σ_z²)²) ≤ c(σ_z² + 2ϵ),

on the event E1 and for σ_z² ≤ 1, k = 1 and η = 1/(N(1 + ϵ)(1 + σ_z²)). Putting this together, on the events E2, E3 we get the bound
R(W_k) ≤ 8 σ_z²/(1 + σ_z²) + cγ + c(1 − ησ_{y,d}²)^{2k} + (c/d)(ψγ + σ_z²) N ∑_{i=d+1}^{N} (1/σ_{y,i}²)(1 − (1 − ησ_{y,i}²)^k)²
       ≤ 8 σ_z²/(1 + σ_z²) + cγ + cϵ² + (cN/d) ( (nσ_z² log(n)/N) ((d + nσ_z²) log(n)/N) + σ_z² ) (σ_z² + 2ϵ)
       ≤ 8R(W*) + c (d + nσ_z²) log(n)/N + cϵ² + c ( ((d + nσ_z²) log(n))²/(dN) + σ_z² N/d ) (σ_z² + 2ϵ).  (19)
The bound holds provided 3Cϵ^{−2}(d + σ_z²n) log n ≤ N and N log(N) ≤ n, for some constants c, C, and with probability at least 1 − 2e^{−N/8} − 2e^{−N/18} − 5n^{−9} − 5e^{−d} − 2e^{−n} − e^{−N} − 2e^{−n/2}. To maximize the benefit from early stopping we set ϵ as small as possible with respect to the condition 3Cϵ^{−2}(d + σ_z²n) log n ≤ N:

ϵ = √( 3C(d + σ_z²n) log(n)/N ).  (20)
With that, equation (19) becomes

R(W_k) ≤ 8R(W*) + c (d + nσ_z²) log(n)/N + c ( ((d + nσ_z²) log(n))²/(dN) + σ_z² N/d ) ( σ_z² + √((d + σ_z²n) log(n)/N) )
       ≤ (8 + 2ξ) R(W*) + c (1 + nσ_z²/d) log(n) · (d + nσ_z²) log(n)/N + cξ √( (d + σ_z²n) log(n)/N ),

where the term cϵ² is absorbed into the second summand by the choice of ϵ in (20), and where the last inequality uses ((d + nσ_z²) log(n))²/(dN) = (1 + nσ_z²/d) log(n) · (d + nσ_z²) log(n)/N, together with assumption (ii) in the form σ_z²N/d ≤ ξ and σ_z² ≤ 2σ_z²/(1 + σ_z²) = 2R(W*) (valid for σ_z² ≤ 1). This is the bound stated in Theorem 2, which concludes the proof.
Summary Of The Paper
This paper investigates the impact of training set size for imaging applications. The paper shows that increasing the training set size will not yield a proportional gain in performance. Quantitative experimental results conducted on image denoising and reconstruction align with the claims.
Strengths And Weaknesses
Strengths:
The paper shows that deep learning-based imaging tasks do not require an immense amount of data, unlike other domains such as NLP.
The experimental results can guide the choice of training set size for imaging applications.
Weaknesses:
Authors motivate their study by contrasting with another domain such as NLP. While it is true that imaging applications require far less data compared to NLP, the authors do not really explain or motivate the reason behind it. The authors try to explain the performance in terms of network size and training set size. However, in imaging applications, every pixel in an image can be considered a sample; thus, the effectively available amount of data is scaled up enormously. Indeed, in many recent denoising applications, a set of pixels is used as a label rather than the entire image (e.g., Noise2Void, Noise2Self, ...). In imaging applications such as image denoising or reconstruction, moderate performance can be achieved with even 1 sample/image (e.g., Self2Self). From this perspective, in imaging applications, the available data for the task of interest is much larger than the number of images.
In denoising applications, quantitative results might be used as a metric to back up the claims. However, this cannot be the only metric for the MRI reconstruction shown in the study. A high SSIM does not necessarily translate to an artifact-free reconstruction: a method can achieve a very high SSIM while suffering from artifacts (e.g., Muckley et al. 2021). Thus, in MRI and similar medical imaging applications, the amount of training data can be crucial for removing coherent artifacts, which worsen at higher acceleration rates. I therefore find the claims in Section 4 questionable. Extension to such domains could be a separate study that should be supported with extensive qualitative results.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written with moderate contributions.
ICLR

Title
Scaling Laws For Deep Learning Based Image Reconstruction
Abstract
Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image perform excellently for a variety of linear inverse problems. Current methods are only trained on a few hundreds or thousands of images, as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution, and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Extrapolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
1 INTRODUCTION
Deep neural networks trained to map a noisy measurement of an image or a noisy image to a clean image give state-of-the-art (SOTA) performance for image reconstruction problems. Examples are image denoising (Burger et al., 2012; Zhang et al., 2017; Brooks et al., 2019; Liang et al., 2021), super-resolution (Dong et al., 2016; Ledig et al., 2017; Liang et al., 2021), and compressive sensing for computed tomography (CT) (Jin et al., 2017), and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018; Sriram et al., 2020; Muckley et al., 2021; Fabian & Soltanolkotabi, 2022).
The performance of a neural network for imaging is determined by the network architecture and optimization, the size of the network, and the size and quality of the training set.
Significant work has been invested in architecture development. For example, in the field of accelerated MRI, networks started out as convolutional neural networks (CNN) (Wang et al., 2016), which are now often used as building blocks in un-rolled variational networks (Hammernik et al., 2018; Sriram et al., 2020). Most recently, transformers have been adapted to image reconstruction (Lin & Heckel, 2022; Huang et al., 2022; Fabian & Soltanolkotabi, 2022).
However, it is not clear how substantial the latest improvements through architecture design are compared to potential improvements expected by scaling the training set and network size. Contrary to natural language processing (NLP) models and modern image classifiers that are trained on billions of examples, networks for image reconstruction are only trained on hundreds to thousands of example images. For example, the training set of SwinIR (Liang et al., 2021), the current SOTA for image denoising, contains only 10k images, and the popular benchmark dataset for accelerated MRI consists only of 35k images (Zbontar et al., 2018).
In this work, we study whether neural networks for image reconstruction only require moderate amounts of data to reach their peak performance, or whether major boosts are expected from increasing
the training set size. To partially address this question, we focus on three problems: image denoising, reconstruction from few and noisy measurements (compressive sensing) in the context of accelerated MRI, and super-resolution. We pick Gaussian denoising for its practical importance and since it can serve as a building block to solve more general image reconstruction problems well (Venkatakrishnan et al., 2013). We pick MR reconstruction because it is an important instance of a compressive sensing problem, and many problems can be formulated as compressive sensing problems, for example super-resolution and in-painting. In addition, for MR reconstruction the question of how much data is needed is particularly important, since it is expensive to collect medical data.
For the three problems we identify scaling laws that describe the reconstruction quality as a function of the training set size, while simultaneously scaling network sizes. Such scaling laws have been established for NLP and classification tasks, as discussed below, but not for image reconstruction.
The experiments are conducted with a U-Net (Ronneberger et al., 2015) and the SOTA SwinIR (Liang et al., 2021), a transformer architecture. We primarily consider the U-Net since it is widely used for image reconstruction and acts as a building block in SOTA models for image denoising (Brooks et al., 2019; Gurrola-Ramos et al., 2021; Zhang et al., 2021; 2022) and accelerated MRI (Zbontar et al., 2018; Sriram et al., 2020). We also present results for denoising with the SwinIR. The SwinIR outperforms the U-Net, but we find that its scaling with the number of training examples does not differ notably. Our contributions are as follows:
• Empirical scaling laws for denoising. We train U-Nets of sizes 0.1M to 46.5M parameters with training set sizes from 100 to 100k images from the ImageNet dataset (Russakovsky et al., 2015) for Gaussian denoising with 20.17dB Peak-Signal-to-Noise Ratio (PSNR). While performance continues to increase for the largest training set and network sizes we consider, the rate of improvement for training set sizes beyond a few thousand images slows to a level indicating that even training on millions of images would only yield a marginal benefit, see Fig. 1(a).
We also train SOTA SwinIRs of sizes 4M to 129M parameters on the same range of training set sizes, see Fig. 2(a). Although the SwinIR performs better than the U-Net, its scaling behavior is essentially equivalent, and again beyond a few thousand images only marginal benefits are expected. Scaling up its training set and network size did, however, give benefits: the largest model we trained yields new SOTA results on four common test sets, improving by 0.05 to 0.22dB.
• Empirical scaling laws for compressive sensing. We train U-Nets of sizes from 2M to 500M parameters for 4x accelerated MRI with training set sizes from 50 to 50k images from the fastMRI dataset (Knoll et al., 2020b). We again find that beyond a dataset size of about 2.5k, the rate of improvement as a function of the training set size slows considerably, see Fig. 1(c). This indicates that while models are expected to improve with increasing training set size, we expect that training on millions of images does not significantly improve performance.
• Empirical scaling laws for super-resolution. We also briefly study super-resolution, and similarly as for denoising and compressive sensing find a slowing of the scaling law at moderate dataset sizes. See appendix C.
• Understanding scaling laws for denoising theoretically. Our empirical results indicate that the denoising performance of a neural network trained end-to-end does not increase as a function of training examples beyond a certain point. This is expected since once we reach a noise-specific error floor, more data is not beneficial. We make this intuition precise for a linear estimator learned with early stopped gradient descent to denoise data drawn from a d-dimensional linear subspace. We show that the reconstruction error is upper bounded by d/N plus a noise-dependent error floor. Once the error induced by learning the signal model, d/N, is small relative to the error floor, more training examples N are not beneficial.
Together, our empirical results show that neural networks for denoising, compressive sensing, and super-resolution applied in typical setups (i.e., Gaussian denoising with 20.17dB PSNR, and multi-coil accelerated MRI with 4x acceleration) already operate in a regime where the scaling laws for the training set size are slowing significantly, and thus even very large increases of the training data are not expected to improve performance substantially. Even relatively modest model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
2 RELATED WORK
Scaling laws for prediction problems. Under the umbrella of statistical learning theory convergence rates of 1/N or 1/ √ N have been established for a range of relatively simple models and distributions, see e.g. Wainwright (2019) for an overview.
For deep neural networks used in practice, a recent line of work has empirically characterized the performance as a function of training set size and/or network size for classification and NLP (Hestness et al., 2017; Rosenfeld et al., 2019; Kaplan et al., 2020; Bahri et al., 2021; Zhai et al., 2021; Ghorbani et al., 2021; Bansal et al., 2022). In those domains, the scaling laws persist even for very large datasets, as described in more detail below. In contrast, for image reconstruction, we find that the power-law behavior already slows considerably at relatively small numbers of training examples.
Rosenfeld et al. (2019) find power-law scaling of performance with training set and network size across models and datasets for language modeling and classification. However, because the work fixes either training set or network size, while scaling the other, the scaling laws span only moderate ranges before saturating at a level determined by the fixed quantity and not the problem specific error floor. The papers Kaplan et al. (2020); Bahri et al. (2021); Zhai et al. (2021); Ghorbani et al. (2021); Bansal et al. (2022) including ours scale the dataset size and model size simultaneously, resulting in an improved predictive power of the obtained scaling laws.
Hestness et al. (2017) study models for language and image classification, and attribute deviations from a power-law curve to a lack of fine-tuning the hyperparameters of very large networks. For transformer language models Kaplan et al. (2020) find no deviation from a power-law for up to a training set and network size of 1B images and parameters. Further, Zhai et al. (2021) find the performance for Vision Transformers (Dosovitskiy et al., 2020) for few-shot image classification to deviate from a power-law curve only at extreme model sizes of 0.3-1B parameters and 3B images.
The role of training set size in inverse problems. For image reconstruction and inverse problems in general, we are not aware of work studying scaling laws in a principled manner, covering different
problems and model architectures. But, when proposing a new method several works study performance as a function of training set and/or network size. However, those studies typically only scale the parameter of interest while fixing all other parameters, unlike our work, which scales dataset size and network size together, which is important for identifying scaling laws. Below, we review the training set sizes, and if available their scaling properties, used by the recent SOTA in image denoising and accelerated MRI.
Zhang et al. (2017) report that for DnCNN, a standard CNN, using more than 400 distinct images with data augmentation only yields negligible improvements. Chen et al. (2021) pre-train an image processing transformer (IPT) of 115.5M network parameters on ImageNet (1.1M distinct images), and report the performance after fine-tuning to a specific task as a function of the size of the pre-training dataset. IPT’s performance for denoising is surpassed by the latest SOTA in the form of the CNN-based DRUnet (Chen et al., 2021), the transformer-based SwinIR (Liang et al., 2021), and the Swin-Conv-Unet (Zhang et al., 2022), a combination of the two. Those models have significantly fewer network parameters and were trained on a training set consisting of only ∼10k images, leaving the role of training set size in image denoising open.
The number of available training images for accelerated MRI is limited to few publicly available datasets. Hence, current SOTA (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022) are trained on the largest datasets available (the fastMRI dataset), consisting of 35k images for knee and 70k for brain reconstruction (Zbontar et al., 2018; Muckley et al., 2021).
Adaptations of the MLP-Mixer (Tolstikhin et al., 2021) and the Vision Transformer (Dosovitskiy et al., 2020) to MRI (Mansour et al., 2022; Lin & Heckel, 2022) ablate their performance as a function of the training set size and find that non-CNN based approaches require more data to achieve a similar performance, but potentially also benefit more from even larger datasets. However, those experiments were run with fixed model sizes and only 3-4 realizations of the training set size, so it is unclear when performance saturates.
3 EMPIRICAL SCALING LAWS FOR DENOISING
In this section, we consider the problem of estimating an image x from a noisy observation y = x+z, where z is additive Gaussian noise z ∼ N (0, σ2zI). Neural networks trained end-to-end to map a noisy observation to a clean image perform best, and outperform classical approaches like BM3D (Dabov et al., 2007) and non-local means (Buades et al., 2005) that are not based on training data.
Datasets. To enable studying learned denoisers over a wide range of training set sizes we work with the ImageNet dataset (Russakovsky et al., 2015). Its training set contains 1.3M images of 1000 different classes. We reserve 20 random classes for validation and testing. We design 10 training set subsets SN of sizes N ∈ [100, 100000] (see Fig. 1(a)) and with Si ⊆ Sj for i ≤ j. To make the distributions of images between the subsets as homogeneous as possible, the number of images from
different classes within a subset differ by at most one. We generate noisy images by adding Gaussian noise with standard deviation σz = 25 (pixels in the range from 0 to 255), i.e., 20.17dB in PSNR.
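The noise model is straightforward to reproduce; a minimal sketch (image loading omitted, array values in [0, 255]):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma_z=25.0):
    """Return a noisy observation y = x + z with z ~ N(0, sigma_z^2)."""
    return x + rng.normal(0.0, sigma_z, size=x.shape)

# PSNR of the noisy observation for a peak value of 255:
# 20 * log10(255 / 25) = 20.17 dB.
print(f"{20 * np.log10(255.0 / 25.0):.2f} dB")
```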
Model variants and training. We train a CNN-based U-Net, a detailed description of the model architecture is in Appx. A.1. We also train the Swin-Transformer (Liu et al., 2021) based SwinIR model (Liang et al., 2021) that achieves SOTA for a variety of image reconstruction tasks, including denoising. This model is interesting since transformers scale well with the number of training examples for other domains, e.g., for image classification pre-training (Dosovitskiy et al., 2020).
For U-Net and SwinIR we vary the number of network parameters as depicted in Fig. 1 (b) and Fig. 2 (b) respectively. Appx. A.1 and A.3 contain detailed descriptions on how the models are trained and the network size is adapted.
Results and discussion. Fig. 1(a) and Fig. 2(a) show the reconstruction performances of the best U-Net and SwinIR respectively over all considered network sizes. Our main findings are as follows.
A training set size of around 100 is already sufficient to train a decent image denoiser. Beyond 100, we can fit a linear power law to the performance of the U-Net with a scaling coefficient α = 0.0048 that approximately holds up to training set sizes of about 6k images. Beyond 6k, we can fit a second linear power law with significantly smaller scaling coefficient α = 0.0019.
Similarly, for the SwinIR we can fit a power law with coefficient α = 0.0050 in the regime of little training data and a power law with significantly smaller coefficient α = 0.0017 in the regime of moderate training data, thus the two architectures scale essentially equivalently.
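The scaling coefficients α reported here are obtained by fitting a line in log-log space; a sketch with placeholder (N, PSNR) values (the actual measurements underlie Fig. 1(a) and Fig. 2(a)):

```python
import numpy as np

# Placeholder values; the real measurements underlie Figs. 1(a) and 2(a).
N = np.array([100, 300, 600, 1_000, 3_000, 6_000])
psnr = np.array([31.42, 31.60, 31.72, 31.80, 31.98, 32.10])

# Fit log10(PSNR) = alpha * log10(N) + b, i.e., PSNR ~ 10**b * N**alpha.
alpha, b = np.polyfit(np.log10(N), np.log10(psnr), deg=1)
print(f"scaling coefficient alpha = {alpha:.4f}")
```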
While denoising benefits from more training examples, the drop in scaling coefficient indicates that scaling from thousands to millions of images is only expected to yield relatively marginal performance gains. While the gains are small and we expect gains from further scaling to be even smaller, our largest SwinIR trained on 100k images yields a new SOTA for Gaussian color image denoising. On four common test sets it achieves an improvement between 0.05 to 0.22dB, see Appx. B. Visualizations of reconstructions from models along the curves in Fig. 1(a) and Fig. 2(a) can be found in Fig. 4, Appx. A.
Comparing the U-Net scaling to SwinIR scaling (Fig. 2(a)) we see that the effect of large training sets on the performance of the U-Net can not make up for the improved modeling error that stems from the superior network architecture of the SwinIR. In fact, the simulated scaling coefficient α = 0.0019 predicts a required training set size of about 150M images for U-Net to achieve the best performance of the SwinIR, and that is an optimistic prediction since it assumes that the scaling does not slow down further for another 3 orders of magnitude. Thus, model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
An interesting question is whether different noise levels lead to different data requirements, as indicated by a different scaling behavior. We investigate the performance of the U-Net for a smaller noise level σz = 15 in Appx. D.2. While the results are still preliminary, the qualitative scaling behavior is similar; the differences are that the scaling law pertaining to smaller noise is slightly steeper, and overall the performance improves by about 1.3dB in PSNR.
Finally, note that for our results we choose to re-sample the noise per training image in every training epoch, since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is more similar to a setup in which the noise statistics are unknown like in real-world noise removal. See Appx. D.1 for additional results where the noise is fixed. We found that compared to re-sampling the noise the performance of a U-Net drops by 0.3dB for small training set sizes. As the training set size increases the performance approaches the one of re-sampling the noise resulting in a steeper scaling law that flattens at slightly larger training set sizes.
Robustness of our finding of slowing scaling laws. Our main finding for denoising is that the scaling laws slow at relatively small amounts of training data points (i.e., the scaling coefficient becomes very small). We next argue why we think that this is a robust finding.
In principle it could be that the networks are not sufficiently large for the performance to improve further as the dataset size increases. However, for the U-Net and training set sizes of 10k and 30k the
performance increase already slows, and for those training set sizes, increasing the network size does not improve performance, as shown in Fig. 1(b). For smaller network sizes the curves in Fig. 1(b) first increase and then decrease indicating that we found network sizes close to the optimum.
The parameter scaling of the SwinIR in Fig. 2(b) indicates that for training set sizes of 10k and 100k the performance still benefits from larger networks. Hence, the power law for those training set sizes in Fig. 2(a) can become slightly steeper once we further increase the network size. However, for the slope of the flat power law in (a) to reach the slope of the steep power law, the parameter scaling in (b) for 100k training images would need to hold for another 2 orders of magnitude, i.e. about 12B instead of the currently used 129M network parameters, which is very unlikely considering that the current network sizes already suffice to saturate the performance for small/moderate training set sizes.
Finally, it could be that with higher quality or higher diversity of the data, the denoising performance would further improve. In our experiments, we increase the training set sizes by adding more images from the same classes of ImageNet, and we continue to test on different classes. Thus, it could be that adding training examples from a different, more diverse data source leads to less of a slowing of the scaling law in Fig. 1(a). We test this hypothesis by scaling our training set of 3k images to 10k images, not by adding more images from ImageNet, but by adding images from the datasets that are used to train the original SwinIR, i.e., we add all images from DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017) and BSD500 (Arbeláez et al., 2011) and add the remaining images from WED (Ma et al., 2017). Keeping all other hyperparameters fixed, the U-Net obtains for this dataset a PSNR of 32.239dB, which is slightly worse than the 32.252dB it achieves with our training set of 10k images only from ImageNet. Hence, we reject the hypothesis that the drop in scaling coefficients can be explained by how the training sets are designed in our experiment.
4 EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING
We consider compressive sensing (CS) to achieve 4x accelerated MRI. MRI is an important medical imaging technique for its non-invasiveness and high accuracy. Due to the physics of the measurement process, MRI is inherently slow. Accelerated MRI aims at significantly reducing the scan time by undersampling the measurements. In addition, MRI scanners rely on parallel imaging, which uses multiple receiver coils to simultaneously collect different measurements of the same object.
This leads to the following reconstruction problem. We are given measurements $y_i \in \mathbb{C}^m$ of the image $x \in \mathbb{C}^n$ as $y_i = MFS_i x + \text{noise}_i$ for $i = 1, \ldots, C$, and our goal is to reconstruct the image from those measurements. Here, C is the number of receiver coils, the diagonal matrices $S_i \in \mathbb{C}^{n\times n}$ are the sensitivity maps modelling the signal strength perceived by the i-th coil, $F \in \mathbb{C}^{n\times n}$ is the discrete Fourier transform (DFT), and $M \in \mathbb{C}^{m\times n}$ is a binary mask implementing the undersampling. Classical CS approaches (Lustig et al., 2008) first estimate the sensitivity maps with a method such as ESPIRiT (Uecker et al., 2014), and second estimate the unknown image by solving a regularized optimization problem, such as total-variation norm minimization. Recently, deep learning based methods have been shown to significantly outperform classical CS methods due to their ability to learn more complex and accurate signal models from data (Knoll et al., 2020a; Muckley et al., 2021).
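A sketch of simulating this forward model for a single image (the image, the coil sensitivities $S_i$, and the mask are synthetic placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
n_side, C, sigma = 64, 4, 0.01           # image side, coils, noise level

x = rng.normal(size=(n_side, n_side))     # stand-in for an MR image

# Smooth synthetic coil sensitivity maps S_i (placeholders).
yy, xx = np.mgrid[0:n_side, 0:n_side] / n_side
S = np.stack([np.exp(-((yy - i / C) ** 2 + xx ** 2)) for i in range(C)])

# Binary undersampling mask M; here every 4th column as a placeholder.
mask = np.zeros((n_side, n_side))
mask[:, ::4] = 1.0

# y_i = M F S_i x + noise_i for each coil i.
noise = sigma * (rng.normal(size=(C, n_side, n_side))
                 + 1j * rng.normal(size=(C, n_side, n_side)))
y = mask * np.fft.fft2(S * x, axes=(-2, -1)) + noise
```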
Datasets. To explore the performance of learning based MR reconstruction over a wide range of training set sizes, we use the fastMRI multi-coil brain dataset (Knoll et al., 2020b), which is the largest publicly available dataset for MRI. The dataset consists of images of different contrasts. To ensure that the statistics of the dataset are as homogeneous as possible, we take the subset of the dataset corresponding to a single contrast resulting in 50k training images. We design training set subsets SN of size N ∈ [50, 50000] (see Fig. 1 (c)) with Si ⊆ Sj for i ≤ j. For more information on selection, division and subsampling of the dataset see Appx. A.2.
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with 4 blocks per encoder/decoder. The network is trained end-to-end to map a coarse reconstruction $\hat{x}_{\text{CR}}$ to the ground truth image x. The coarse reconstruction is obtained as
$$
\hat{x}_{\text{CR}} = \left(\sum_{i=1}^{C} |F^{-1} y_{i,\text{ZF}}|^2\right)^{1/2},
$$
where $F^{-1}$ is the inverse DFT, and $y_{i,\text{ZF}}$ are the undersampled measurements of the i-th coil with missing entries filled with zeros. We vary the number of network parameters as depicted in Fig. 1 (d). Appx. A.2 contains detailed descriptions on how the model is trained and the network size is adapted.
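A sketch of this coarse root-sum-of-squares reconstruction from the zero-filled measurements (reusing the array layout of the forward-model sketch above):

```python
import numpy as np

def coarse_reconstruction(y_zf):
    """Root-sum-of-squares of the inverse DFTs of the zero-filled
    per-coil measurements; y_zf has shape (C, n, n)."""
    coil_images = np.fft.ifft2(y_zf, axes=(-2, -1))
    return np.sqrt(np.sum(np.abs(coil_images) ** 2, axis=0))
```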
Results and discussion. For each training set size Fig. 1 (c) shows the reconstruction performance in structural similarity (SSIM) of the best model over all simulated network sizes. There are 2 main findings. First, we can fit a linear power law with a scaling coefficient α = 0.0072 that holds up to training set sizes of about 2.5k images. Second, for training set sizes starting from 5k we can fit a second linear power law but with significantly smaller scaling coefficient α = 0.0026.
Similar to denoising in Section 3, we conclude that while accelerated MRI slightly benefits from more training examples, the drop in scaling coefficient indicates that it is unlikely that even training set sizes on the order of hundreds of millions of images result in substantial gains. Visualizations of reconstructions from models along the curve in Fig. 1 (c) can be found in Fig. 5, Appx. A.2.
Robustness of our finding of slowing scaling laws. Next, we argue why the drop in scaling coefficient for accelerated MRI is a robust finding.
In Fig. 1 (d) we demonstrate that the network size does not bottleneck the performance of our largest training sets. For each training set size the performance as a function of the number of network parameters is relatively flat. Even small training sets benefit from large networks before their performance slightly decays for very large networks. Hence, we expect that for large training sets the performance as a function of network parameters would not increase significantly before decaying.
Finally, is there a different type of training data that would improve the scaling coefficient? We don’t think so, since all experiments are for the very specific task of reconstructing brain images of one particular contrast, which means that the examples we add to increase the size of the training sets are already the data most suitable to solve this task.
5 UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY
We study a simple linear denoiser that is trained end-to-end to reconstruct a clean image from a noisy observation, in order to understand how the error as a function of the number of training examples for inverse problems is expected to look like.
We define a joint distribution over a signal and the corresponding measurement (x, y) as follows. Consider a $d < n$ dimensional subspace of $\mathbb{R}^n$ parameterized by the orthonormal basis $U \in \mathbb{R}^{n\times d}$. We draw a signal at random from the subspace as $x = Uc$, where $c \sim \mathcal{N}(0, I)$, and an associated noisy measurement as $y = x + z$, where $z \sim \mathcal{N}(0, \sigma_z^2 I)$ is Gaussian noise. The subspace is unknown, but we are given a training set $\{(x_1, y_1), \ldots, (x_N, y_N)\}$, consisting of examples drawn iid from the joint distribution over (x, y).
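A minimal sketch of sampling a training set from this model (the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, N, sigma_z = 256, 10, 1_000, 0.5       # illustrative dimensions

# Orthonormal basis U of a random d-dimensional subspace of R^n.
U, _ = np.linalg.qr(rng.normal(size=(n, d)))

C = rng.normal(size=(d, N))                  # coefficients c ~ N(0, I)
X = U @ C                                    # clean signals as columns
Y = X + sigma_z * rng.normal(size=(n, N))    # noisy observations
```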
We assume that the signal lies in a low dimensional subspace. Assuming that data lies in a lowdimensional subspace or more general, a union of low-dimensional subspaces, is common, for example it underlies the denoising of natural images via wavelet thresholding (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). Mohan et al. (2020) found that even deep learning based denoisers implicitly perform a projection onto an adaptively-selected low-dimensional subspace that captures the features of natural images.
We consider a linear estimator of the form $f_W(y) = Wy$, and measure performance in terms of the expected mean-squared reconstruction error, defined as $R(W) = \frac{1}{d}\mathbb{E}\big[\|Wy - x\|_2^2\big]$, where expectation is over the joint distribution of (x, y).
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk R) is given by $W^* = \frac{1}{1+\sigma_z^2} U U^T$. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(W^*) = \sigma_z^2/(1+\sigma_z^2)$.
An estimator based on subspace estimation. The optimal estimator $W^*$ requires knowledge of the unknown subspace. We can learn the subspace from the noisy data by performing principal component analysis on $Y = [y_1, \ldots, y_N]$. Specifically, we estimate the subspace as the $d$ leading singular vectors of the empirical covariance matrix $YY^T \in \mathbb{R}^{n\times n}$, denoted by $\hat{U} \in \mathbb{R}^{n\times d}$. If the measurement noise is zero (i.e., $\sigma_z^2 = 0$), $\hat{U}$ is an orthonormal basis for the d-dimensional subspace U provided we observe at least d many linearly independent signals, which occurs with probability one if we draw data according to the model defined above, and if the number of training examples obeys N ≥ d. There is a vast literature on PCA; see Vershynin (2011; 2018); Rudelson & Vershynin (2010); Tropp (2012) for tools from high-dimensional probability to analyze the PCA estimate.

Now consider the estimator $W_{\text{PCA}} = \frac{1}{1+\sigma_z^2}\hat{U}\hat{U}^T$ based on N noisy training points. The estimator assumes knowledge of the noise variance, but it is not difficult to estimate it relatively accurately. The following result, proven in Appx. F, characterizes the associated risk.

Theorem 1. Suppose that the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N$. For a numerical constant c, with probability at least $1 - n^{-10} - 3e^{-d} - e^{-n}$, the risk of the PCA estimate is bounded by
$$
R(W_{\text{PCA}}) \le R(W^*) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N}.
$$
Thus, as long as the number of training examples N is sufficiently large relative to (d+ nσ2z), the risk of the PCA-estimator is close to the risk of the optimal estimator.
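A sketch of computing $W_{\text{PCA}}$ from the noisy training data (continuing the variables of the sampling sketch above):

```python
import numpy as np

def pca_estimator(Y, d, sigma_z):
    """W_PCA = U_hat @ U_hat.T / (1 + sigma_z^2), where U_hat holds the
    d leading left singular vectors of Y (equivalently, the d leading
    eigenvectors of the empirical covariance Y @ Y.T)."""
    U_hat, _, _ = np.linalg.svd(Y, full_matrices=False)
    U_hat = U_hat[:, :d]
    return (U_hat @ U_hat.T) / (1.0 + sigma_z ** 2)
```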
Estimator learned end-to-end. We now consider an estimator learned end-to-end, by applying gradient descent to the empirical risk $L(W) = \sum_{i=1}^N \|Wy_i - x_i\|_2^2$, and we regularize via early-stopping the gradient descent iterations. The risk of the estimate after k iterations of gradient descent, $W_k$, is bounded by the next result. This estimator mimics the supervised training we consider throughout this paper, with the difference that here we consider a simple linear model, whereas in the previous sections we trained a neural network.

Theorem 2. Let $W_k$ be the matrix obtained by applying k iterations of gradient descent with stepsize η starting at $W_0 = 0$ to the loss L(W). Consider the regime where the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N \le \xi d/\sigma_z^2$ for an arbitrary ξ, and $N\log(N) \le n$. For an appropriate choice of the stepsize η there exists an optimal early stopping time $k_{\text{opt}}$ at which the risk of the estimator $f_W(y) = W_{k_{\text{opt}}} y$ is upper bounded, with probability at least $1-2e^{-N/8}-2e^{-N/18}-5n^{-9}-5e^{-d}-2e^{-n}-e^{-N}-2e^{-n/2}$, by
$$
R(W_{k_{\text{opt}}}) \le (8+2\xi)R(W^*) + c\left(1+\frac{n\sigma_z^2}{d}\right)\log n\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\xi\sqrt{\frac{(d+\sigma_z^2 n)\log n}{N}},
$$
where c is a numerical constant.
The proof is provided in Appx. G. Similar to Thm. 1, the first term in the bound corresponds to the noise-dependent error floor, and the two remaining terms decrease in $(d + n\sigma_z^2)/N$ and represent the error induced by learning the estimator with a finite number of training examples. Once this error is small relative to the error floor, more training examples do not improve performance. See Appx. E for a more detailed discussion of the estimator and the assumptions of Thm. 2.
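A sketch of the early-stopped estimator; selecting the stopping time on held-out data is our simplification of choosing $k_{\text{opt}}$:

```python
import numpy as np

def early_stopped_gd(X, Y, X_val, Y_val, eta, num_iters):
    """Gradient descent on L(W) = ||W Y - X||_F^2 starting at W = 0,
    returning the iterate with the smallest validation error (the
    factor 2 of the gradient is absorbed into eta)."""
    W = np.zeros((X.shape[0], Y.shape[0]))
    best_W, best_err = W, np.inf
    for _ in range(num_iters):
        W = W - eta * (W @ Y - X) @ Y.T
        err = np.mean((W @ Y_val - X_val) ** 2)
        if err < best_err:
            best_W, best_err = W, err
    return best_W
```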
Discussion. In Fig. 3, we plot the risk as a function of the training set size for the early-stopped empirical risk minimization (ERM) and the PCA-based estimator. Starting at a training set size of N = 100, we see that the risk minus the optimal risk follows approximately a power law, i.e., log(R(W)−R(W∗)) ≈ −α log(N), as suggested by Thms. 1 and 2. In practice, however, we don’t know the risk of the optimal estimator, R(W∗). Therefore, in the second row of Fig. 3 and throughout our empirical results we plot the risk log(R(W)) as a function of the number of training examples log(N) and distinguish different regions by fitting approximate scaling coefficients α to log(R(W)) ≈ −α log(N). In our theoretical example, we can identify three regions, coined in Hestness et al. (2017): the small data region, in which the training set size does not suffice to learn a well-performing mapping; the power-law region, in which the performance decays approximately as N^α with α < 0; and a problem-specific irreducible error region, which is R(W∗) here. Note that the power-law coefficients in the risk plot R(W) are smaller than those when plotting R(W)−R(W∗), and are only approximate scaling coefficients. Already in this highly simplified setup of an inverse problem the true scaling coefficients heavily depend on model parameters such as the noise variance, as can be seen in Fig. 3. The scaling coefficients further vary with the signal dimension d and the ambient dimension n (see Appx. E.1).
6 CONCLUSION
We found that the performance improvement of deep learning based image reconstruction as a function of the number of training examples slows already at moderate training set sizes, indicating that only marginal gains are expected beyond a few thousand examples.
Limitations. This finding is based on studying three different reconstruction problems (denoising, super-resolution, and compressive sensing), two architectures (U-net and SwinIR), and for each setup we extensively optimized hyperparameters and carefully scaled the networks. Scaling gave new state-of-the-art results on four denoising datasets, which provides some confidence in the setup.
Moreover, our statements necessarily pertain to the architectures and metrics we study. It is possible that other architectures, or other ways of scaling the architectures, yield larger improvements. It is also widely acknowledged that image quality metrics (such as SSIM and PSNR) do not fully capture the perceived image quality by humans, and it could be that more training data yields improvements in image quality that are not captured well by SSIM and PSNR.
Most importantly, our findings pertain to standard in-distribution evaluation, i.e., the test images are from the same distribution as the training images. To achieve robustness against testing on images out of the training distribution recent work indicates a positive effect of larger and more diverse training sets (Miller et al., 2021; Darestani et al., 2022; Nguyen et al., 2022). Thus it is possible that the out-of-distribution performance of image reconstruction methods improves when trained on a larger and a more diverse dataset, even though the in-distribution performance does not improve further.
Future research. We focus on supervised image reconstruction methods. For image denoising and other image reconstruction problems, self-supervised approaches also perform very well (Laine et al., 2019; Wang et al., 2022; Zhou et al., 2022), even though supervised methods perform best if clean images are available. It would be very interesting to investigate the scaling laws for self-supervised methods; perhaps more training examples are required to achieve the performance of a supervised setup. We leave this for future work.
REPRODUCIBILITY
The repository at https://github.com/MLI-lab/Scaling_Laws_For_Deep_Learning_Based_Image_Reconstruction contains the code to reproduce all results in the main body of this paper.
ACKNOWLEDGMENTS
The authors acknowledge support by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the DAAD, and the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
A DETAILS OF THE EXPERIMENTAL SETUPS
A.1 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a U-Net presented in Fig.1 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 1 (a). In these examples the improvement in perceived image quality from increasing the training set size from 100 to 1000 is larger than from increasing from 1000 to 10000 or from 10000 to 100000. This correlates with our quantitative findings in Fig. 1 (a).
Next, we describe the experimental details. We train U-Nets with two blocks in the encoder and decoder part respectively and skip connections between blocks. Each block consists of two convolutional layers with LeakyReLU activation and instance normalization (Ulyanov et al., 2017) after every layer, where the number of channels is doubled (halved) after every block in the encoder (decoder). The downsampling in the encoder is implemented as average pooling and the upsampling in the decoder as transposed convolutions. As proposed by Zhang et al. (2017) we train a residual denoiser that learns to predict y − x instead of directly predicting x, which improves performance. We scale the network size by increasing the width of the network by scaling the number of channels per (transposed) convolutional layer. For denoising we trained U-Nets of 7 different sizes. We vary the number of channels in the first layer in {16, 32, 64, 128, 192, 256, 320}, which corresponds to {0.1, 0.5, 1.9, 7.4, 16.7, 30.0, 46.5} million parameters. We do not vary the depth, since this would change the dimension of the informational bottleneck in the U-Net and thus change the model family itself.
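A sketch of such a U-Net in PyTorch; details not fully specified above (kernel sizes, the LeakyReLU slope, and the bottleneck block) are our assumptions:

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two conv layers, each followed by LeakyReLU and instance norm."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.2),
        nn.InstanceNorm2d(c_out),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.2),
        nn.InstanceNorm2d(c_out),
    )

class UNet(nn.Module):
    """Two-block encoder/decoder U-Net whose width is scaled via
    `channels`; trained to predict the residual y - x."""
    def __init__(self, channels=64):
        super().__init__()
        c = channels
        self.enc1, self.enc2 = conv_block(3, c), conv_block(c, 2 * c)
        self.pool = nn.AvgPool2d(2)
        self.mid = conv_block(2 * c, 4 * c)
        self.up2 = nn.ConvTranspose2d(4 * c, 2 * c, 2, stride=2)
        self.dec2 = conv_block(4 * c, 2 * c)
        self.up1 = nn.ConvTranspose2d(2 * c, c, 2, stride=2)
        self.dec1 = conv_block(2 * c, c)
        self.out = nn.Conv2d(c, 3, 1)

    def forward(self, y):
        e1 = self.enc1(y)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)
```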
The exact training set sizes we consider are {0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60, 100} thousand images from ImageNet. We center crop the training images to 256 × 256 pixels. We also tried a smaller patch size of 128× 128 pixels, but larger patches showed to have a better performance in the regime of large training set sizes, which is why the results presented in the main body are for patch size 256× 256. For a comparison between the scaling behavior of the two different patch sizes see Fig. 9, Appx. D.3.
We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. For validation and testing we use 80 and 300 images respectively taken from 20 random classes that are not used for training.
We use mean-squared-error loss and Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999. For moderate training set sizes up to 3000 images we find that heuristically adjusting the initial learning rate with the help of an automated learning rate annealing performs well. To this end, we start with a learning rate of 10−4 and increase after every epoch by a factor of 2 until the validation loss does not improve for 3 consecutive epochs. We then load the model checkpoint from the learning rate that was still performing well and continue with that learning rate. We observe that this scheme typically picks an initial learning rate reduced by factor 2 for every increase in the number of channels C. For training set sizes larger than 3000 we directly apply this rule to pick an initial learning rate without annealing as we found this to give slightly better results. In particular, for number of channels C = {128, 192, 256, 320} we used initial learning rates η = {0.0032, 0.0016, 0.0008, 0.0004}. For small training sets up to 1000 images we found that batch size of 1 works best. For larger training sets we use a batch size of 10 and found that further increasing the batch size does not improve performance.
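The learning rate annealing heuristic described above can be sketched as follows (the callbacks train_one_epoch, validate, save_checkpoint, and load_checkpoint are placeholders):

```python
def anneal_initial_lr(train_one_epoch, validate,
                      save_checkpoint, load_checkpoint, lr=1e-4):
    """Double the learning rate every epoch until the validation loss
    has not improved for 3 consecutive epochs, then reload the
    checkpoint of the last learning rate that still improved."""
    best_loss, best_lr, stale = float("inf"), lr, 0
    while stale < 3:
        train_one_epoch(lr)
        loss = validate()
        if loss < best_loss:
            best_loss, best_lr, stale = loss, lr, 0
            save_checkpoint()
        else:
            stale += 1
        lr *= 2
    load_checkpoint()
    return best_lr
```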
We do not put a limit on the amount of compute for training. We use an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 10 epochs or 6 epochs for training set sizes starting from 6000 images. Once the learning rate drops to 10−5 we observe near to no gains in validation loss and stop the training after 10 additional epochs.
For training set sizes up to 1000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1800 GPU hours for the experiments in Fig. 1(a),(b).
A.2 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for compressive sensing MRI presented in Fig.1 (c),(d) and Section 4.
In addition, Fig. 5 shows examples of reconstructions from different models along the performance curve in Fig. 1 (c). In these examples the improvement in perceived image quality from increasing the training set size from 500 to 2500 is larger than from increasing from 2500 to 10000 or from 10000 to 50000. This correlates with our quantitative findings in Fig. 1 (c).
The first row in Fig. 5 shows an example in which all models, including the one trained on the largest training set, fail to recover a fine detail. This is a known problem in accelerated MRI and has been documented in both editions of the fastMRI challenge (Knoll et al., 2020a; Muckley et al., 2021), in which for all methods examples could be found where fine details were not recovered. However, the question remains whether more training data or better models would help, or whether the details are simply not recoverable because the information is lost due to the large undersampling factors considered in the fastMRI challenges and in this work.
Next, we describe the experimental details. For compressed sensing in the context of accelerated MRI we trained U-Nets of 14 different sizes. We vary the number of channels in the first layer in {16, 32, 48, 64, 96, 112, 128, 144, 160, 176, 192, 208, 224, 256}, which corresponds to {2, 8, 18, 31, 70, 95, 124, 157, 193, 234, 279, 327, 380, 496} million network parameters. The exact training set sizes we consider are {0.05, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50} thousand AXT2 weighted images from fastMRI multi-coil brain dataset (Zbontar et al., 2018), where AXT2 corresponds to all images of one type of contrast. We focused on images only from this type to make the statistics of our datasets as homogeneous as possible. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. We use 4732 and 730 additional images for testing and validation.
We consider an acceleration factor of 4 meaning that we only measure 25% of the information. We obtain the 4 times undersampled measurements by masking the fully sampled measurement y with an equispaced mask with 8% center fractions meaning that in the center of y we take all the measurements and take the remaining ones at equispaced intervals.
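A sketch of such a mask over k-space columns (following the fastMRI convention of masking whole columns; the exact spacing adjustment of the official implementation is simplified here):

```python
import numpy as np

def equispaced_mask(num_cols, acceleration=4, center_fraction=0.08):
    """Binary column mask: fully sample a centered block of columns and
    take the remaining columns at equispaced intervals. The official
    fastMRI masks additionally adjust the spacing so that the overall
    sampling rate matches 1/acceleration exactly."""
    mask = np.zeros(num_cols)
    num_center = round(num_cols * center_fraction)
    start = (num_cols - num_center) // 2
    mask[start:start + num_center] = 1.0
    mask[::acceleration] = 1.0
    return mask
```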
We use structural similarity (SSIM) loss and RMSprop optimizer with α = 0.99 as this is the default in the fastMRI repository (Zbontar et al., 2018) and we found no improvement by replacing it with Adam. We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that starts at a learning rate of 10−3 and decays by a factor 0.1 if the validation SSIM has not improved by at least 10−4 for 5 epochs. Once the learning rate drops to 10−6 we stop the training after 10 additional epochs. Only for the largest training set sizes 25k,50k we found that an additional drop to 10−7 resulted in further performance gains. We use a batch size of 1.
For training set sizes up to 5k we train three models with random seeds and pick the best. For larger training set sizes we only run one seed, since the variance between runs decreased.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 4250 GPU hours for the experiments in Fig. 1 (c),(d).
A.3 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH THE
SWINIR
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a SwinIR presented in Fig.2 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 2 (a). Despite the best SwinIR clearly outperforming the best U-Net in terms of PSNR, it is difficult for the naked eye to notice large differences in the quality of the reconstructions. This indicates that for Gaussian denoising both models already operate in a regime where the improvements to be made are only marginal.
Next, we describe the experimental details. To obtain Fig. 2 we train the SwinIR for color denoising from Liang et al. (2021) on the same training sets mentioned in Appx. A.1 with {0.1, 0.3, 1, 10, 100} thousand images from ImageNet. Instead of center cropping the training images to 256×256 we have to crop to 128×128 pixels, as larger input patches would make it computationally infeasible for us to train large versions of the SwinIR. The largest SwinIR alone took over 2 months to train on 4 NVIDIA A40 GPUs.
We train 4 different network sizes with {3.7, 11.5, 41.8, 128.9} million parameters. We denote the four network sizes as small(S)/middle(M)/large(L)/huge(H).
The training details and network configurations are as follows. The default SwinIR for denoising (Liang et al., 2021) was proposed for a training set size of about 10k images, 11.5M network parameters and was trained with a batch size of 8 for T =1280 epochs, where the learning rate is halved at [0.5T ,0.75T , 0.875T , 0.9375T ] epochs. We keep the learning rate schedule but adjust the maximal number of epochs T according to the training set size. Table 1 shows batch size and maximal number of epochs for every experiment in Fig. 2. We did not optimize over the choice of the batch size but picked the batch size as prescribed by the availability of computational resources.
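In PyTorch, this schedule corresponds to halving the learning rate at fixed milestones via MultiStepLR; a sketch (the model and the initial learning rate are placeholders):

```python
import torch

model = torch.nn.Linear(8, 8)                    # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

T = 1280                                          # total epochs
milestones = [int(T * f) for f in (0.5, 0.75, 0.875, 0.9375)]
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=milestones, gamma=0.5)  # halve at milestones

# Training loop (schematic):
# for epoch in range(T):
#     train_one_epoch(model, optimizer)
#     scheduler.step()
```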
See Liang et al. (2021) for a detailed description of the SwinIR network architecture. We vary the network size by adjusting the number of residual Swin Transformer blocks, the number of Swin Transformers per block, the number of attention heads per Swin Transformer, the number of channels in the input embedding and the width of the fully connected layers in a Swin Transformer. Table 2 contains a summary of the settings. When scaling up the network size, we invested in the parameters that seemed to be most promising in the ablation studies in Liang et al. (2021).
The experiments were conducted on four NVIDIA A40 and four NVIDIA RTX A6000 GPUs. We used about 13000 GPU hours for the transformer experiments in Fig. 2. For training the models in parallel on multiple GPUs we utilize the torch.distributed package with the gloo backend instead of the faster nccl backend, which was unfortunately not available on our hardware at that time.
B BENCHMARKING OUR MODELS FOR GAUSSIAN IMAGE DENOISING
In this Section, we evaluate the models we trained for image denoising in Section 3 on four common test sets from the literature and show that the largest SwinIR trained on the largest dataset achieves new SOTA for all four test sets and the considered noise level.
In Fig. 2 in the main body we compared the performance of the U-Nets and the SwinIRs trained on subsets of ImageNet for image denoising. The models are evaluated on a test set sampled from ImageNet. We observed that while SwinIRs significantly outperform U-Nets, the performance gain from increasing the training set size slows already at moderate training set sizes for both architectures equally. However, there is still a moderate performance gain in scaling the models, and thus we expect the largest SwinIR trained on the largest dataset to outperform the original SwinIR from Liang et al. (2021). Our results in this section show that this is indeed the case.
In Table 3 we evaluate on the standard test sets for Gaussian color image denoising: CBSD68 (Martin et al., 2001), Kodak24 (Franzen, 1999), McMaster (Zhang et al., 2011) and Urban100 (Huang et al., 2015). We observe a significant performance difference between the best U-Net (46.5M parameters, 100k training images) and the transformer-based methods. As expected, our largest SwinIR trained on the largest dataset, SwinIR 100/H, outperforms the original SwinIR, and also the SCUnet (Zhang et al., 2022), a later SOTA model that has been demonstrated to outperform the original SwinIR.
We also depict the gains for the SwinIR from just scaling up the network size and then from scaling up network size and training set size. While this led to a new SOTA for Gaussian image denoising, note that on the downside training the SwinIR 10/M, which is comparable to the original SwinIR, took about 2 weeks on 4 NVIDIA A40 GPUs, while training the SwinIR 100/H took over 2 months.
C EMPIRICAL SCALING LAWS FOR IMAGE SUPER-RESOLUTION WITH A U-NET
In this Section, we consider the problem of super-resolution, i.e., estimating an high-resolution image from a low-resolution version of the image. This can be viewed as a compressive sensing problem, since we can view the super-resolution problem as reconstructing a signal x ∈ Rn from a downsampled version y = Ax, where the matrix A ∈ Rm×n implements a downsampling operation like bicubic downsampling, or blurring followed by downsampling. As first shown in the pioneering work of Dong et al. (2014) data driven neural networks trained end-to-end outperform classical model-based approaches (Gu et al., 2012; Michaeli & Irani, 2013; Timofte et al., 2013).
Dong et al. (2014) reports that for super-resolution with a simple three-layer CNN, the gains from very large training sets do not seem to be as impressive as in high-level vision problems like image classification. Liang et al. (2021) plot the super-resolution performance of the SOTA SwinIR model as a function of the training set size up to 3600 training examples for a fixed network size. When plotting their results on a logarithmic scale, we observe that the performance improvement follows a power-law. It is unclear, however, whether this power law slows beyond this relatively small number of images.
In this Section, we obtain scaling laws for super-resolution for a U-Net over a wide range of training set and network sizes, similar as for denoising and compressive sensing in the main body.
Datasets. We use the same training, validation and test sets as described in Appx. A.1 with training set sizes N ∈ [100, 100k] images from ImageNet. On top we add two larger training sets of size 300k and 600k. Instead of center cropping the training images to 256× 256 we follow the super-resolution experiments in Liang et al. (2021) and train on images cropped to 128 × 128 pixels. We consider super-resolution of factor 2 so the low-resolution images have size 64 × 64. The low-resolution images are obtained with the bicubic downsampling function of Python’s PIL.Image package.
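A sketch of producing the training pairs (the file path is a placeholder):

```python
from PIL import Image

img = Image.open("example.png").convert("RGB")    # placeholder path
w, h = img.size                                   # center crop to 128x128
left, top = (w - 128) // 2, (h - 128) // 2
hi_res = img.crop((left, top, left + 128, top + 128))
low_res = hi_res.resize((64, 64), Image.BICUBIC)  # 2x bicubic downsampling
```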
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with only one block per encoder/decoder as this resulted in slightly better results than with two blocks. The network is trained end-to-end to map a coarse reconstruction, obtained through bicubic upsampling, to the residual between the coarse reconstruction and the high-resolution ground truth image.
We vary the number of the channels in the first layer in {8, 16, 32, 64, 128, 192, 256, 320, 448, 564}, which corresponds to {0.007, 0.025, 0.10, 0.40, 1.6, 3.6, 6.4, 10.0, 20.0, 31.2} million network parameters. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples.
We use the ℓ1-loss and the Adam optimizer with its default settings. For all experiments we find a good initial learning rate with the same annealing strategy as described in Appx. A.1. However, instead of picking the largest learning rate for which the validation loss does not diverge, we pick the second largest, which leads to slightly more stable results. We start the annealing with a learning rate of 10−5. In the few cases where our heuristic leads to a degenerate training curve, typically due to picking a learning rate that is significantly too small or too large, starting the annealing with a smaller learning rate of 10−6 resolves the problem.
We do not put a limit on the amount of compute invested into training. To this end, we deploy an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 8 epochs. We stop the training once the validation loss did not improve for two consecutive learning rates. For training sets up to 10000 images we found that batch size of 1 works best. For larger training sets we use a batch size of 10.
For training set sizes up to 10000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1500 GPU hours for the experiments in Fig. 6.
Results and discussion. For each training set size, Fig. 6(a) shows the reconstruction performance in PSNR of the best model over all simulated network sizes in Fig. 6(b). Since the curves per training set size in Fig. 6(b) are relatively flat, further scaling up the network size is not expected to significantly improve the performance on the studied training sets. Here are the two main findings:
A linear power law with a scaling coefficient α = 0.0075 holds roughly up to training set sizes of about 30k images; for training set sizes starting from 60k this slows to a linear power law with a significantly smaller scaling coefficient α = 0.0029. With this slowed scaling law, a training set size of 1.6B images would be required to increase performance by another 1dB (assuming the relation $32.05 \cdot N^{0.0029}$ persists, which is likely to slow down even further).
While slowing already at a few tens of thousands of training images, the scaling laws for super-resolution do not slow as early as those for denoising and compressive sensing (see Fig. 1). This could be partially due to training on image patches of size 128×128 as opposed to size 256×256 used for denoising.
D ADDITIONAL EMPIRICAL SCALING LAWS FOR GAUSSIAN DENOISING
In this Section, we extend our results for Gaussian denoising with a U-Net from Section 3 with two additional setups. Section D.1 considers fixing the noise sampled in each training epoch and Section D.2 investigates reducing the noise level from σz = 25 to σz = 15.
D.1 EMPIRICAL SCALING LAWS FOR DENOISING WITH FIXED NOISE
Our main results for denoising with a U-Net trained end-to-end discussed in Section 3 follow a setup in which the noise per training example is re-sampled in every training epoch. We choose this setup since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a denoising setup in which the noise statistics are unknown, which is the case in some real-world noise removal problems. In such a problem, we would be given pairs of noisy and clean images, and could not synthesize new noisy images from the clean images.
In this section, we follow the same experimental setup from Appx. A.1 to simulate the performance of a U-Net for denoising with fixed noise for up to 100k training images (see Fig. 7). Compared to re-sampling the noise, we observe a drop in performance of about 0.3dB for small training set sizes. The performance difference at 10k images is reduced to 0.2dB resulting in a slightly steeper scaling law for moderate training set sizes. However, at around 10k training images the scaling of the performance of training with fixed noise also starts to flatten as it approaches the performance of re-sampling the noise during training. This indicates that if the noise statistics are unknown, more data is required to achieve the same performance as when they are known. However, in both cases the scaling with training set size slows already down at moderate training set sizes.
D.2 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER NOISE LEVEL
Our results for Gaussian denoising in Section 3 are for a fixed noise level of σz = 25. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with smaller noise level of σz = 15, in order to see how the scaling laws change. The results for both noise levels are depicted in Fig. 8.
We observe an improvement of about 2.3dB in PSNR, which is expected since the irreducible error decreases for smaller noise levels. We also observe that the scaling coefficient for the smaller noise level σz = 15 (i.e., α = 0.0026) is slightly steeper than that for the larger noise level σz = 25 (i.e., α = 0.0019). This coincides with the qualitative behavior of the curves for subspace denoising in Figure 3. Apart from that, the curves are qualitatively similar, in that an initially steep power law is replaced by a slower one at around 6000 training images.
D.3 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER PATCH SIZE
Our results for Gaussian denoising in Section 3 with a U-Net were obtained for a constant training patch size of 256×256 pixels across all network and training set sizes. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller patch size of 128×128 pixels. The results for both patch sizes are depicted in Fig. 9. We observe that in the regime of large training set sizes, which we are primarily interested in, training on N patches of size 256×256 is more beneficial than training on 4N patches of size 128×128. We therefore focus on patch size 256×256 in the main body of this work.
E UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY - SUPPLEMENTARY RESULTS
In this section, we provide additional details on the statements in Section 5 on understanding scaling laws for denoising theoretically, by studying a linear subspace denoising problem, and we provide additional numerical results.
Recall that we consider a linear estimator of the form f_W(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error (normalized by the latent signal dimension d):

$$R(\mathbf{W}) = \frac{1}{d}\,\mathbb{E}\left[\|\mathbf{W}\mathbf{y}-\mathbf{x}\|_2^2\right] = \frac{1}{d}\|(\mathbf{W}-\mathbf{I})\mathbf{U}\|_F^2 + \frac{\sigma_z^2}{d}\|\mathbf{W}\|_F^2. \tag{1}$$
Above, the expectation is over the joint distribution of (x, y), and the second equality follows from using that x = Uc, where c ∼ N(0, I) is Gaussian, and y = x + z, where the noise z ∼ N(0, σ_z^2 I) is Gaussian.
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk defined in equation (1)) is given by $\mathbf{W}^\ast = \frac{1}{1+\sigma_z^2}\mathbf{U}\mathbf{U}^T$. This follows from taking the gradient of the risk (1), setting it to zero, and solving for W. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(\mathbf{W}^\ast) = \sigma_z^2/(1+\sigma_z^2)$.
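The risk identity (1) and the value of R(W*) can be sanity-checked numerically. The following sketch (with arbitrary illustrative dimensions and noise level; not part of the analysis) compares a Monte Carlo estimate of the risk against the right-hand side of equation (1):

```python
# Monte Carlo sanity check of equation (1) and of R(W*) = s^2 / (1 + s^2),
# where s^2 = sigma2 is the noise variance. All sizes here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma2 = 64, 8, 0.5

U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal subspace basis
W_opt = U @ U.T / (1.0 + sigma2)                  # optimal linear estimator W*

def empirical_risk(W, trials=50_000):
    c = rng.standard_normal((d, trials))
    x = U @ c                                     # signals on the subspace
    y = x + np.sqrt(sigma2) * rng.standard_normal((n, trials))
    return np.mean(np.sum((W @ y - x) ** 2, axis=0)) / d

def analytic_risk(W):                             # right-hand side of equation (1)
    return (np.linalg.norm((W - np.eye(n)) @ U, "fro") ** 2
            + sigma2 * np.linalg.norm(W, "fro") ** 2) / d

print(empirical_risk(W_opt))                      # both values should be close to
print(analytic_risk(W_opt))                       # sigma2 / (1 + sigma2) = 1/3
```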
Early-stopped empirical risk minimization. We consider the estimator that applies gradient descent to the empirical risk

$$L(\mathbf{W}) = \sum_{i=1}^{N}\|\mathbf{W}\mathbf{y}_i-\mathbf{x}_i\|_2^2 = \|\mathbf{W}\mathbf{Y}-\mathbf{X}\|_F^2, \tag{2}$$

where $\mathbf{X},\mathbf{Y}\in\mathbb{R}^{n\times N}$ contain the training examples as columns, and that early-stops after k iterations for regularization.
We next discuss the early-stopped estimator Wk in more detail. Fig. 10 numerically demonstrates the regularizing effect of early stopping gradient descent, where W∞ = XY† is the converged learned estimator (see Appx. G, Eq. (5)). We see that regularization is necessary for this estimator to perform well.
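The regularizing effect of early stopping can also be reproduced in a few lines. The sketch below is a toy simulation in the spirit of Fig. 10, with illustrative problem sizes that do not match the regime of Theorem 2; it tracks the risk of W^k over gradient descent iterations and compares it with the converged estimator W∞ = XY†:

```python
# Toy demonstration that early stopping regularizes: the best risk along the
# gradient descent path is typically well below the risk of the converged
# (interpolating) estimator W_inf = X @ pinv(Y).
import numpy as np

rng = np.random.default_rng(1)
n, d, N, sigma2 = 64, 8, 40, 0.5                 # high-dimensional regime N < n
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
X = U @ rng.standard_normal((d, N))              # clean training signals
Y = X + np.sqrt(sigma2) * rng.standard_normal((n, N))

def risk(W):                                     # population risk, equation (1)
    return (np.linalg.norm((W - np.eye(n)) @ U, "fro") ** 2
            + sigma2 * np.linalg.norm(W, "fro") ** 2) / d

eta = 1.0 / np.linalg.norm(Y, 2) ** 2            # stepsize <= 1 / sigma_max(Y)^2
W = np.zeros((n, n))
risks = []
for _ in range(200):
    W -= eta * (W @ Y - X) @ Y.T                 # gradient step on L(W)
    risks.append(risk(W))

W_inf = X @ np.linalg.pinv(Y)                    # converged estimator, Eq. (5)
print("best early-stopped risk:", min(risks))
print("risk of W_inf:          ", risk(W_inf))   # noticeably worse
print("optimal risk R(W*):     ", sigma2 / (1 + sigma2))
```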
We next discuss Theorem 2 and the associated assumptions in more detail. The theorem considers the following regime: (i) the number of training examples obeys (d + nσ_z^2) log(n) ≤ N, (ii) N ≤ ξd/σ_z^2 for an arbitrary ξ, and (iii) N log(N) ≤ n. For this regime, the theorem guarantees that the risk of the optimally early-stopped estimator obeys, with high probability,
$$R(\mathbf{W}^{k_{\mathrm{opt}}}) \le (8+2\xi)R(\mathbf{W}^\ast) + c\left(1+\frac{n\sigma_z^2}{d}\right)\log n\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\xi\sqrt{\frac{(d+\sigma_z^2 n)\log n}{N}}. \tag{3}$$

The theorem looks similar to that for the PCA estimate (Theorem 1), in that the risk is a constant away from the optimal risk, with an error term that becomes small as (d + nσ_z^2)/N becomes small. However, the error bound does not converge to R(W∗) as the number of training examples, N, converges to infinity. This is probably an artifact of our analysis, but it is unclear, at least to us, how to derive a substantially tighter bound. In our analysis (see Appendix G), we balance two errors: one error decreases in k and is associated with the part of the signal projected onto the subspace, and the second error increases with k and is associated with the orthogonal complement of the subspace. We choose the early-stopping time to optimally balance those two terms, which yields the stated bound (3).
Now with regards to the assumptions: Assumption (iii), N log(N) ≤ n, means we are in the high-dimensional regime; we think this is somewhat closer to reality (for example, for denoising a 512×512 image, this would require the number of training examples to be smaller than 250k), but we can derive an analogous bound for the regime N log(N) ≥ n, where the number of training examples is larger than the ambient dimension.
Assumption (i), (d + nσ_z^2) log(n) ≤ N, is relatively mild, as it is necessary for being able to estimate the subspace somewhat accurately; this assumption is also required for the PCA estimate.
Assumption (ii), N ≤ ξd/σ_z^2 for an arbitrary ξ, is not restrictive in that ξ can be arbitrarily large; we make this assumption only so that the theorem can be stated in a convenient way. However, assumption (ii) reveals a shortcoming of Theorem 2, namely that we cannot make the bound go to zero as N → ∞, since increasing ξ increases one term in the bound and decreases another one.
E.1 ADDITIONAL NUMERICAL SIMULATIONS
In this Section we provide further numerical simulations for the PCA subspace estimator and the estimator learned with early stopped gradient descent discussed in Section 5, Theorem 1 and Theorem 2. Similar to Fig. 3, Fig. 11 shows the risks R(W^{k_opt}) and R(W_PCA) as a function of the number of training examples N for varying values of the signal and ambient dimensions d and n, while fixing all other model parameters. In the power-law region we fit linear power laws with negative scaling coefficients α. We observe steeper power laws (larger |α|) for smaller ambient dimensions n and larger signal dimensions d. Also, the scaling coefficients of the learned estimator consistently exceed those of the PCA estimator.
F PROOF FOR THEOREM 1: RISK BOUND FOR PCA SUBSPACE ESTIMATION
We provide a bound on the risk of the estimator f(y) = W_PCA y with W_PCA = τÛÛ^T and τ = 1/(1+σ_z^2). Recall that Û ∈ R^{n×d} contains the singular vectors corresponding to the d leading singular values of YY^T. We define Û⊥ ∈ R^{n×(n−d)} as the orthogonal complement of Û. Starting from the risk expression given in equation (1) we obtain
$$\begin{aligned}
R(\mathbf{W}_{\mathrm{PCA}}) &= \frac{1}{d}\left\|(\tau\hat{\mathbf{U}}\hat{\mathbf{U}}^T-\mathbf{I})\mathbf{U}\right\|_F^2 + \frac{1}{d}\tau^2\sigma_z^2\left\|\hat{\mathbf{U}}\hat{\mathbf{U}}^T\right\|_F^2 \\
&= \frac{1}{d}\left\|\big((\tau-1)\hat{\mathbf{U}}\hat{\mathbf{U}}^T+\hat{\mathbf{U}}_\perp\hat{\mathbf{U}}_\perp^T\big)\mathbf{U}\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{1}{d}(\tau-1)^2\left\|\hat{\mathbf{U}}^T\mathbf{U}\right\|_F^2 + \frac{1}{d}\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{1}{d}(\tau-1)^2\left(d-\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|_F^2\right) + \frac{1}{d}\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|_F^2 + \tau^2\sigma_z^2 \\
&= \big(1-(\tau-1)^2\big)\frac{1}{d}\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|_F^2 + (\tau-1)^2 + \sigma_z^2\tau^2
= \frac{1+2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{1}{d}\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|_F^2 + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\overset{(i)}{\le} \frac{1+2\sigma_z^2}{(1+\sigma_z^2)^2}\left\|\hat{\mathbf{U}}_\perp^T\mathbf{U}\right\|^2 + \frac{\sigma_z^2}{1+\sigma_z^2}
\overset{(ii)}{\le} c\,\frac{1+2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{(d+n\sigma_z^2)\log(2n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\le c\,\frac{(d+n\sigma_z^2)\log(n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2}. 
\end{aligned} \tag{4}$$
Here, inequality (ii) follows from Section H.1, equation (34), and holds in the regime (d + nσ_z^2) log(n) ≤ N, for some constant c and with probability at least 1 − n^{-10} − 3e^{-d} + e^{-n}. This concludes the proof.
G PROOF OF THEOREM 2: RISK BOUND FOR EARLY STOPPED EMPIRICAL RISK MINIMIZATION
This section contains the proof of Theorem 2. The theorem characterizes the performance of the learned, linear estimator W^k that is obtained by applying k iterations of gradient descent with stepsize η, starting at W^0 = 0, to the loss L(W) in equation (2). We start by deriving a closed form expression for the estimator W^k. The gradient of the loss is

$$\nabla_{\mathbf{W}} L(\mathbf{W}) = (\mathbf{W}\mathbf{Y}-\mathbf{X})\mathbf{Y}^T,$$

and thus the iterations of gradient descent are

$$\mathbf{W}^{k+1} = \mathbf{W}^k - \eta(\mathbf{W}^k\mathbf{Y}-\mathbf{X})\mathbf{Y}^T = \mathbf{W}^k(\mathbf{I}-\eta\mathbf{Y}\mathbf{Y}^T) + \eta\mathbf{X}\mathbf{Y}^T.$$
Let $\mathbf{Y} = \mathbf{U}_y\boldsymbol{\Sigma}_y\mathbf{V}_y^T \in \mathbb{R}^{n\times N}$ and $\mathbf{X} = \mathbf{U}_x\boldsymbol{\Sigma}_x\mathbf{V}_x^T \in \mathbb{R}^{n\times N}$ be the singular value decompositions of Y and X, respectively, and assume that the singular values are non-zero and descending, i.e., σ_{y,1} ≥ σ_{y,2} ≥ …. We have, with W^0 = 0 and k ≥ 1, that

$$\begin{aligned}
\mathbf{W}^k &= \eta\mathbf{X}\mathbf{Y}^T\sum_{\ell=0}^{k-1}(\mathbf{I}-\eta\mathbf{Y}\mathbf{Y}^T)^\ell \\
&= \eta\mathbf{X}\mathbf{V}_y\boldsymbol{\Sigma}_y\mathbf{U}_y^T\left(\sum_{\ell=0}^{k-1}(\mathbf{I}-\eta\mathbf{U}_y\boldsymbol{\Sigma}_y^2\mathbf{U}_y^T)^\ell\right) \\
&= \eta\mathbf{X}\mathbf{V}_y\boldsymbol{\Sigma}_y\,\mathrm{diag}\left(\sum_{\ell=0}^{k-1}(1-\eta\sigma_{y,i}^2)^\ell\right)\mathbf{U}_y^T \\
&= \mathbf{X}\mathbf{V}_y\mathbf{D}_k\mathbf{U}_y^T,
\end{aligned}$$

where we defined $\mathbf{D}_k \in \mathbb{R}^{N\times N}$ as a diagonal matrix with i-th diagonal entry given by $(1-(1-\eta\sigma_{y,i}^2)^k)/\sigma_{y,i}$, and where we used the geometric series to obtain

$$\sum_{\ell=0}^{k-1}(1-\eta\sigma_{y,i}^2)^\ell = \frac{1-(1-\eta\sigma_{y,i}^2)^k}{\eta\sigma_{y,i}^2}.$$
Note that for k → ∞ and choosing η such that $1-\eta\sigma_{y,i}^2 < 1$ for all i we get

$$\mathbf{W}^\infty = \mathbf{X}\mathbf{V}_y\boldsymbol{\Sigma}_y^{-1}\mathbf{U}_y^T = \mathbf{X}\mathbf{Y}^\dagger. \tag{5}$$
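The closed-form expression can be checked numerically; the following sketch (with illustrative dimensions, not part of the proof) verifies that k explicit gradient steps agree with X V_y D_k U_y^T up to numerical precision:

```python
# Verify W^k from k gradient steps against the closed form X V_y D_k U_y^T.
import numpy as np

rng = np.random.default_rng(2)
n, d, N, sigma2, k = 32, 4, 20, 0.25, 15
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
X = U @ rng.standard_normal((d, N))
Y = X + np.sqrt(sigma2) * rng.standard_normal((n, N))

eta = 0.9 / np.linalg.norm(Y, 2) ** 2            # ensures eta * sigma_y,i^2 < 1

# explicit gradient descent on L(W) = ||W Y - X||_F^2
W = np.zeros((n, n))
for _ in range(k):
    W = W - eta * (W @ Y - X) @ Y.T

# closed form via the SVD of Y
Uy, sy, VyT = np.linalg.svd(Y, full_matrices=False)
Dk = (1.0 - (1.0 - eta * sy**2) ** k) / sy       # diagonal entries of D_k
W_closed = X @ (VyT.T * Dk) @ Uy.T               # X V_y diag(D_k) U_y^T

print(np.max(np.abs(W - W_closed)))              # ~1e-14, numerical precision
```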
Evaluating the risk from equation (1) at the estimator W^k gives

$$R(\mathbf{W}^k) = \frac{1}{d}\left\|(\mathbf{X}\mathbf{V}_y\mathbf{D}_k\mathbf{U}_y^T-\mathbf{I})\mathbf{U}\right\|_F^2 + \frac{\sigma_z^2}{d}\left\|\mathbf{X}\mathbf{V}_y\mathbf{D}_k\mathbf{U}_y^T\right\|_F^2. \tag{6}$$

To shorten notation we define

$$\gamma := \frac{(d+n\sigma_z^2)\log(n)}{N}, \tag{7}$$

$$\psi := \frac{n\sigma_z^2\log(n)}{N}. \tag{8}$$
We next provide bounds for the two terms on the right-hand-side of equation (6), proven later in this section.
Bound on the first term in equation (6): In Section G.1 we show that, provided N log(N) ≤ n, 9d ≤ N and (d + nσ_z^2) log(n) ≤ N, for some constant c, with probability at least 1 − 2e^{-N/18} − 3n^{-10} − 3e^{-d} − e^{-n} − e^{-N} − 2e^{-n/2}, the following bound holds:

$$\begin{aligned}
\frac{1}{d}\left\|(\mathbf{X}\mathbf{V}_y\mathbf{D}_k\mathbf{U}_y^T-\mathbf{I})\mathbf{U}\right\|_F^2 \le{}& \frac{c}{d}\big(\sigma_z^2 n+\gamma\sigma_{x,\max}^2\big)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2 + \frac{c}{d}\sum_{i=1}^{d}(1-\eta\sigma_{y,i}^2)^{2k} \\
&+ \frac{c}{d}\,\psi\gamma\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2 + c\gamma. 
\end{aligned} \tag{9}$$
Bound on the second term in equation (6): In Section G.2 we show

$$\frac{\sigma_z^2}{d}\left\|\mathbf{X}\mathbf{V}_y\mathbf{D}_k\mathbf{U}_y^T\right\|_F^2 \le \frac{\sigma_z^2}{d}\sum_{i=1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2. \tag{10}$$
With equations (9) and (10) in place, and by splitting up the sum in (10), we can bound the right hand side of equation (6) as

$$\begin{aligned}
R(\mathbf{W}^k) \le{}& \frac{1}{d}\big(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\big)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2 + \frac{c}{d}\sum_{i=1}^{d}(1-\eta\sigma_{y,i}^2)^{2k} \\
&+ c\gamma + \frac{c}{d}\big(\psi\gamma+\sigma_z^2\big)\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2. 
\end{aligned} \tag{11}$$
Since the singular values of Y are in descending order σ_{y,1} ≥ σ_{y,2} ≥ …, this is, for any iteration k, bounded as

$$R(\mathbf{W}^k) \le \big(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\big)\frac{1}{\sigma_{y,d}^2} + c(1-\eta\sigma_{y,d}^2)^{2k} + c\gamma + \frac{c}{d}\big(\psi\gamma+\sigma_z^2\big)\sigma_{x,\max}^2\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2. \tag{12}$$
This is a good bound if we are at an iteration k sufficiently large so that $(1-\eta\sigma_{y,1}^2)^k$ is small.
In (12), the term $(1-\eta\sigma_{y,d}^2)^{2k}$ is decreasing in the number of gradient descent steps k and also decreasing in the stepsize η, as long as ησ_{y,i}^2 ≤ 1 for i = 1, …, d. Further, the term $(1-(1-\eta\sigma_{y,i}^2)^k)^2$ is increasing in k and also increasing in η. The first term corresponds to the signal that we want to fit sufficiently well, whereas the second term corresponds to the noise, of which we want to fit as little as possible. Hence, there exist optimal choices for k and η that trade off the sum of the two terms.
In our setup, the d leading singular values of Y corresponding to the signal are large and concentrate around N(1 + σ_z^2), while the remaining singular values are small and concentrate around Nσ_z^2. Hence, we can apply a single step of gradient descent, k = 1, to already fit a large portion of the signal while minimizing the portion of the noise that is fitted. For that, we choose the stepsize η as large as possible such that ησ_{y,i}^2 ≤ 1 still holds for i = 1, …, d. Next, suppose the following events hold:
$$\mathcal{E}_1 = \big\{\sigma_{y,d+1}^2 \le N\big(\sigma_z^2+\epsilon(1+\sigma_z^2)\big)\big\}, \tag{13}$$

$$\mathcal{E}_2 = \big\{\sigma_{y,d}^2 \ge N(1+\sigma_z^2)(1-\epsilon)\big\}, \tag{14}$$

$$\mathcal{E}_3 = \big\{\sigma_{x,\max}^2 \le 4N\big\}, \tag{15}$$

$$\mathcal{E}_4 = \big\{\sigma_{y,1}^2 \le N(1+\epsilon)(1+\sigma_z^2)\big\}. \tag{16}$$

In Section H.4, we show that

$$\mathbb{P}[\mathcal{E}_3] \ge 1-2e^{-N/8}. \tag{17}$$

In Section H.4, we also show that, provided that N ≥ 3Cϵ^{-2}(d + σ_z^2 n) log n, for some constant C and ϵ ∈ (0, 1),

$$\mathbb{P}[\mathcal{E}_1],\,\mathbb{P}[\mathcal{E}_2],\,\mathbb{P}[\mathcal{E}_4] \ge 1-e^{-d}-e^{-n}-n^{-9}. \tag{18}$$
We next bound the terms in equation (12). As discussed, we set k = 1 and the stepsize as large as possible, i.e., η = 1/(N(1+ϵ)(1+σ_z^2)) ≤ 1/σ_{y,1}^2, which holds on the event E4. Finally, on the event E2 we obtain

$$c(1-\eta\sigma_{y,d}^2)^{2k} \le c\big(1-\eta N(1+\sigma_z^2)(1-\epsilon)\big)^2 = c\left(1-\frac{1-\epsilon}{1+\epsilon}\right)^2 \le c\epsilon^2.$$
Next we bound the sum in equation (12). Towards this goal, we upper bound each term in the sum with its linear approximation at the origin. We compute the derivative at the origin as

$$\lim_{q\to 0}\frac{\partial}{\partial q}\,\frac{1}{q}\big(1-(1-\eta q)^k\big)^2 = (\eta k)^2.$$
Thus, we have

$$\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2 \le \sum_{i=d+1}^{N}k^2\eta^2\sigma_{y,i}^2 \le Nk^2\eta^2\sigma_{y,d+1}^2 \le \frac{\sigma_z^2+\epsilon+\epsilon\sigma_z^2}{(1+\epsilon)^2(1+\sigma_z^2)^2} \le c(\sigma_z^2+2\epsilon)$$

on the event E1, for σ_z^2 ≤ 1, k = 1 and η = 1/(N(1+ϵ)(1+σ_z^2)). Putting this together, on the events E2 and E3 we get the bound
$$\begin{aligned}
R(\mathbf{W}^k) &\le 8\frac{\sigma_z^2}{1+\sigma_z^2} + c\gamma + c(1-\eta\sigma_{y,d}^2)^{2k} + \frac{c}{d}\big(\psi\gamma+\sigma_z^2\big)N\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1-(1-\eta\sigma_{y,i}^2)^k\big)^2 \\
&\le 8\frac{\sigma_z^2}{1+\sigma_z^2} + c\gamma + c\epsilon^2 + \frac{cN}{d}\left(\frac{\sigma_z^2 n\log n}{N}\,\frac{(d+n\sigma_z^2)\log n}{N} + \sigma_z^2\right)(\sigma_z^2+2\epsilon) \\
&\le 8R(\mathbf{W}^\ast) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\epsilon^2 + c\left(\frac{\big((d+n\sigma_z^2)\log n\big)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)(\sigma_z^2+2\epsilon). 
\end{aligned} \tag{19}$$
The bound holds provided 3Cϵ^{-2}(d + σ_z^2 n) log n ≤ N and N log(N) ≤ n, for some constants c, C, and with probability at least 1 − 2e^{-N/8} − 2e^{-N/18} − 5n^{-9} − 5e^{-d} − 2e^{-n} − e^{-N} − 2e^{-n/2}. To maximize the benefit from early stopping, we set ϵ as small as possible subject to the condition 3Cϵ^{-2}(d + σ_z^2 n) log n ≤ N:
$$\epsilon = \sqrt{\frac{3C(d+\sigma_z^2 n)\log n}{N}}. \tag{20}$$
With that, equation (19) becomes

$$R(\mathbf{W}^k) \le 8R(\mathbf{W}^\ast) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\left(\frac{\big((d+n\sigma_z^2)\log n\big)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)\left(\sigma_z^2+\sqrt{\frac{(d+\sigma_z^2 n)\log n}{N}}\right) = 8R(\mathbf{W}$$

1. What is the focus of the paper regarding deep learning-based inverse problems?
2. What are the strengths and weaknesses of the proposed approach, particularly in its empirical results and conclusions?
3. Do you have any concerns regarding the experimental setup and the assumption made in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for future works related to the topic?

Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility

Summary Of The Paper
This work investigated the relationship between the performance of deep learning based inverse problem solvers and the size of the training dataset. Denoising, compressive sensing MRI, and super resolution problems were studied for U-Net and SwinIR. Theoretical analyses were also presented under simplified conditions. The conclusion of this work was that more training images beyond a certain point do not improve performance, once the error induced by learning the signal model is small relative to the error floor, which matches intuition.
Strengths And Weaknesses
Strengths:
Comprehensive empirical results are presented.
Weaknesses:
Experiments seem to be limited to a very small subset of all possible cases, so it is unclear whether the conclusion is well supported by the current experiments (see below for more details).
It is unclear how the proposed conclusion can help researchers in inverse problems. Its usefulness and practical value are not really clear.
The assumption that "Current methods are only trained on a few hundreds or thousands of images as opposed to the millions of examples deep networks are trained on in other domains" seems somewhat biased in my view. Tasks are different, and the way of using one image is different (e.g., one image cannot be divided into two or more independent images for a label in a classification task, but it is common that multiple independent patches are extracted for training in denoising or super resolution tasks). Moreover, see (Brooks et al., 2019), which used 1M training samples for training a denoiser. Thus, it is hard to agree with the motivation of this work.
The analysis with a linear estimator learned with early stopped gradient descent seems quite far from modern deep learning based methods, which are highly non-linear and fully converged. Thus, all the conclusions made in this work seem over-claimed.
Clarity, Quality, Novelty And Reproducibility
This work presented limited experimental results using only U-Net and SwinIR. The trends of these results look similar, but they differ in the details; thus, they may also differ for other networks. Can we say that there is a common phenomenon by looking into two different examples? More comprehensive experiments with many more networks may help to find this common aspect. Moreover, my biggest concern is this question: "Are U-Net or SwinIR scalable?" It may be possible that this work simply investigated two non-scalable networks, and there might be a scalable network for inverse problems. Another possible bad scenario is that the observed trends may not be the result of large training data, but the result of difficult global convergence in high dimensional spaces! Thus, in my view, this work may not be ready to draw the conclusion that the title and the conclusion section claim.
The experiments in this work seem to be limited in many aspects besides the network types. This work covers training set sizes up to 100K, but as mentioned above, there was a work using 1M samples (Brooks et al., 2019). It would be more convincing to cover larger settings than the current ones to show the really interesting regimes. Moreover, the way of increasing the size of networks is important (e.g., increasing depth, width and/or channels), but this work did not discuss this aspect at all. While it was claimed that the optimal parameter number was found for each case, it is unclear whether it is really optimal in terms of how the network size is increased. How about using EfficientNets, since they provide different network sizes?
ICLR | Title
Scaling Laws For Deep Learning Based Image Reconstruction
Abstract
Deep neural networks trained end-to-end to map a measurement of a (noisy) image to a clean image perform excellently for a variety of linear inverse problems. Current methods are only trained on a few hundreds or thousands of images as opposed to the millions of examples deep networks are trained on in other domains. In this work, we study whether major performance gains are expected from scaling up the training set size. We consider image denoising, accelerated magnetic resonance imaging, and super-resolution, and empirically determine the reconstruction quality as a function of training set size, while simultaneously scaling the network size. For all three tasks we find that an initially steep power-law scaling slows significantly already at moderate training set sizes. Extrapolating those scaling laws suggests that even training on millions of images would not significantly improve performance. To understand the expected behavior, we analytically characterize the performance of a linear estimator learned with early stopped gradient descent. The result formalizes the intuition that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
1 INTRODUCTION
Deep neural networks trained to map a noisy measurement of an image or a noisy image to a clean image give state-of-the-art (SOTA) performance for image reconstruction problems. Examples are image denoising (Burger et al., 2012; Zhang et al., 2017; Brooks et al., 2019; Liang et al., 2021), super-resolution (Dong et al., 2016; Ledig et al., 2017; Liang et al., 2021), and compressive sensing for computed tomography (CT) (Jin et al., 2017), and accelerated magnetic resonance imaging (MRI) (Zbontar et al., 2018; Sriram et al., 2020; Muckley et al., 2021; Fabian & Soltanolkotabi, 2022).
The performance of a neural network for imaging is determined by the network architecture and optimization, the size of the network, and the size and quality of the training set.
Significant work has been invested in architecture development. For example, in the field of accelerated MRI, networks started out as convolutional neural networks (CNN) (Wang et al., 2016), which are now often used as building blocks in un-rolled variational networks (Hammernik et al., 2018; Sriram et al., 2020). Most recently, transformers have been adapted to image reconstruction (Lin & Heckel, 2022; Huang et al., 2022; Fabian & Soltanolkotabi, 2022).
However, it is not clear how substantial the latest improvements through architecture design are compared to potential improvements expected by scaling the training set and network size. Contrary to natural language processing (NLP) models and modern image classifiers that are trained on billions of examples, networks for image reconstruction are only trained on hundreds to thousands of example images. For example, the training set of SwinIR (Liang et al., 2021), the current SOTA for image denoising, contains only 10k images, and the popular benchmark dataset for accelerated MRI consists only of 35k images (Zbontar et al., 2018).
In this work, we study whether neural networks for image reconstruction only require moderate amounts of data to reach their peak performance, or whether major boosts are expected from increasing
the training set size. To partially address this question, we focus on three problems: image denoising, reconstruction from few and noisy measurements (compressive sensing) in the context of accelerated MRI, and super-resolution. We pick Gaussian denoising for its practical importance and since it can serve as a building block to solve more general image reconstruction problems well (Venkatakrishnan et al., 2013). We pick MR reconstruction because it is an important instance of a compressive sensing problem, and many problems can be formulated as compressive sensing problems, for example super-resolution and in-painting. In addition, for MR reconstruction the question on how much data is needed is particularly important, since it is expensive to collect medical data.
For the three problems we identify scaling laws that describe the reconstruction quality as a function of the training set size, while simultaneously scaling network sizes. Such scaling laws have been established for NLP and classification tasks, as discussed below, but not for image reconstruction.
The experiments are conducted with a U-Net (Ronneberger et al., 2015) and the SOTA SwinIR (Liang et al., 2021), a transformer architecture. We primarily consider the U-Net since it is widely used for image reconstruction and acts as a building block in SOTA models for image denoising (Brooks et al., 2019; Gurrola-Ramos et al., 2021; Zhang et al., 2021; 2022) and accelerated MRI (Zbontar et al., 2018; Sriram et al., 2020). We also present results for denoising with the SwinIR. The SwinIR outperforms the U-Net, but we find that its scaling with the number of training examples does not differ notably. Our contributions are as follows:
• Empirical scaling laws for denoising. We train U-Nets of sizes 0.1M to 46.5M parameters with training set sizes from 100 to 100k images from the ImageNet dataset (Russakovsky et al., 2015) for Gaussian denoising with 20.17dB Peak-Signal-to-Noise ratio (PSNR). While for the largest training set sizes and network sizes we consider, performance continues to increase, the rate of performance increase for training set sizes beyond a few thousand images slows to a level indicating that even training on millions of images only yields a marginal benefit, see Fig. 1(a).
We also train SOTA SwinIRs of sizes 4M to 129M parameters on the same range of training set sizes, see Fig. 2(a). Although it performs better than the U-Net, its scaling behavior is essentially equivalent, and again after a few thousand images, only marginal benefits are expected. Scaling up its training set and network size did, however, give benefits: the largest model we trained yields new SOTA results on four common test sets by 0.05 to 0.22dB.
• Empirical scaling laws for compressive sensing. We train U-Nets of sizes from 2M to 500M parameters for 4x accelerated MRI with training set sizes from 50 to 50k images from the fastMRI dataset (Knoll et al., 2020b). We again find that beyond a dataset size of about 2.5k, the rate of improvement as a function of the training set size considerably slows, see Fig. 1(c). This indicates that while models are expected to improve by increasing the training set size, we expect that training on millions of images would not significantly improve performance.
• Empirical scaling laws for super-resolution. We also briefly study super-resolution, and, similar to denoising and compressive sensing, find a slowing of the scaling law at moderate dataset sizes. See Appendix C.
• Understanding scaling laws for denoising theoretically. Our empirical results indicate that the denoising performance of a neural network trained end-to-end doesn’t increase as a function of training examples beyond a certain point. This is expected since once we reach a noise specific error floor, more data is not beneficial. We make this intuition precise for a linear estimator learned with early stopped gradient descent to denoise data drawn from a d-dimensional linear subspace. We show that the reconstruction error is upper bounded by d/N plus a noise-dependent error floor level. Once the error induced by learning the signal model, d/N , is small relative to the error floor, more training examples N are not beneficial.
Together, our empirical results show that neural networks for denoising, compressive sensing, and super-resolution applied in typical setups (i.e., Gaussian denoising with 20.17dB PSNR, and multi-coil accelerated MRI with 4x acceleration) already operate in a regime where the scaling laws for the training set size are slowing significantly, and thus even very large increases of the training data are not expected to improve performance substantially. Even relatively modest model improvements, such as those obtained by transformers over convolutional networks, are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
2 RELATED WORK
Scaling laws for prediction problems. Under the umbrella of statistical learning theory convergence rates of 1/N or 1/ √ N have been established for a range of relatively simple models and distributions, see e.g. Wainwright (2019) for an overview.
For deep neural networks used in practice, a recent line of work has empirically characterized the performance as a function of training set size and/or network size for classification and NLP (Hestness et al., 2017; Rosenfeld et al., 2019; Kaplan et al., 2020; Bahri et al., 2021; Zhai et al., 2021; Ghorbani et al., 2021; Bansal et al., 2022). In those domains, the scaling laws persist even for very large datasets, as described in more detail below. In contrast, for image reconstruction, we find that the power-law behavior already slows considerably at relatively small numbers of training examples.
Rosenfeld et al. (2019) find power-law scaling of performance with training set and network size across models and datasets for language modeling and classification. However, because the work fixes either training set or network size, while scaling the other, the scaling laws span only moderate ranges before saturating at a level determined by the fixed quantity and not the problem specific error floor. The papers Kaplan et al. (2020); Bahri et al. (2021); Zhai et al. (2021); Ghorbani et al. (2021); Bansal et al. (2022) including ours scale the dataset size and model size simultaneously, resulting in an improved predictive power of the obtained scaling laws.
Hestness et al. (2017) study models for language and image classification, and attribute deviations from a power-law curve to a lack of fine-tuning the hyperparameters of very large networks. For transformer language models Kaplan et al. (2020) find no deviation from a power-law for up to a training set and network size of 1B images and parameters. Further, Zhai et al. (2021) find the performance for Vision Transformers (Dosovitskiy et al., 2020) for few-shot image classification to deviate from a power-law curve only at extreme model sizes of 0.3-1B parameters and 3B images.
The role of training set size in inverse problems. For image reconstruction and inverse problems in general, we are not aware of work studying scaling laws in a principled manner, covering different
problems and model architectures. But when proposing a new method, several works study performance as a function of training set and/or network size. However, those studies typically only scale the parameter of interest while fixing all other parameters, unlike our work, which scales dataset size and network size together, which is important for identifying scaling laws. Below, we review the training set sizes, and if available their scaling properties, used by the recent SOTA in image denoising and accelerated MRI.
Zhang et al. (2017) report that for DnCNN, a standard CNN, using more than 400 distinct images with data augmentation only yields negligible improvements. Chen et al. (2021) pre-train an image processing transformer (IPT) of 115.5M network parameters on ImageNet (1.1M distinct images), and report the performance after fine-tuning to a specific task as a function of the size of the pretraining dataset. IPT's performance for denoising is surpassed by the latest SOTA in the form of the CNN-based DRUnet (Zhang et al., 2021), the transformer-based SwinIR (Liang et al., 2021), and the Swin-Conv-Unet (Zhang et al., 2022), a combination of the two. Those models have significantly fewer network parameters and were trained on a training set consisting of only ∼10k images, leaving the role of training set size in image denoising open.
The number of available training images for accelerated MRI is limited to few publicly available datasets. Hence, current SOTA (Sriram et al., 2020; Fabian & Soltanolkotabi, 2022) are trained on the largest datasets available (the fastMRI dataset), consisting of 35k images for knee and 70k for brain reconstruction (Zbontar et al., 2018; Muckley et al., 2021).
Adaptations of the MLP-Mixer (Tolstikhin et al., 2021) and the Vision Transformer (Dosovitskiy et al., 2020) to MRI (Mansour et al., 2022; Lin & Heckel, 2022) ablate their performance as a function of the training set size and find that non-CNN based approaches require more data to achieve a similar performance, but potentially also benefit more from even larger datasets. However, those experiments are run with fixed model sizes and only 3-4 realizations of the training set size; thus it is unclear when performance saturates.
3 EMPIRICAL SCALING LAWS FOR DENOISING
In this section, we consider the problem of estimating an image x from a noisy observation y = x+z, where z is additive Gaussian noise z ∼ N (0, σ2zI). Neural networks trained end-to-end to map a noisy observation to a clean image perform best, and outperform classical approaches like BM3D (Dabov et al., 2007) and non-local means (Buades et al., 2005) that are not based on training data.
Datasets. To enable studying learned denoisers over a wide range of training set sizes we work with the ImageNet dataset (Russakovsky et al., 2015). Its training set contains 1.3M images of 1000 different classes. We reserve 20 random classes for validation and testing. We design 10 training set subsets SN of sizes N ∈ [100, 100000] (see Fig. 1(a)) and with Si ⊆ Sj for i ≤ j. To make the distributions of images between the subsets as homogeneous as possible, the number of images from
different classes within a subset differ by at most one. We generate noisy images by adding Gaussian noise with noise level σz = 25 (pixels in the range from 0 to 255), i.e., 20.17dB in PSNR.
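For concreteness, the following snippet sketches this noise model and verifies the stated input PSNR of 20 · log10(255/25) ≈ 20.17dB; the constant image is a placeholder and the seed is arbitrary:

```python
# Additive Gaussian noise with standard deviation 25 on the [0, 255] range,
# and the corresponding PSNR of the noisy input relative to the clean image.
import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 25.0, seed: int = 0):
    rng = np.random.default_rng(seed)
    return img + rng.normal(0.0, sigma, size=img.shape)

def psnr(clean: np.ndarray, noisy: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((clean - noisy) ** 2)
    return 10.0 * np.log10(peak**2 / mse)

clean = np.full((256, 256), 128.0)          # placeholder image
noisy = add_gaussian_noise(clean)
print(f"{psnr(clean, noisy):.2f} dB")       # close to 20.17
```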
Model variants and training. We train a CNN-based U-Net, a detailed description of the model architecture is in Appx. A.1. We also train the Swin-Transformer (Liu et al., 2021) based SwinIR model (Liang et al., 2021) that achieves SOTA for a variety of image reconstruction tasks, including denoising. This model is interesting since transformers scale well with the number of training examples for other domains, e.g., for image classification pre-training (Dosovitskiy et al., 2020).
For U-Net and SwinIR we vary the number of network parameters as depicted in Fig. 1 (b) and Fig. 2 (b) respectively. Appx. A.1 and A.3 contain detailed descriptions on how the models are trained and the network size is adapted.
Results and discussion. Fig. 1(a) and Fig. 2(a) show the reconstruction performances of the best U-Net and SwinIR respectively over all considered network sizes. Our main findings are as follows.
A training set size of around 100 is already sufficient to train a decent image denoiser. Beyond 100, we can fit a linear power law to the performance of the U-Net with a scaling coefficient α = 0.0048 that approximately holds up to training set sizes of about 6k images. Beyond 6k, we can fit a second linear power law with significantly smaller scaling coefficient α = 0.0019.
Similarly, for the SwinIR we can fit a power law with coefficient α = 0.0050 in the regime of little training data and a power law with significantly smaller coefficient α = 0.0017 in the regime of moderate training data, thus the two architectures scale essentially equivalently.
While denoising benefits from more training examples, the drop in scaling coefficient indicates that scaling from thousands to millions of images is only expected to yield relatively marginal performance gains. While the gains are small and we expect gains from further scaling to be even smaller, our largest SwinIR trained on 100k images yields a new SOTA for Gaussian color image denoising. On four common test sets it achieves an improvement between 0.05 to 0.22dB, see Appx. B. Visualizations of reconstructions from models along the curves in Fig. 1(a) and Fig. 2(a) can be found in Fig. 4, Appx. A.
Comparing the U-Net scaling to SwinIR scaling (Fig. 2(a)) we see that the effect of large training sets on the performance of the U-Net can not make up for the improved modeling error that stems from the superior network architecture of the SwinIR. In fact, the simulated scaling coefficient α = 0.0019 predicts a required training set size of about 150M images for U-Net to achieve the best performance of the SwinIR, and that is an optimistic prediction since it assumes that the scaling does not slow down further for another 3 orders of magnitude. Thus, model improvements such as those obtained by transformers over convolutional networks are larger than what we expect from scaling the number of training examples from tens of thousands to millions.
An interesting question is whether different noise levels lead to different data requirements, as indicated by a different scaling behavior. We investigate the performance of U-Net for a smaller noise level σz = 15 in Appx. D.2. While the results are still preliminary, the qualitative scaling behavior is similar; the differences are that the scaling law pertaining to smaller noise is slightly steeper, and overall the performance improves by about 1.3dB in PSNR.
Finally, note that for our results we choose to re-sample the noise per training image in every training epoch, since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is more similar to a setup in which the noise statistics are unknown like in real-world noise removal. See Appx. D.1 for additional results where the noise is fixed. We found that compared to re-sampling the noise the performance of a U-Net drops by 0.3dB for small training set sizes. As the training set size increases the performance approaches the one of re-sampling the noise resulting in a steeper scaling law that flattens at slightly larger training set sizes.
Robustness of our finding of slowing scaling laws. Our main finding for denoising is that the scaling laws slow at relatively small numbers of training data points (i.e., the scaling coefficient becomes very small). We next argue why we think that this is a robust finding.
In principle it could be that the networks are not sufficiently large for the performance to improve further as the dataset size increases. However, for the U-Net and training set sizes of 10k and 30k the
performance increase already slows, and for those training set sizes, increasing the network size does not improve performance, as shown in Fig. 1(b). For smaller network sizes the curves in Fig. 1(b) first increase and then decrease indicating that we found network sizes close to the optimum.
The parameter scaling of the SwinIR in Fig. 2(b) indicates that for training set sizes of 10k and 100k the performance still benefits from larger networks. Hence, the power law for those training set sizes in Fig. 2(a) can become slightly steeper once we further increase the network size. However, for the slope of the flat power law in (a) to reach the slope of the steep power law, the parameter scaling in (b) for 100k training images would need to hold for another 2 orders of magnitude, i.e. about 12B instead of the currently used 129M network parameters, which is very unlikely considering that the current network sizes already suffice to saturate the performance for small/moderate training set sizes.
Finally, it could be that with higher quality or higher diversity of the data, the denoising performance would further improve. In our experiments, we increase the training set sizes by adding more images from the same classes of ImageNet, and we continue to test on different classes. Thus, it could be that adding training examples from a different, more diverse data source leads to less of a slowing of the scaling law in Fig. 1(a). We test this hypothesis by scaling our training set with 3k images to 10k images not by adding more images from ImageNet, but images from the datasets that are used to train the original SwinIR, i.e., we add all images from DIV2K (Agustsson & Timofte, 2017), Flickr2K (Timofte et al., 2017) and BSD500 (Arbeláez et al., 2011), and add the remaining images from WED (Ma et al., 2017). Keeping all other hyperparameters fixed, the U-Net obtains for this dataset a PSNR of 32.239dB, which is slightly worse than the 32.252dB it achieves with our training set of 10k images only from ImageNet. Hence, we reject the hypothesis that the drop in scaling coefficients can be explained by how the training sets are designed in our experiment.
4 EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING
We consider compressive sensing (CS) to achieve 4x accelerated MRI. MRI is an important medical imaging technique for its non-invasiveness and high accuracy. Due to the physics of the measurement process, MRI is inherently slow. Accelerated MRI aims at significantly reducing the scan time by undersampling the measurements. In addition, MRI scanners rely on parallel imaging, which uses multiple receiver coils to simultaneously collect different measurements of the same object.
This leads to the following reconstruction problem. We are given measurements y_i ∈ C^m of the image x ∈ C^n as y_i = M F S_i x + noise_i for i = 1, …, C, and our goal is to reconstruct the image from those measurements. Here, C is the number of receiver coils, the diagonal matrices S_i ∈ C^{n×n} are the sensitivity maps modelling the signal strength perceived by the i-th coil, F ∈ C^{n×n} is the discrete Fourier transform (DFT), and M ∈ C^{m×n} is a binary mask implementing the undersampling. Classical CS approaches (Lustig et al., 2008) first estimate the sensitivity maps with a method such as ESPIRiT (Uecker et al., 2014), and second estimate the unknown image by solving a regularized optimization problem, such as total-variation norm minimization. Recently, deep learning based methods have been shown to significantly outperform classical CS methods due to their ability to learn more complex and accurate signal models from data (Knoll et al., 2020a; Muckley et al., 2021).
Datasets. To explore the performance of learning based MR reconstruction over a wide range of training set sizes, we use the fastMRI multi-coil brain dataset (Knoll et al., 2020b), which is the largest publicly available dataset for MRI. The dataset consists of images of different contrasts. To ensure that the statistics of the dataset are as homogeneous as possible, we take the subset of the dataset corresponding to a single contrast resulting in 50k training images. We design training set subsets SN of size N ∈ [50, 50000] (see Fig. 1 (c)) with Si ⊆ Sj for i ≤ j. For more information on selection, division and subsampling of the dataset see Appx. A.2.
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1, but with 4 blocks per encoder/decoder. The network is trained end-to-end to map a coarse reconstruction x̂_CR to the ground truth image x. The coarse reconstruction is obtained as

$$\hat{\mathbf{x}}_{\mathrm{CR}} = \left(\sum_{i=1}^{C}\left|\mathbf{F}^{-1}\mathbf{y}_{i,\mathrm{ZF}}\right|^2\right)^{1/2},$$

where F^{-1} is the inverse DFT, and y_{i,ZF} are the undersampled measurements of the i-th coil, where missing entries are filled with zeros. We vary the number of
network parameters as depicted in Fig. 1 (d). Appx. A.2 contains detailed descriptions on how the model is trained and the network size is adapted.
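The following numpy sketch illustrates the coarse reconstruction: a zero-filled inverse DFT per coil followed by a root-sum-of-squares combination. The image, coil sensitivities, and mask here are synthetic stand-ins, and noise is omitted for brevity:

```python
# Toy multi-coil forward model and the coarse (zero-filled, root-sum-of-
# squares) reconstruction x_CR defined above. All quantities are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n1, n2, C = 64, 64, 8
x = rng.standard_normal((n1, n2))                     # ground-truth image (toy)
S = rng.standard_normal((C, n1, n2)) * 0.3 + 1.0      # toy sensitivity maps
mask = np.zeros((n1, n2)); mask[:, ::4] = 1.0         # 4x equispaced undersampling

# forward model: y_i = M F S_i x  (noise omitted)
y = mask * np.fft.fft2(S * x[None])                   # shape (C, n1, n2)

# coarse reconstruction: root-sum-of-squares of the zero-filled coil images
x_cr = np.sqrt(np.sum(np.abs(np.fft.ifft2(y)) ** 2, axis=0))
print(x_cr.shape)                                     # (64, 64)
```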
Results and discussion. For each training set size, Fig. 1 (c) shows the reconstruction performance in structural similarity (SSIM) of the best model over all simulated network sizes. There are two main findings. First, we can fit a linear power law with a scaling coefficient α = 0.0072 that holds up to training set sizes of about 2.5k images. Second, for training set sizes starting from 5k we can fit a second linear power law, but with a significantly smaller scaling coefficient α = 0.0026.
Similar to denoising in Section 3, we conclude that while accelerated MRI slightly benefits from more training examples, the drop in scaling coefficient indicates that it is unlikely that even training set sizes of the order of hundreds of millions of images result in substantial gains. Visualizations of reconstructions from models along the curve in Fig. 1 (c) can be found in Fig. 5, Appx. A.2.
Robustness of our finding of slowing scaling laws. Next, we argue why the drop in scaling coefficient for accelerated MRI is a robust finding.
In Fig. 1 (d) we demonstrate that the network size does not bottleneck the performance of our largest training sets. For each training set size the performance as a function of the number of network parameters is relatively flat. Even small training sets benefit from large networks before their performance slightly decays for very large networks. Hence, we expect that for large training sets the performance as a function of network parameters would not increase significantly before decaying.
Finally, is there a different type of training data that would improve the scaling coefficient? We don't think so, since all experiments are for the very specific task of reconstructing brain images of one particular contrast, which means that the examples that we add to increase the size of the training sets are already the data most suitable to solve this task.
5 UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY
We study a simple linear denoiser that is trained end-to-end to reconstruct a clean image from a noisy observation, in order to understand how the error as a function of the number of training examples for inverse problems is expected to look.
We define a joint distribution over a signal and the corresponding measurement (x,y) as follows. Consider a d < n dimensional subspace of Rn parameterized by the orthonormal basis U ∈ Rn×d. We draw a signal approximately uniformly from the subspace x = Uc, where c ∼ N (0, I), and an associated noisy measurement as y = x+ z, where z ∼ N (0, σ2zI) is Gaussian noise. The subspace is unknown, but we are given a training set {(x1,y1), . . . , (xN ,yN )}, consisting of examples drawn iid from the joint distribution over (x,y).
We assume that the signal lies in a low dimensional subspace. Assuming that data lies in a low-dimensional subspace or, more generally, a union of low-dimensional subspaces, is common; for example, it underlies the denoising of natural images via wavelet thresholding (Donoho & Johnstone, 1995; Simoncelli & Adelson, 1996; Chang et al., 2000). Mohan et al. (2020) found that even deep learning based denoisers implicitly perform a projection onto an adaptively-selected low-dimensional subspace that captures the features of natural images.
We consider a linear estimator of the form f_W(y) = Wy, and measure performance in terms of the expected mean-squared reconstruction error, defined as $R(\mathbf{W}) = \frac{1}{d}\mathbb{E}\left[\|\mathbf{W}\mathbf{y}-\mathbf{x}\|_2^2\right]$, where the expectation is over the joint distribution of (x, y).
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk R) is given by $\mathbf{W}^\ast = \frac{1}{1+\sigma_z^2}\mathbf{U}\mathbf{U}^T$. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(\mathbf{W}^\ast) = \sigma_z^2/(1+\sigma_z^2)$.
An estimator based on subspace estimation. The optimal estimator W* requires knowledge of the unknown subspace. We can learn the subspace from the noisy data by performing principal component analysis on Y = [y_1, …, y_N]. Specifically, we estimate the subspace as the d leading singular vectors of the empirical covariance matrix YY^T ∈ R^{n×n}, denoted by Û ∈ R^{n×d}. If the measurement noise is zero (i.e., σ_z^2 = 0), Û is an orthonormal basis for the d-dimensional subspace U provided we observe at least d many linearly independent signals, which occurs with probability one if we draw data according to the model defined above, and if the number of training examples obeys N ≥ d. There is a vast literature on PCA; see Vershynin (2011; 2018); Rudelson & Vershynin (2010); Tropp (2012) for tools from high-dimensional probability to analyze the PCA estimate.
Now consider the estimator $\mathbf{W}_{\mathrm{PCA}}\mathbf{y}$ with $\mathbf{W}_{\mathrm{PCA}} = \frac{1}{1+\sigma_z^2}\hat{\mathbf{U}}\hat{\mathbf{U}}^T$, based on N noisy training points. The estimator assumes knowledge of the noise variance, but it is not difficult to estimate it relatively accurately. The following result, proven in Appx. F, characterizes the associated risk. Theorem 1. Suppose that the number of training examples obeys (d + nσ_z^2) log(n) ≤ N. For a numerical constant c, with probability at least 1 − n^{-10} − 3e^{-d} + e^{-n}, the risk of the PCA-estimate is bounded by

$$R(\mathbf{W}_{\mathrm{PCA}}) \le R(\mathbf{W}^\ast) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N}.$$
Thus, as long as the number of training examples N is sufficiently large relative to (d+ nσ2z), the risk of the PCA-estimator is close to the risk of the optimal estimator.
Estimator learned end-to-end. We now consider an estimator learned end-to-end, by applying gradient descent to the empirical risk $L(\mathbf{W}) = \sum_{i=1}^N \|\mathbf{W}\mathbf{y}_i-\mathbf{x}_i\|_2^2$, and we regularize via early-stopping the gradient descent iterations. The risk of the estimate after k iterations of gradient descent, W^k, is bounded by the next result. This estimator mimics the supervised training we consider throughout this section, with the difference that here we consider a simple linear estimator, whereas in the previous sections we trained a neural network. Theorem 2. Let W^k be the matrix obtained by applying k iterations of gradient descent with stepsize η, starting at W^0 = 0, to the loss L(W). Consider the regime where the number of training examples obeys (d + nσ_z^2) log(n) ≤ N ≤ ξd/σ_z^2 for an arbitrary ξ, and N log(N) ≤ n. For an appropriate choice of the stepsize η, there exists an optimal early stopping time k_opt at which the risk of the estimator f_W(y) = W^{k_opt} y is upper-bounded, with probability at least 1 − 2e^{-N/8} − 2e^{-N/18} − 5n^{-9} − 5e^{-d} − 2e^{-n} − e^{-N} − 2e^{-n/2}, by

$$R(\mathbf{W}^{k_{\mathrm{opt}}}) \le (8+2\xi)R(\mathbf{W}^\ast) + c\left(1+\frac{n\sigma_z^2}{d}\right)\log n\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\xi\sqrt{\frac{(d+\sigma_z^2 n)\log n}{N}},$$
where c is a numerical constant.
The proof is provided in Appx. G. Similar to Thm. 1, the first term in the bound corresponds to the noise-dependent error floor, and the latter two terms decrease in (d + nσ_z^2)/N and represent the error induced by learning the estimator with a finite number of training examples. Once this error is small relative to the error floor, more training examples do not improve performance. See Appx. E for a more detailed discussion of the estimator and the assumptions of Thm. 2.
Discussion. In Fig. 3, we plot the risk as a function of the training set size for the early-stopped empirical risk minimization (ERM) and the PCA based estimator. Starting at a training set size of N = 100, we see that the risk minus the optimal risk follows approximately a power law, i.e., log(R(W) − R(W*)) ≈ −α log(N), as suggested by Thms. 1 and 2. In practice, however, we don't know the risk of the optimal estimator, R(W*). Therefore, in the second row of Fig. 3 and throughout our empirical results, we plot the risk log(R(W)) as a function of the number of training examples log(N) and distinguish different regions by fitting approximated scaling coefficients α to log(R(W)) ≈ −α log(N). In our theoretical example, we can identify three regions, coined in Hestness et al. (2017): the small data region, in which the training set size does not suffice to learn a well-performing mapping; the power-law region, in which the performance decays approximately as N^α with α < 0; and a problem-specific irreducible error region, which is R(W*) here. Note that the power-law coefficients in the risk plot R(W) are smaller than those when plotting R(W) − R(W*), and are only approximate scaling coefficients. Already in this highly simplified setup of an inverse problem, the true scaling coefficients heavily depend on model parameters such as the noise variance, as can be seen in Fig. 3. The scaling coefficients further vary with the signal dimension d and ambient dimension n (see Appx. E.1).
6 CONCLUSION
We found that the performance improvement of deep learning based image reconstruction as a function of the number of training examples slows already at moderate training set sizes, indicating that only marginal gains are expected beyond a few thousand examples.
Limitations. This finding is based on studying three different reconstruction problems (denoising, super-resolution, and compressive sensing), two architectures (U-net and SwinIR), and for each setup we extensively optimized hyperparameters and carefully scaled the networks. Scaling gave new state-of-the-art results on four denoising datasets, which provides some confidence in the setup.
Moreover, our statements necessarily pertain to the architectures and metrics we study. It is possible that other architectures, or another way of scaling architectures, can yield larger improvements. It is also widely acknowledged that image quality metrics (such as SSIM and PSNR) do not fully capture the image quality perceived by humans, and it could be that more training data yields improvements in image quality that are not captured well by SSIM and PSNR.
Most importantly, our findings pertain to standard in-distribution evaluation, i.e., the test images are from the same distribution as the training images. To achieve robustness against testing on images out of the training distribution recent work indicates a positive effect of larger and more diverse training sets (Miller et al., 2021; Darestani et al., 2022; Nguyen et al., 2022). Thus it is possible that the out-of-distribution performance of image reconstruction methods improves when trained on a larger and a more diverse dataset, even though the in-distribution performance does not improve further.
Future research. We focus on supervised image reconstruction methods. For image denoising and other image reconstruction problems, self-supervised approaches are also performing very well (Laine et al., 2019; Wang et al., 2022; Zhou et al., 2022), even though supervised methods perform best if clean images are available. It would be very interesting to investigate the scaling laws for self-supervised methods; perhaps more training examples are required to achieve the performance of a supervised setup. We leave this for future work.
REPRODUCIBILITY
The repository at https://github.com/MLI-lab/Scaling_Laws_For_Deep_Learning_Based_Image_Reconstruction contains the code to reproduce all results in the main body of this paper.
ACKNOWLEDGMENTS
The authors acknowledge support by the Institute of Advanced Studies at the Technical University of Munich, the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 456465471, 464123524, the DAAD, and the German Federal Ministry of Education and Research and the Bavarian State Ministry for Science and the Arts. The authors of this work take full responsibility for its content.
A DETAILS OF THE EXPERIMENTAL SETUPS
A.1 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a U-Net presented in Fig.1 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 1 (a). In these examples the improvement in perceived image quality from increasing the training set size from 100 to 1000 is larger than from increasing from 1000 to 10000 or from 10000 to 100000. This correlates with our quantitative findings in Fig. 1 (a).
Next, we describe the experimental details. We train U-Nets with two blocks in the encoder and decoder part respectively and skip connections between blocks. Each block consists of two convolutional layers with LeakyReLU activation and instance normalization (Ulyanov et al., 2017) after every layer, where the number of channels is doubled (halved) after every block in the encoder (decoder). The downsampling in the encoder is implemented as average pooling and the upsampling in the decoder as transposed convolutions. As proposed by Zhang et al. (2017) we train a residual denoiser that learns to predict y − x instead of directly predicting x, which improves performance. We scale the network size by increasing the width of the network by scaling the number of channels per (transposed) convolutional layer. For denoising we trained U-Nets of 7 different sizes. We vary the number of channels in the first layer in {16, 32, 64, 128, 192, 256, 320}, which corresponds to {0.1, 0.5, 1.9, 7.4, 16.7, 30.0, 46.5} million parameters. We do not vary the depth, since this would change the dimension of the informational bottleneck in the U-Net and thus change the model family itself.
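For illustration, a minimal PyTorch sketch of this U-Net variant is given below. Padding, kernel sizes, the presence of a bottleneck block, and similar details are our assumptions where the text leaves them open; this is not the exact implementation used in the experiments:

```python
# Sketch of a small U-Net with two encoder/decoder blocks, LeakyReLU +
# instance norm after every convolution, average-pool downsampling,
# transposed-convolution upsampling, and a residual (noise-predicting) output.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.LeakyReLU(0.2),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.LeakyReLU(0.2),
    )

class SmallUNet(nn.Module):
    def __init__(self, width=64):              # `width` = channels of the first layer
        super().__init__()
        self.enc1, self.enc2 = block(1, width), block(width, 2 * width)
        self.pool = nn.AvgPool2d(2)
        self.mid = block(2 * width, 4 * width)  # bottleneck block (our assumption)
        self.up2 = nn.ConvTranspose2d(4 * width, 2 * width, 2, stride=2)
        self.dec2 = block(4 * width, 2 * width)
        self.up1 = nn.ConvTranspose2d(2 * width, width, 2, stride=2)
        self.dec1 = block(2 * width, width)
        self.out = nn.Conv2d(width, 1, 1)

    def forward(self, y):
        e1 = self.enc1(y)
        e2 = self.enc2(self.pool(e1))
        m = self.mid(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(m), e2], dim=1))   # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return y - self.out(d1)                 # residual: stack predicts y - x

x_hat = SmallUNet(width=16)(torch.randn(1, 1, 64, 64))  # denoised estimate
```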
The exact training set sizes we consider are {0.1, 0.3, 0.6, 1, 3, 6, 10, 30, 60, 100} thousand images from ImageNet. We center crop the training images to 256 × 256 pixels. We also tried a smaller patch size of 128× 128 pixels, but larger patches showed to have a better performance in the regime of large training set sizes, which is why the results presented in the main body are for patch size 256× 256. For a comparison between the scaling behavior of the two different patch sizes see Fig. 9, Appx. D.3.
We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. For validation and testing we use 80 and 300 images respectively taken from 20 random classes that are not used for training.
We use mean-squared-error loss and the Adam (Kingma & Ba, 2014) optimizer with β1 = 0.9, β2 = 0.999. For moderate training set sizes up to 3000 images, we find that heuristically adjusting the initial learning rate with the help of an automated learning rate annealing performs well. To this end, we start with a learning rate of 10^-4 and increase after every epoch by a factor of 2 until the validation loss does not improve for 3 consecutive epochs. We then load the model checkpoint from the learning rate that was still performing well and continue with that learning rate. We observe that this scheme typically picks an initial learning rate reduced by a factor of 2 for every increase in the number of channels C. For training set sizes larger than 3000, we directly apply this rule to pick an initial learning rate without annealing, as we found this to give slightly better results. In particular, for numbers of channels C = {128, 192, 256, 320} we used initial learning rates η = {0.0032, 0.0016, 0.0008, 0.0004}. For small training sets up to 1000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10 and found that further increasing the batch size does not improve performance.
We do not put a limit on the amount of compute for training. We use an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 10 epochs, or 6 epochs for training set sizes starting from 6000 images. Once the learning rate drops to 10^-5 we observe near to no gains in validation loss and stop the training after 10 additional epochs.
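The decay-and-stop rule can be summarized by the following plain-Python sketch; the hyperparameters are as stated above, while the bookkeeping details are our own:

```python
# Halve the learning rate when validation PSNR has not improved by min_delta
# for `patience` epochs; once the learning rate falls below lr_floor, train
# `extra_epochs` more epochs and then stop.
class PlateauDecay:
    def __init__(self, lr, factor=0.5, patience=10, min_delta=1e-3,
                 lr_floor=1e-5, extra_epochs=10):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.min_delta, self.lr_floor, self.extra = min_delta, lr_floor, extra_epochs
        self.best, self.bad_epochs, self.countdown = -float("inf"), 0, None

    def step(self, val_psnr):
        """Update after one epoch; returns False once training should stop."""
        if val_psnr > self.best + self.min_delta:
            self.best, self.bad_epochs = val_psnr, 0
        else:
            self.bad_epochs += 1
        if self.bad_epochs >= self.patience:
            self.lr *= self.factor
            self.bad_epochs = 0
            if self.lr < self.lr_floor and self.countdown is None:
                self.countdown = self.extra
        if self.countdown is not None:
            self.countdown -= 1
            return self.countdown > 0
        return True

sched = PlateauDecay(lr=3.2e-3)
# inside the training loop: if not sched.step(val_psnr): break
```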
For training set sizes up to 1000 we train 3 random seeds and pick the best run. As the variance between runs compared to the gain in performance between different training set sizes decreases with increasing training set size we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1800 GPU hours for the experiments in Fig. 1(a),(b).
A.2 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR COMPRESSIVE SENSING WITH A U-NET
In this Section, we give a detailed description of the experimental setup that led to our results for compressive sensing MRI presented in Fig.1 (c),(d) and Section 4.
In addition, Fig. 5 shows examples of reconstructions from different models along the performance curve in Fig. 1 (c). In these examples the improvement in perceived image quality from increasing the training set size from 500 to 2500 is larger than from increasing from 2500 to 10000 or from 10000 to 50000. This correlates with our quantitative findings in Fig. 1 (c).
The first row in Fig. 5 shows an example in which all models, including the one trained on the largest training set, fail to recover a fine detail. This is a known problem in accelerated MRI and has been documented in both editions of the fastMRI challenge (Knoll et al., 2020a; Muckley et al., 2021), in which, for all methods, examples could be found where fine details were not recovered. However, the question remains whether more training data or better models would help, or whether the details are simply not recoverable, since the information is lost due to the large undersampling factors considered in the fastMRI challenges and in this work.
Next, we describe the experimental details. For compressed sensing in the context of accelerated MRI we trained U-Nets of 14 different sizes. We vary the number of channels in the first layer in {16, 32, 48, 64, 96, 112, 128, 144, 160, 176, 192, 208, 224, 256}, which corresponds to {2, 8, 18, 31, 70, 95, 124, 157, 193, 234, 279, 327, 380, 496} million network parameters. The exact training set sizes we consider are {0.05, 0.25, 0.5, 1, 2.5, 5, 10, 25, 50} thousand AXT2-weighted images from the fastMRI multi-coil brain dataset (Zbontar et al., 2018), where AXT2 corresponds to all images of one type of contrast. We focused only on images of this type to make the statistics of our datasets as homogeneous as possible. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples. We use 4732 and 730 additional images for testing and validation.
We consider an acceleration factor of 4, meaning that we only measure 25% of the information. We obtain the 4-times undersampled measurements by masking the fully sampled measurement y with an equispaced mask with an 8% center fraction, meaning that we take all measurements in the center of y and take the remaining ones at equispaced intervals.
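A minimal sketch of such a mask in NumPy is given below; the official fastMRI implementation additionally adjusts the spacing of the equispaced lines so that the overall acceleration is exact, which we omit here for brevity:

```python
import numpy as np

def equispaced_mask(num_cols, acceleration=4, center_fraction=0.08):
    """Boolean mask over k-space columns: fully sampled center + equispaced lines."""
    mask = np.zeros(num_cols, dtype=bool)
    num_center = int(round(num_cols * center_fraction))
    pad = (num_cols - num_center) // 2
    mask[pad:pad + num_center] = True   # fully sample the low frequencies
    mask[::acceleration] = True         # equispaced lines elsewhere
    return mask
```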
We use structural similarity (SSIM) loss and RMSprop optimizer with α = 0.99 as this is the default in the fastMRI repository (Zbontar et al., 2018) and we found no improvement by replacing it with Adam. We do not put a limit on the amount of compute invested into training. We deploy an automated learning rate decay that starts at a learning rate of 10−3 and decays by a factor 0.1 if the validation SSIM has not improved by at least 10−4 for 5 epochs. Once the learning rate drops to 10−6 we stop the training after 10 additional epochs. Only for the largest training set sizes 25k,50k we found that an additional drop to 10−7 resulted in further performance gains. We use a batch size of 1.
For training set sizes up to 5k we train three models with random seeds and pick the best. For larger training set sizes we only run one seed, since the variance between runs decreased.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 4250 GPU hours for the experiments in Fig. 1 (c),(d).
A.3 EXPERIMENTAL DETAILS FOR EMPIRICAL SCALING LAWS FOR DENOISING WITH THE SWINIR
In this Section, we give a detailed description of the experimental setup that led to our results for Gaussian denoising with a SwinIR presented in Fig.2 (a),(b) and Section 3.
In addition, Fig. 4 shows examples of reconstructions from different models along the performance curve in Fig. 2 (a). Although the best SwinIR clearly outperforms the best U-Net in terms of PSNR, it is difficult for the naked eye to notice large differences in the quality of the reconstructions. This indicates that for Gaussian denoising both models already operate in a regime where the remaining improvements are only marginal.
Next, we describe the experimental details. To obtain Fig. 2 we train the SwinIR for color denoising from Liang et al. (2021) on the same training sets mentioned in Appx. A.1 with {0.1, 0.3, 1, 10, 100} thousand images from ImageNet. Instead of center cropping the training images to 256 × 256 we have to crop to 128 × 128 pixels, as larger input patches would make it computationally infeasible for us to train large versions of the SwinIR. The largest SwinIR alone took over 2 months to train on 4 NVIDIA A40 GPUs.
We train 4 different network sizes with {3.7, 11.5, 41.8, 128.9} million parameters. We denote the four network sizes as small(S)/middle(M)/large(L)/huge(H).
The training details and network configurations are as follows. The default SwinIR for denoising (Liang et al., 2021) was proposed for a training set size of about 10k images, 11.5M network parameters and was trained with a batch size of 8 for T =1280 epochs, where the learning rate is halved at [0.5T ,0.75T , 0.875T , 0.9375T ] epochs. We keep the learning rate schedule but adjust the maximal number of epochs T according to the training set size. Table 1 shows batch size and maximal number of epochs for every experiment in Fig. 2. We did not optimize over the choice of the batch size but picked the batch size as prescribed by the availability of computational resources.
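The milestone schedule corresponds to a standard multi-step scheduler; a sketch in PyTorch (with a stand-in model and an assumed initial learning rate) could look as follows:

```python
import torch

model = torch.nn.Linear(8, 8)  # stand-in for the SwinIR
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

T = 1280  # maximal number of epochs, adjusted per training set size
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer,
    milestones=[int(0.5 * T), int(0.75 * T), int(0.875 * T), int(0.9375 * T)],
    gamma=0.5,  # halve the learning rate at each milestone
)
```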
See Liang et al. (2021) for a detailed description of the SwinIR network architecture. We vary the network size by adjusting the number of residual Swin Transformer blocks, the number of Swin Transformers per block, the number of attention heads per Swin Transformer, the number of channels in the input embedding and the width of the fully connected layers in a Swin Transformer. Table 2 contains a summary of the settings. When scaling up the network size, we invested in the parameters that seemed to be most promising in the ablation studies in Liang et al. (2021).
The experiments were conducted on four NVIDIA A40 and four NVIDIA RTX A6000 GPUs. We used about 13000 GPU hours for the transformer experiments in Fig. 2. For training the models in parallel on multiple GPUs we utilize the torch.distributed package with the gloo backend, instead of the faster nccl backend, which was unfortunately not available on our hardware at that time.
B BENCHMARKING OUR MODELS FOR GAUSSIAN IMAGE DENOISING
In this Section, we evaluate the models we trained for image denoising in Section 3 on four common test sets from the literature and show that the largest SwinIR trained on the largest dataset achieves a new SOTA for all four test sets and the considered noise level.
In Fig. 2 in the main body we compared the performance of the U-Nets and the SwinIRs trained on subsets of ImageNet for image denoising. The models are evaluated on a test set sampled from ImageNet. We observed that while SwinIRs significantly outperform U-Nets, the performance gain from increasing the training set size slows down already at moderate training set sizes, equally for both architectures. However, there is still a moderate performance gain in scaling the models, and thus we expect the largest SwinIR trained on the largest dataset to outperform the original SwinIR from Liang et al. (2021). Our results in this section show that this is indeed the case.
In Table 3 we evaluate on the standard test sets for Gaussian color image denoising: CBSD68 (Martin et al., 2001), Kodak24 (Franzen, 1999), McMaster (Zhang et al., 2011) and Urban100 (Huang et al., 2015). We observe a significant performance difference between the best U-Net (46.5M parameters, 100k training images) and the other transformer-based methods. As expected, our largest SwinIR trained on the largest dataset, SwinIR 100/H, outperforms not only the original SwinIR but also SCUnet (Zhang et al., 2022), a later SOTA model that has been demonstrated to outperform the original SwinIR.
We also depict the gains for the SwinIR from just scaling up the network size, and then from scaling up both network size and training set size. While this led to a new SOTA for Gaussian image denoising, note that, on the downside, training the SwinIR 10/M, which is comparable to the original SwinIR, took about 2 weeks on 4 NVIDIA A40 GPUs, while training the SwinIR 100/H took over 2 months.
C EMPIRICAL SCALING LAWS FOR IMAGE SUPER-RESOLUTION WITH A U-NET
In this Section, we consider the problem of super-resolution, i.e., estimating a high-resolution image from a low-resolution version of the image. This can be viewed as a compressive sensing problem, since we can view super-resolution as reconstructing a signal x ∈ Rn from a downsampled version y = Ax, where the matrix A ∈ Rm×n implements a downsampling operation like bicubic downsampling, or blurring followed by downsampling. As first shown in the pioneering work of Dong et al. (2014), data-driven neural networks trained end-to-end outperform classical model-based approaches (Gu et al., 2012; Michaeli & Irani, 2013; Timofte et al., 2013).
Dong et al. (2014) reports that for super-resolution with a simple three-layer CNN, the gains from very large training sets do not seem to be as impressive as in high-level vision problems like image classification. Liang et al. (2021) plot the super-resolution performance of the SOTA SwinIR model as a function of the training set size up to 3600 training examples for a fixed network size. When plotting their results on a logarithmic scale, we observe that the performance improvement follows a power-law. It is unclear, however, whether this power law slows beyond this relatively small number of images.
In this Section, we obtain scaling laws for super-resolution with a U-Net over a wide range of training set and network sizes, similar to those for denoising and compressive sensing in the main body.
Datasets. We use the same training, validation and test sets as described in Appx. A.1 with training set sizes N ∈ [100, 100k] images from ImageNet. On top of these, we add two larger training sets of size 300k and 600k. Instead of center cropping the training images to 256 × 256, we follow the super-resolution experiments in Liang et al. (2021) and train on images cropped to 128 × 128 pixels. We consider super-resolution by a factor of 2, so the low-resolution images have size 64 × 64. The low-resolution images are obtained with the bicubic downsampling function of Python's PIL.Image package.
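A minimal sketch of the low-/high-resolution pair generation with PIL (the function name and patch handling are ours):

```python
from PIL import Image

def make_lr_hr_pair(path, hr_size=128, scale=2):
    """Center-crop to hr_size x hr_size and bicubically downsample by `scale`."""
    img = Image.open(path).convert("RGB")
    left, top = (img.width - hr_size) // 2, (img.height - hr_size) // 2
    hr = img.crop((left, top, left + hr_size, top + hr_size))
    lr = hr.resize((hr_size // scale, hr_size // scale), resample=Image.BICUBIC)
    return lr, hr
```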
Model variants and training. We train the same U-Net model used in Section 3 and described in Appx. A.1 but with only one block per encoder/decoder, as this gave slightly better results than two blocks. The network is trained end-to-end to map a coarse reconstruction, obtained through bicubic upsampling, to the residual between the coarse reconstruction and the high-resolution ground truth image.
We vary the number of the channels in the first layer in {8, 16, 32, 64, 128, 192, 256, 320, 448, 564}, which corresponds to {0.007, 0.025, 0.10, 0.40, 1.6, 3.6, 6.4, 10.0, 20.0, 31.2} million network parameters. We do not use any data augmentation, since it is unclear how to account for it in the number of training examples.
We use the ℓ1-loss and the Adam optimizer with its default settings. For all experiments we find a good initial learning rate with the same annealing strategy as described in Appx. A.1. However, instead of picking the largest learning rate for which the validation loss does not diverge, we pick the second largest, which leads to slightly more stable results. We start the annealing with a learning rate of 10−5. In the few cases where our heuristic leads to a degenerate training curve, typically due to picking a learning rate that is significantly too small or too large, starting the annealing with a smaller learning rate of 10−6 resolves the problem.
We do not put a limit on the amount of compute invested into training. To this end, we deploy an automated learning rate decay that reduces the learning rate by 0.5 if the validation PSNR has not improved by at least 0.001 for 8 epochs. We stop the training once the validation loss has not improved for two consecutive learning rates. For training sets up to 10000 images we found that a batch size of 1 works best. For larger training sets we use a batch size of 10.
For training set sizes up to 10000 we train 3 random seeds and pick the best run. Since the variance between runs, relative to the performance gain between training set sizes, decreases with increasing training set size, we only run one seed for larger training set sizes.
The experiments were conducted on four NVIDIA A40, four NVIDIA RTX A6000 and four NVIDIA Quadro RTX 6000 GPUs. We measure the time in GPU hours until the best epoch according to the validation loss resulting in about 1500 GPU hours for the experiments in Fig. 6.
Results and discussion. For each training set size, Fig. 6(a) shows the reconstruction performance in PSNR of the best model over all simulated network sizes in Fig. 6(b). Since the curves per training set size in Fig. 6(b) are relatively flat, further scaling up the network size is not expected to significantly improve the performance on the studied training sets. Here are the two main findings:
A linear power law with a scaling coefficient α = 0.0075 holds roughly up to training set sizes of about 30k images, and for training set sizes starting from 60k this slows to a linear power law with a significantly smaller scaling coefficient α = 0.0029. With this slowed scaling law, a training set size of 1.6B images would be required to increase performance by another 1dB (assuming the relation 32.05 · N^0.0029 persists, which is likely to slow down even further).
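The extrapolated figure can be sanity-checked with a few lines of arithmetic, assuming the fitted relation PSNR ≈ 32.05 · N^0.0029 and taking the 60k-image model as the reference point:

```python
import math

a, alpha, n0 = 32.05, 0.0029, 60_000
psnr_now = a * n0 ** alpha                        # ~33.1 dB at 60k images
n_needed = ((psnr_now + 1.0) / a) ** (1 / alpha)  # solve a*N**alpha = psnr_now + 1
print(f"{n_needed:.1e}")                          # ~1.7e9, i.e. on the order of 1.6B
```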
While slowing already at a few tens of thousands of training images, the scaling laws for super-resolution do not slow as early as those for denoising and compressive sensing (see Fig. 1). This could be partially due to training on image patches of size 128 × 128 as opposed to size 256 × 256 used for denoising.
D ADDITIONAL EMPIRICAL SCALING LAWS FOR GAUSSIAN DENOISING
In this Section, we extend our results for Gaussian denoising with a U-Net from Section 3 with two additional setups. Section D.1 considers fixing the noise sampled in each training epoch and Section D.2 investigates reducing the noise level from σz = 25 to σz = 15.
D.1 EMPIRICAL SCALING LAWS FOR DENOISING WITH FIXED NOISE
Our main results for denoising with a U-Net trained end-to-end, discussed in Section 3, follow a setup in which the noise per training example is re-sampled in every training epoch. We choose this setup since it makes the best use of the available clean images. However, fixing the noise is also interesting, since it is closer to a denoising setup in which the noise statistics are unknown, as is the case in some real-world noise removal problems. In such a problem, we would be given pairs of noisy and clean images, and could not synthesize new noisy images from the clean ones.
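The difference between the two setups amounts to where the noise is drawn; a minimal PyTorch-style sketch of both variants:

```python
import torch
from torch.utils.data import Dataset

class DenoisingDataset(Dataset):
    """(noisy, clean) pairs; noise is drawn once (fixed) or per access (re-sampled)."""

    def __init__(self, clean, sigma=25.0 / 255.0, fixed=False):
        self.clean, self.sigma, self.fixed = clean, sigma, fixed
        if fixed:  # mimics being given only noisy/clean pairs
            self.noise = torch.randn_like(clean) * sigma

    def __len__(self):
        return len(self.clean)

    def __getitem__(self, i):
        x = self.clean[i]
        z = self.noise[i] if self.fixed else torch.randn_like(x) * self.sigma
        return x + z, x
```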
In this section, we follow the same experimental setup from Appx. A.1 to simulate the performance of a U-Net for denoising with fixed noise for up to 100k training images (see Fig. 7). Compared to re-sampling the noise, we observe a drop in performance of about 0.3dB for small training set sizes. The performance difference at 10k images is reduced to 0.2dB, resulting in a slightly steeper scaling law for moderate training set sizes. However, at around 10k training images the performance of training with fixed noise also starts to flatten as it approaches the performance of re-sampling the noise during training. This indicates that if the noise statistics are unknown, more data is required to achieve the same performance as when they are known. However, in both cases the scaling with training set size slows down already at moderate training set sizes.
D.2 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER NOISE LEVEL
Our results for Gaussian denoising in Section 3 are for a fixed noise level of σz = 25. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller noise level of σz = 15, in order to see how the scaling laws change. The results for both noise levels are depicted in Fig. 8.
We observe an improvement of about 2.3dB in PSNR, which is expected since the irreducible error decreases for smaller noise levels. We also observe that the scaling coefficient for the smaller noise level σz = 15 (i.e., α = 0.0026) is slightly steeper than that for the larger noise level σz = 25 (i.e., α = 0.0019). This coincides with the qualitative behavior of the curves for subspace denoising in Figure 3. Apart from that, the curves are qualitatively similar, in that an initially steep power law is replaced by a slower one at around 6000 training images.
D.3 EMPIRICAL SCALING LAWS FOR DENOISING WITH A SMALLER PATCH SIZE
Our results for Gaussian denoising in Section 3 with a U-Net were obtained for a constant training patch size of 256 × 256 pixels across all network and training set sizes. In this Section, we repeat the experiments for Gaussian denoising with a U-Net described in Appx. A.1 with a smaller patch size of 128 × 128 pixels. The results for both patch sizes are depicted in Fig. 9. We observe that in the regime of large training set sizes, which we are primarily interested in, training on N patches of size 256 × 256 is more beneficial than training on 4N patches of size 128 × 128. We therefore focus on patch size 256 × 256 in the main body of this work.
E UNDERSTANDING SCALING LAWS FOR DENOISING THEORETICALLY - SUPPLEMENTARY RESULTS
In this section, we provide additional details on the statements in Section 5 on understanding scaling laws for denoising theoretically, by studying a linear subspace denoising problem, and we provide additional numerical results.
Recall that we consider a linear estimator of the form $f_W(y) = Wy$, and measure performance in terms of the expected mean-squared reconstruction error (normalized by the latent signal dimension $d$):
$$R(W) = \frac{1}{d}\,\mathbb{E}\big[\|Wy - x\|_2^2\big] = \frac{1}{d}\|(W - I)U\|_F^2 + \frac{\sigma_z^2}{d}\|W\|_F^2. \qquad (1)$$
Above, the expectation is over the joint distribution of $(x, y)$, and the second equality follows from using that $x = Uc$, where $c \sim \mathcal{N}(0, I)$ is Gaussian, and $y = x + z$, where the noise $z \sim \mathcal{N}(0, \sigma_z^2 I)$ is Gaussian.
The optimal linear estimator. The optimal linear estimator (i.e., the estimator that minimizes the risk defined in equation (1)) is given by $W^* = \frac{1}{1+\sigma_z^2} UU^T$. This follows from taking the gradient of the risk (1), setting it to zero, and solving for $W$. The estimator projects the data onto the subspace and shrinks towards zero, depending on the noise variance. The associated risk is $R(W^*) = \sigma_z^2/(1 + \sigma_z^2)$.
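Both the closed form of $W^*$ and its risk are easy to verify numerically; a small Monte-Carlo sketch in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma_z = 50, 5, 0.5
U, _ = np.linalg.qr(rng.standard_normal((n, d)))  # orthonormal subspace basis
W_star = U @ U.T / (1 + sigma_z**2)               # optimal linear estimator

c = rng.standard_normal((d, 100_000))             # latent coefficients
x = U @ c
y = x + sigma_z * rng.standard_normal(x.shape)
risk = np.mean(np.sum((W_star @ y - x) ** 2, axis=0)) / d
print(risk, sigma_z**2 / (1 + sigma_z**2))        # both ~0.2
```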
Early-stopped empirical risk minimization. We consider the estimator that applies gradient descent to the empirical risk
$$L(W) = \sum_{i=1}^{N} \|W y_i - x_i\|_2^2 = \|WY - X\|_F^2, \qquad (2)$$
where $X, Y \in \mathbb{R}^{n \times N}$ contain the training examples as columns, and early-stops after $k$ iterations for regularization.
We next discuss the early-stopped estimator $W^k$ in more detail. Fig. 10 numerically demonstrates the regularizing effect of early stopping gradient descent, where $W^\infty = XY^\dagger$ is the converged learned estimator (see Appx. G, Eq. (5)). We see that regularization is necessary for this estimator to perform well.
We next discuss Theorem 2 and the associated assumptions in more detail. The theorem considers the following regime: (i) the number of training examples obeys $(d + n\sigma_z^2)\log(n) \le N$, (ii) $N \le \xi d/\sigma_z^2$ for an arbitrary $\xi$, and (iii) $N \log(N) \le n$. For this regime, the theorem guarantees that the risk of the optimally early-stopped estimator obeys, with high probability,
$$R(W^{k_{\mathrm{opt}}}) \le (8 + 2\xi)\,R(W^*) + c\left(1 + \frac{n\sigma_z^2}{d}\right)\log n\,\frac{(d + n\sigma_z^2)\log(n)}{N} + c\,\xi\sqrt{\frac{(d + \sigma_z^2 n)\log n}{N}}. \qquad (3)$$
The theorem looks similar to that for the PCA estimate (Theorem 1), in that the risk is a constant away from the optimal risk, with an error term that becomes small as $(d + n\sigma_z^2)/N$ becomes small. However, the error bound does not converge to $R(W^*)$ as the number of training examples, $N$, converges to infinity. This is probably an artifact of our analysis, but it is unclear, at least to us, how to derive a substantially tighter bound. In our analysis (see Appendix G), we balance two errors: one error decreases in $k$ and is associated with the part of the signal projected onto the subspace, and the second error increases with $k$ and is associated with the orthogonal complement of the subspace. We choose the early-stopping time to optimally balance those two terms, which yields the stated bound (3).
Now with regards to the assumptions: Assumption (iii), $N\log(N) \le n$, means we are in the high-dimensional regime; we think this is somewhat closer to reality (for example, for denoising a 512×512 image, this would require the number of training examples to be smaller than 250k), but we can derive an analogous bound for the regime $N\log(N) \ge n$, where the number of training examples is larger than the ambient dimension.
Assumption (i), $(d + n\sigma_z^2)\log(n) \le N$, is relatively mild, as it is necessary for being able to estimate the subspace somewhat accurately; this assumption is also required for the PCA estimate.
Assumption (ii), $N \le \xi d/\sigma_z^2$ for an arbitrary $\xi$, is not restrictive, since $\xi$ can be arbitrarily large; we make this assumption only so that the theorem can be stated in a convenient way. However, assumption (ii) reveals a shortcoming of Theorem 2: we cannot make the bound go to zero as $N \to \infty$, since increasing $\xi$ increases one term in the bound and decreases another one.
E.1 ADDITIONAL NUMERICAL SIMULATIONS
In this Section we provide further numerical simulations for the PCA subspace estimator and the estimator learned with early-stopped gradient descent discussed in Section 5, Theorem 1 and Theorem 2. Similar to Fig. 3, Fig. 11 shows the risks $R(W^{k_{\mathrm{opt}}})$ and $R(W_{\mathrm{PCA}})$ as a function of the number of training examples $N$ for varying values of the signal and ambient dimensions $d$ and $n$, while fixing all other model parameters. In the power-law region we fit linear power laws with negative scaling coefficients $\alpha$. We observe steeper power laws (larger $|\alpha|$) for smaller ambient dimensions $n$ and larger signal dimensions $d$. Also, the scaling coefficients of the learned estimator consistently exceed the coefficients of the PCA estimator.
F PROOF FOR THEOREM 1: RISK BOUND FOR PCA SUBSPACE ESTIMATION
We provide a bound on the risk of the estimator $f(y) = W_{\mathrm{PCA}}\, y$ with $W_{\mathrm{PCA}} = \tau \hat{U}\hat{U}^T$ and $\tau = \frac{1}{1+\sigma_z^2}$. Recall that $\hat{U} \in \mathbb{R}^{n \times d}$ contains the singular vectors corresponding to the $d$ leading singular values of $YY^T$. We define $\hat{U}_\perp \in \mathbb{R}^{n \times (n-d)}$ as the orthogonal complement of $\hat{U}$. Starting from the risk expression given in equation (1), we obtain
$$\begin{aligned}
R(W_{\mathrm{PCA}}) &= \frac{1}{d}\big\|(\tau\hat{U}\hat{U}^T - I)U\big\|_F^2 + \frac{1}{d}\tau^2\sigma_z^2\big\|\hat{U}\hat{U}^T\big\|_F^2 \\
&= \frac{1}{d}\big\|\big((\tau-1)\hat{U}\hat{U}^T + \hat{U}_\perp\hat{U}_\perp^T\big)U\big\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{1}{d}(\tau-1)^2\big\|\hat{U}^T U\big\|_F^2 + \frac{1}{d}\big\|\hat{U}_\perp^T U\big\|_F^2 + \tau^2\sigma_z^2 \\
&= \frac{1}{d}(\tau-1)^2\Big(d - \big\|\hat{U}_\perp^T U\big\|_F^2\Big) + \frac{1}{d}\big\|\hat{U}_\perp^T U\big\|_F^2 + \tau^2\sigma_z^2 \\
&= \big(1 - (\tau-1)^2\big)\frac{1}{d}\big\|\hat{U}_\perp^T U\big\|_F^2 + (\tau-1)^2 + \sigma_z^2\tau^2
= \frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{1}{d}\big\|\hat{U}_\perp^T U\big\|_F^2 + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\overset{(i)}{\le} \frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\big\|\hat{U}_\perp^T U\big\|^2 + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\overset{(ii)}{\le} c\,\frac{1 + 2\sigma_z^2}{(1+\sigma_z^2)^2}\,\frac{(d + n\sigma_z^2)\log(2n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2} \\
&\le c\,\frac{(d + n\sigma_z^2)\log(n)}{N} + \frac{\sigma_z^2}{1+\sigma_z^2}. \qquad (4)
\end{aligned}$$
Here, inequality (ii) follows from Section H.1, equation (34), and holds in the regime $(d + n\sigma_z^2)\log(n) \le N$, for some constant $c$ and with probability at least $1 - n^{-10} - 3e^{-d} - e^{-n}$. This concludes the proof.
G PROOF OF THEOREM 2: RISK BOUND FOR EARLY STOPPED EMPIRICAL RISK MINIMIZATION
This section contains the proof of Theorem 2. The theorem characterizes the performance of the learned, linear estimator $W^k$ that is obtained by applying $k$ iterations of gradient descent with stepsize $\eta$, starting at $W^0 = 0$, to the loss $L(W)$ in equation (2). We start by deriving a closed-form expression for the estimator $W^k$. The gradient of the loss is
$$\nabla_W L(W) = (WY - X)Y^T,$$
and thus the iterations of gradient descent are
$$W^{k+1} = W^k - \eta(W^k Y - X)Y^T = W^k(I - \eta YY^T) + \eta XY^T.$$
Let $Y = U_y \Sigma_y V_y^T \in \mathbb{R}^{n \times N}$ and $X = U_x \Sigma_x V_x^T \in \mathbb{R}^{n \times N}$ be the singular value decompositions of $Y$ and $X$, respectively, and assume that the singular values are non-zero and descending, i.e., $\sigma_{y,1} \ge \sigma_{y,2} \ge \ldots$. With $W^0 = 0$ and $k \ge 1$ we have
$$\begin{aligned}
W^k &= \eta XY^T \sum_{\ell=0}^{k-1}(I - \eta YY^T)^\ell \\
&= \eta X V_y \Sigma_y U_y^T \left(\sum_{\ell=0}^{k-1}\big(I - \eta U_y \Sigma_y^2 U_y^T\big)^\ell\right) \\
&= \eta X V_y \Sigma_y\, \mathrm{diag}\!\left(\sum_{\ell=0}^{k-1}(1 - \eta\sigma_{y,i}^2)^\ell\right) U_y^T \\
&= X V_y D_k U_y^T,
\end{aligned}$$
where we defined $D_k \in \mathbb{R}^{N \times N}$ as a diagonal matrix with $i$-th diagonal entry given by $(1 - (1 - \eta\sigma_{y,i}^2)^k)/\sigma_{y,i}$, and where we used the geometric series to obtain
$$\sum_{\ell=0}^{k-1}(1 - \eta\sigma_{y,i}^2)^\ell = \frac{1 - (1 - \eta\sigma_{y,i}^2)^k}{\eta\sigma_{y,i}^2}.$$
Note that for $k \to \infty$, choosing $\eta$ such that $1 - \eta\sigma_{y,i}^2 < 1$ for all $i$, we get
$$W^\infty = X V_y \Sigma_y^{-1} U_y^T = XY^\dagger. \qquad (5)$$
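The closed form can be checked against explicit gradient descent in a few lines of NumPy (small random dimensions, gradient convention as above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, N, k, eta = 8, 12, 5, 1e-3
X, Y = rng.standard_normal((n, N)), rng.standard_normal((n, N))

W = np.zeros((n, n))                     # k gradient steps on ||WY - X||_F^2
for _ in range(k):
    W = W - eta * (W @ Y - X) @ Y.T

Uy, s, VyT = np.linalg.svd(Y, full_matrices=False)
Dk = np.diag((1 - (1 - eta * s**2) ** k) / s)
W_closed = X @ VyT.T @ Dk @ Uy.T         # W^k = X V_y D_k U_y^T
print(np.max(np.abs(W - W_closed)))      # agreement up to machine precision
```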
Evaluating the risk from equation (1) at the estimator $W^k$ gives
$$R(W^k) = \frac{1}{d}\big\|(X V_y D_k U_y^T - I)U\big\|_F^2 + \frac{\sigma_z^2}{d}\big\|X V_y D_k U_y^T\big\|_F^2. \qquad (6)$$
To shorten notation we define
$$\gamma := \frac{(d + n\sigma_z^2)\log(n)}{N} \qquad (7)$$
$$\psi := \frac{n\sigma_z^2\log(n)}{N}. \qquad (8)$$
We next provide bounds for the two terms on the right-hand-side of equation (6), proven later in this section.
Bound on the first term in equation (6): In Section G.1 we show that, provided $N\log(N) \le n$, $9d \le N$ and $(d + n\sigma_z^2)\log(n) \le N$, for some constant $c$, with probability at least $1 - 2e^{-N/18} - 3n^{-10} - 3e^{-d} - e^{-n} - e^{-N} - 2e^{-n/2}$, the following bound holds:
$$\frac{1}{d}\big\|(X V_y D_k U_y^T - I)U\big\|_F^2 \le \frac{c}{d}\big(\sigma_z^2 n + \gamma\sigma_{x,\max}^2\big)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2 + \frac{c}{d}\sum_{i=1}^{d}(1-\eta\sigma_{y,i}^2)^{2k} + \frac{c}{d}\,\psi\gamma\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2 + c\gamma. \qquad (9)$$
Bound on the second term in equation (6): In Section G.2 we show
$$\frac{\sigma_z^2}{d}\big\|X V_y D_k U_y^T\big\|_F^2 \le \frac{\sigma_z^2}{d}\sum_{i=1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2. \qquad (10)$$
With equations (9) and (10) in place, and by splitting up the sum in (10), we can bound the right-hand side of equation (6) as
$$R(W^k) \le \frac{1}{d}\big(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\big)\sum_{i=1}^{d}\frac{1}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2 + \frac{c}{d}\sum_{i=1}^{d}(1-\eta\sigma_{y,i}^2)^{2k} + c\gamma + \frac{c}{d}\big(\psi\gamma + \sigma_z^2\big)\sum_{i=d+1}^{N}\frac{\sigma_{x,\max}^2}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2. \qquad (11)$$
Since the singular values of $Y$ are in descending order $\sigma_{y,1} \ge \sigma_{y,2} \ge \ldots$, this is, for any iteration $k$, bounded as
$$R(W^k) \le \big(c\sigma_z^2 n + c\gamma\sigma_{x,\max}^2 + \sigma_z^2\sigma_{x,\max}^2\big)\frac{1}{\sigma_{y,d}^2} + c(1-\eta\sigma_{y,d}^2)^{2k} + c\gamma + \frac{c}{d}\big(\psi\gamma + \sigma_z^2\big)\sigma_{x,\max}^2\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2. \qquad (12)$$
This is a good bound if we are at an iteration sufficiently large so that $(1 - \eta\sigma_{y,1}^2)^k$ is small.
In (12), the term $(1-\eta\sigma_{y,d}^2)^{2k}$ is decreasing in the number of gradient descent steps $k$ and also decreasing in the stepsize $\eta$, as long as $\eta\sigma_{y,i}^2 \le 1$ for $i = 1,\ldots,d$. Further, the terms $(1 - (1-\eta\sigma_{y,i}^2)^k)^2$ are increasing in $k$ and also increasing in $\eta$. The first term corresponds to the signal, which we want to fit sufficiently well, whereas the second term corresponds to the noise, of which we want to fit as little as possible. Hence, there exist optimal choices of $k$ and $\eta$ that trade off the sum of the two terms.
In our setup, the $d$ leading squared singular values of $Y$, corresponding to the signal, are large and concentrate around $N(1+\sigma_z^2)$, while the remaining ones are small and concentrate around $N\sigma_z^2$. Hence, we can apply a single step of gradient descent ($k = 1$) to already fit a large portion of the signal while minimizing the portion of the noise that is fitted. For that, we choose the stepsize $\eta$ as large as possible such that $\eta\sigma_{y,i}^2 \le 1$ for $i = 1,\ldots,d$ still holds. Next, suppose the following events hold:
$$E_1 = \{\sigma_{y,d+1}^2 \le N\big(\sigma_z^2 + \epsilon(1+\sigma_z^2)\big)\} \qquad (13)$$
$$E_2 = \{\sigma_{y,d}^2 \ge N(1+\sigma_z^2)(1-\epsilon)\} \qquad (14)$$
$$E_3 = \{\sigma_{x,\max}^2 \le 4N\} \qquad (15)$$
$$E_4 = \{\sigma_{y,1}^2 \le N(1+\epsilon)(1+\sigma_z^2)\}. \qquad (16)$$
In Section H.4, we show that
$$\mathbb{P}[E_3] \ge 1 - 2e^{-N/8}. \qquad (17)$$
In Section H.4, we also show that, provided that $N \ge 3C\epsilon^{-2}(d + \sigma_z^2 n)\log n$, for some constant $C$ and $\epsilon \in (0,1)$,
$$\mathbb{P}[E_1],\, \mathbb{P}[E_2],\, \mathbb{P}[E_4] \ge 1 - e^{-d} - e^{-n} - n^{-9}. \qquad (18)$$
We next bound the terms in equation (12). As discussed, we set $k = 1$ and the stepsize as large as possible, i.e., $\eta = 1/\big(N(1+\epsilon)(1+\sigma_z^2)\big) \le 1/\sigma_{y,1}^2$, which holds on the event $E_4$. On the event $E_2$ we obtain
$$c(1-\eta\sigma_{y,d}^2)^{2k} \le c\big(1 - \eta N(1+\sigma_z^2)(1-\epsilon)\big)^2 = c\left(1 - \frac{1-\epsilon}{1+\epsilon}\right)^2 \le c\epsilon^2.$$
Next we bound the sum in equation (12). Towards this goal, we upper bound each term in the sum by its linear approximation at the origin. We compute the derivative at the origin as
$$\lim_{q\to 0}\frac{\partial}{\partial q}\,\frac{1}{q}\big(1 - (1-\eta q)^k\big)^2 = (\eta k)^2.$$
Thus, we have
$$\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2 \le \sum_{i=d+1}^{N} k^2\eta^2\sigma_{y,i}^2 \le N k^2\eta^2\sigma_{y,d+1}^2 \le \frac{\sigma_z^2 + \epsilon + \epsilon\sigma_z^2}{(1+\epsilon)^2(1+\sigma_z^2)^2} \le c(\sigma_z^2 + 2\epsilon),$$
on the event $E_1$ and for $\sigma_z^2 \le 1$, $k = 1$ and $\eta = 1/\big(N(1+\epsilon)(1+\sigma_z^2)\big)$. Putting this together, on the events $E_2, E_3$ we get the bound
$$\begin{aligned}
R(W^k) &\le \frac{8\sigma_z^2}{1+\sigma_z^2} + c\gamma + c(1-\eta\sigma_{y,d}^2)^{2k} + \frac{c}{d}\big(\psi\gamma + \sigma_z^2\big)N\sum_{i=d+1}^{N}\frac{1}{\sigma_{y,i}^2}\big(1 - (1-\eta\sigma_{y,i}^2)^k\big)^2 \\
&\le \frac{8\sigma_z^2}{1+\sigma_z^2} + c\gamma + c\epsilon^2 + \frac{cN}{d}\left(\frac{n\sigma_z^2\log n}{N}\,\frac{(d+n\sigma_z^2)\log n}{N} + \sigma_z^2\right)(\sigma_z^2 + 2\epsilon) \\
&\le 8R(W^*) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\epsilon^2 + c\left(\frac{\big((d+n\sigma_z^2)\log n\big)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)(\sigma_z^2 + 2\epsilon). \qquad (19)
\end{aligned}$$
The bound holds provided $3C\epsilon^{-2}(d+\sigma_z^2 n)\log n \le N$ and $N\log(N) \le n$, for some constants $c, C$, and with probability at least $1 - 2e^{-N/8} - 2e^{-N/18} - 5n^{-9} - 5e^{-d} - 2e^{-n} - e^{-N} - 2e^{-n/2}$. To maximize the benefit of early stopping, we set $\epsilon$ as small as possible subject to the condition $3C\epsilon^{-2}(d+\sigma_z^2 n)\log n \le N$:
$$\epsilon = \sqrt{\frac{3C(d+\sigma_z^2 n)\log n}{N}}. \qquad (20)$$
With that, equation (19) becomes
$$R(W^k) \le 8R(W^*) + c\,\frac{(d+n\sigma_z^2)\log(n)}{N} + c\left(\frac{\big((d+n\sigma_z^2)\log n\big)^2}{dN} + \frac{\sigma_z^2 N}{d}\right)\left(\sigma_z^2 + \sqrt{\frac{(d+\sigma_z^2 n)\log n}{N}}\right)$$
= 8R(W | 1. What is the main contribution of the paper regarding neural network approaches for inverse/regression problems?
2. What are the strengths of the paper, particularly in its analysis and theoretical support?
3. What are the weaknesses of the paper regarding its clarity and relevance?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper studied whether performance gains of neural network approaches for inverse/regression problems are obtained from scaling up the training set size. The authors report their claims in three fields: image denoising, accelerated MRI, and super-resolution. They calculated reconstruction quality as a function of training set size, while optimally scaling the network size. The authors found that the initial performance increase slows down already at moderate training set sizes (~10^3). They confirmed that even training on millions of images would not significantly improve performance over training on small datasets. They concluded that once the error induced by learning the signal model is small relative to the error floor, more training examples do not improve performance.
Strengths And Weaknesses
(strength)
This paper analyzes the scalability of the training dataset in linear inverse problems (denoising, super-resolution, MRI acceleration). This aspect has not been analyzed in depth before, so I believe this manuscript is the first attempt to focus on the scalability of datasets in image restoration problems. This aspect is quite important in terms of providing intuitive guidance for building databases. Furthermore, the authors elaborate the scalability property with rigorous theory, which is supported in detail in the supplementary material.
(weakness)
The writing from Section 5 until the conclusion is a bit scattered. The manuscript aims to analyze data scalability, but the explanation from Section 5 onward seems disconnected from this claim, and it could be made easier to follow. For example, in Fig. 3, W^\infty is not explained in the caption or the section; instead, the authors provide a link to the supplementary material, which is not self-contained. I believe the writing needs to be more compact.
Clarity, Quality, Novelty And Reproducibility
The overall flow of the manuscript is good and the quality of writing is also fine. The novelty of the work is appealing, and the authors submitted their code in the supplementary material.
ICLR | Title
AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks
Abstract
AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational costs. Here we propose AUTOTRANSFER, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest. Our key innovation includes a task-model bank that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models of the novel task, by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AUTOTRANSFER on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AUTOTRANSFER significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-BANK-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations to facilitate and inspire future research.
1 INTRODUCTION
Deep neural networks are highly modular, requiring many design decisions to be made regarding network architecture and hyperparameters. These design decisions form a search space that is nonconvex and costly even for experts to optimize over, especially when the optimization must be repeated from scratch for each new use case. Automated machine learning (AutoML) is an active research area that aims to reduce the human effort required for architecture design that usually covers hyperparameter optimization and neural architecture search. AutoML has demonstrated success (Zoph and Le, 2016; Pham et al., 2018; Zoph et al., 2018; Cai et al., 2018; He et al., 2018; Guo et al., 2020; Erickson et al., 2020; LeDell and Poirier, 2020) in many application domains.
Finding a reasonably good model for a new learning task1 in a computationally efficient manner is crucial for making deep learning accessible to domain experts with diverse backgrounds. Efficient AutoML is especially important in domains where the best architectures/hyperparameters are highly sensitive to the task. A notable example is the domain of graph learning2. First, graph learning methods receive input data composed of a variety of data types and optimize over tasks that span an equally diverse set of domains and modalities such as recommendation (Ying et al., 2018; He et al., 2020), physical simulation (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020), and bioinformatics (Zitnik et al., 2018). This differs from computer vision and natural language processing where the
1In this paper, we refer to a task as a given dataset with an evaluation metric/loss, e.g., cross-entropy loss on node classification on the Cora dataset.
2We focus on the graph learning domain in this paper. AUTOTRANSFER can be generalized to other domains.
input data has a predefined, fixed structure that can be shared across different neural architectures. Second, neural networks that operate on graphs come with a rich set of design choices and a large set of parameters to explore. However, unlike other domains where a few pre-trained architectures such as ResNet (He et al., 2016) and GPT-3 (Brown et al., 2020) dominate the benchmarks, it has been shown that the best graph neural network (GNN) design is highly task-dependent (You et al., 2020).
Although AutoML as a research domain is evolving fast, existing AutoML solutions have massive computational overhead when finding a good model for a new learning task is the goal. Most present AutoML techniques consider each task independently and in isolation; therefore, they require redoing the search from scratch for each new task. This approach ignores the potentially valuable architectural design knowledge obtained from previous tasks, and inevitably leads to a high computational cost. The issue is especially significant in the graph learning domain (Gao et al., 2019; Zhou et al., 2019), due to the challenges of diverse task types and the huge design space discussed above.
Here we propose AUTOTRANSFER3, an AutoML solution that drastically improves AutoML architecture search by transferring previous architectural design knowledge to the task of interest. Our key innovation is to introduce a task-model bank that stores the performance of a diverse set of GNN architectures and tasks to guide the search algorithm. To enable knowledge transfer, we define a task embedding space such that tasks close in the embedding space have similar corresponding top-performing architectures. The challenge here is that the task embedding needs to capture the performance rankings of different architectures on different datasets, while being efficient to compute. Our innovation here is to embed a task by using the condition number of the Fisher Information Matrix of various randomly initialized models, together with a learning scheme with an empirical generalization guarantee. This way we implicitly capture the properties of the learning task, while being orders of magnitude faster (within seconds). We then estimate the design prior of desirable models for the new task by aggregating design distributions on tasks that are close to the task of interest. Finally, we initiate a hyperparameter search algorithm with the computed task-informed design prior.
We evaluate AUTOTRANSFER on six datasets, including both node classification and graph classification tasks. We show that our proposed task embeddings can be computed efficiently and that the distance measured between tasks correlates highly (0.43 Kendall correlation) with model performance rankings. Furthermore, we show that AUTOTRANSFER significantly improves search efficiency when using the transferred design prior. AUTOTRANSFER reduces the number of explored architectures needed to reach a target accuracy by an order of magnitude compared to SOTA. Finally, we release GNN-BANK-101, the first large-scale database containing detailed performance records for 120,000 task-model combinations trained with 16,128 GPU hours, to facilitate future research.
2 RELATED WORK
In this section, we summarize the related work on AutoML regarding its applications on GNNs, the common search algorithms, and pioneering work regarding transfer learning and task embeddings.
AutoML for GNNs. Neural architecture search (NAS), a unique and popular form of AutoML for deep learning, can be divided into two categories: multi-trial NAS and one-shot NAS. During multi-trial NAS, each sampled architecture is trained separately. GraphNAS (Gao et al., 2020) and Auto-GNN (Zhou et al., 2019) are typical multi-trial NAS algorithms on GNNs which adopt an RNN controller that learns to suggest better sets of configurations through reinforcement learning. One-shot NAS (e.g., (Liu et al., 2018; Qin et al., 2021; Li et al., 2021)) involves encapsulating the entire model space in one super-model, training the super-model once, and then iteratively sampling sub-models from the super-model to find the best one. In addition, there is work that explicitly studies fine-grained design choices such as data augmentation (You et al., 2021), message passing layer type (Cai et al., 2021; Ding et al., 2021; Zhao et al., 2021), and graph pooling (Wei et al., 2021). Notably, AUTOTRANSFER is the first AutoML solution for GNNs that efficiently transfer design knowledge across tasks.
HPO Algorithms. Hyperparameter Optimization (HPO) algorithms search for the optimal model hyperparameters by iteratively suggesting a set of hyperparameters and evaluating their performance. Random search samples hyperparameters from the search space with equal probability. Despite not
3Source code is available at https://github.com/snap-stanford/AutoTransfer.
learning from previous trials, random search is commonly used for its simplicity and is much more efficient than grid search (Bergstra and Bengio, 2012). The TPE algorithm (Bergstra et al., 2011) builds a probabilistic model of task performance over the hyperparameter space and uses the results of past trials to choose the most promising next configuration to train, which the TPE algorithm defines as maximizing the Expected Improvement value (Jones, 2001). Evolutionary algorithms (Real et al., 2017; Jaderberg et al., 2017) train multiple models in parallel and replace poorly performing models with “mutated” copies of the current best models. AUTOTRANSFER is a general AutoML solution and can be applied in combination with any of these HPO algorithms.
Transfer Learning in AutoML. Wong et al. (2018) proposed to transfer knowledge across tasks by reloading the controller of reinforcement learning search algorithms. However, this method assumes that the search space on different tasks starts with the same learned prior. Unlike AUTOTRANSFER, it cannot address the core challenge in GNN AutoML: the best GNN design is highly task-specific. GraphGym (You et al., 2020) attempts to transfer the best architecture design directly with a metric space that measures task similarity. GraphGym computes task similarity by training a set of 12 "anchor models" to convergence, which is computationally expensive. In contrast, AUTOTRANSFER designs lightweight task embeddings requiring minimal computational overhead. Additionally, Zhao and Bilen (2021); Li et al. (2021) propose to conduct architecture search on a proxy subset of the whole dataset and later transfer the best architecture found to the full dataset. Jeong et al. (2021) studies a similar setting in the vision domain.
Task Embedding. There is prior research trying to quantify task embeddings and similarities. Similar to GraphGym, Taskonomy (Zamir et al., 2018) estimates the task affinity matrix by summarizing final losses/evaluation metrics using an Analytic Hierarchy Process (Saaty, 1987). From a different perspective, Task2Vec (Achille et al., 2019) generates task embeddings for a given task using the Fisher Information Matrix associated with a pre-trained probe network. This probe network is shared across tasks and allows Task2Vec to estimate the Fisher Information Matrix of different image datasets. Le et al. (2022) extends a similar idea to neural architecture search. The aforementioned task embeddings cannot be directly applied to GNNs, as the inputs do not align across datasets. AUTOTRANSFER avoids this bottleneck by using asymptotic statistics of the Fisher Information Matrix with randomly initialized weights.
3 PROBLEM FORMULATION AND PRELIMINARIES
We first introduce formal definitions of data structures relevant to AUTOTRANSFER.
Definition 1 (Task) We denote a task as $T = (D, L(\cdot))$, consisting of a dataset $D$ and a loss function $L(\cdot)$ related to the evaluation metric.
For each training attempt on a task $T^{(i)}$, we can record its model architecture $M_j$, hyperparameters $H_j$, and corresponding value of the loss $l_j$, i.e., $(M_j, H_j, l_j)$. We propose to maintain a task-model bank to facilitate knowledge transfer to future novel tasks.
Definition 2 (Task-Model Bank) A task-model bank $B$ is defined as a collection of tasks, each with multiple training attempts, in the form of $B = \{(T^{(i)}, \{(M^{(i)}_j, H^{(i)}_j, l^{(i)}_j)\})\}$.
AutoML with Knowledge Transfer. Suppose we have a task-model bank B. Given a novel task T (n) which has not been seen before, our goal is to quickly find a model that works reasonably well on the novel task by utilizing knowledge from the task-model bank.
In this paper, we focus on AutoML for graph learning tasks, though our developed technique is general and can be applied to other domains. We define the input graph as G = {V,E}, where V is the node set and E ⊆ V × V is the edge set. Furthermore, let y denote its output labels, which can be node-level, edge-level or graph-level. A GNN parameterized by weights θ outputs a posterior distribution P(G, y, θ) for label predictions.
4 PROPOSED SOLUTION: AUTOTRANSFER
In this section, we introduce the proposed AUTOTRANSFER solution. AUTOTRANSFER uses the task embedding space as a tool to understand the relevance of previous architectural designs to the target task. The designed task embedding captures the performance rankings of different architectures on different tasks while also being efficient to compute. We first introduce a theoretically motivated solution to extract a scale-invariant performance representation of each task-model pair. We use these representations to construct task features and further learn task embeddings. These embeddings form the task embedding space that we finally use during the AutoML search.
4.1 BASICS OF THE FISHER INFORMATION MATRIX (FIM)
Given a GNN defined above, its Fisher Information Matrix (FIM) F is defined as
$$F = \mathbb{E}_{G,y}\big[\nabla_\theta \log P(G, y, \theta)\, \nabla_\theta \log P(G, y, \theta)^\top\big],$$
which formally is the expected covariance of the scores with respect to the model parameters. There are two popular geometric views for the FIM. First, the FIM is an upper bound of the Hessian and coincides with the Hessian if the gradient is 0. Thus, the FIM characterizes the local landscape of the loss function near the global minimum. Second, similar to the Hessian, the FIM models the loss
landscape with respect not to the input space, but to the parameter space. In the information geometry view, if we add a small perturbation to the parameter space, we have
$$\mathrm{KL}\big(P(G, y, \theta) \,\|\, P(G, y, \theta + d\theta)\big) = d\theta^\top F\, d\theta,
$$
where KL(·, ·) stands for Kullback–Leibler divergence. It means that the parameter space of a model forms a Riemannian manifold and the FIM works as its Riemannian metric. The FIM thus allows us to quantify the importance of a model’s weights in a way that is applicable to different architectures.
4.2 FIM-BASED TASK FEATURES
Scale-invariant Representation of Task-Model Pairs. We aim to find a scale-invariant representation for each task-model pair which will form the basis for constructing task features. The major challenge in using the FIM to represent GNN performance is that graph datasets do not have a universal, fixed input structure, so it is infeasible to find a single pre-trained model and extract its FIM. However, training multiple networks poses a problem, as the FIMs computed for different networks are not directly comparable. We choose to use multiple networks but additionally propose to use asymptotic statistics of the FIM associated with randomly initialized weights. The theoretical justification for the relationship between the asymptotic statistics of the FIM and the trainability of neural networks was studied in (Karakida et al., 2019; Pennington and Worah, 2018), to which we refer the readers. We hypothesize that such a measure of trainability encodes loss landscapes and generalization ability and thus correlates with final model performance on the task. Another issue that relates to the input structures of graph datasets is that different models have different numbers of parameters. Apart from some specially designed architectures, e.g., (Lee et al., 2019; Ma et al., 2019), most GNN architecture designs can be represented as a sequence of pre-processing layers, message passing layers, and post-processing layers. Pre-processing and post-processing layers are Multilayer Perceptron (MLP) layers, whose dimensions vary across different tasks due to different input/output structures. Message passing layers are commonly regarded as the key design for GNNs, and the number of weight parameters can remain the same across tasks. In this light, we only consider the FIM with respect to the parameters in message passing layers, so that the number of parameters considered stays the same for all datasets. We note that such a formulation has its limitations, in the sense that it cannot cover all the GNN designs in the literature. We leave potential extensions with better coverage for future work. We further approximate the FIM by only considering the diagonal entries, which implicitly neglects the correlations between parameters. We note that this is common practice when analyzing the FIMs of deep neural networks, as the full FIM is massive (quadratic in the number of parameters) and infeasible to compute even on modern hardware. Similar to Pennington and Worah (2018), we consider the first two moments of the FIM
$$m_1 = \frac{1}{n}\,\mathrm{tr}[F] \quad \text{and} \quad m_2 = \frac{1}{n}\,\mathrm{tr}[F^2] \qquad (1)$$
and use $\alpha = m_2/m_1^2$ as the scale-invariant representation. The computed $\alpha$ is lower bounded by 1 and captures how concentrated the spectrum is. A small $\alpha$ indicates that the loss landscape is flat, and its corresponding model design enjoys fast first-order optimization and potentially better generalization. To encode label space information into each task, we propose to train only the last linear layer of each model on a given task, which can be done efficiently. The parameters in all other layers are frozen after being randomly initialized. We take the average over $R$ initializations to estimate the average $\bar\alpha$.
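A sketch of how the diagonal FIM and the statistic $\alpha$ can be estimated in PyTorch is shown below; it is a simplification of the pipeline (which parameters count as message-passing weights, and the batching of graphs, are model-specific assumptions), with labels sampled from the model's own predictive distribution as required by the FIM definition:

```python
import torch
import torch.nn.functional as F

def fim_alpha(model, loader, mp_params):
    """Estimate alpha = m2 / m1^2 from the diagonal FIM over `mp_params`."""
    fim_diag = [torch.zeros_like(p) for p in mp_params]
    num_batches = 0
    for batch in loader:
        model.zero_grad()
        log_prob = F.log_softmax(model(batch), dim=-1)
        y = torch.multinomial(log_prob.exp(), 1).squeeze(-1)  # sample labels from the model
        F.nll_loss(log_prob, y).backward()
        for acc, p in zip(fim_diag, mp_params):
            acc += p.grad.detach() ** 2                       # squared score = diagonal FIM entry
        num_batches += 1
    diag = torch.cat([f.flatten() for f in fim_diag]) / num_batches
    m1, m2 = diag.mean(), (diag ** 2).mean()
    return (m2 / m1 ** 2).item()  # >= 1; smaller means a flatter spectrum
```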
Constructing Task Features. We denote task features as measures extracted from each task that characterize its important traits. The design of task features should reflect our final objective: to use these features to identify similar tasks and transfer the best design distributions. Thus, we select $U$ model designs as anchor models and concatenate the scale-invariant representations $\bar\alpha_u$ of each design as task features. To retain only the relative ranking among anchor model designs, we normalize the concatenated feature vector to unit norm. We let $z_f$ denote the normalized task feature.
4.3 FROM TASK FEATURES TO TASK EMBEDDINGS
The task feature $z_f$ introduced above can be regarded as a means of feature engineering. We construct the feature vector with domain knowledge, but there is no guarantee it functions as anticipated. We thus propose to learn a projection function $g(\cdot): \mathbb{R}^U \to \mathbb{R}^D$ that maps the task feature $z_f$ to the final task embedding $z_e = g(z_f)$. We do not have any pointwise supervision that can be used as the training objective. Instead, we consider the metric space defined by GraphGym. The distance function in GraphGym, computed using the Kendall rank correlation between performance rankings of anchor models trained on the two compared tasks, correlates nicely with our desired knowledge transfer objective. It is not meaningful to enforce that task embeddings mimic GraphGym's exact metric space, as GraphGym's metric space can still contain noise or may not fully align with the transfer objective. We consider a surrogate loss that enforces only the rank order among tasks. To illustrate, consider tasks $T^{(i)}, T^{(j)}, T^{(k)}$ and their corresponding task embeddings $z_e^{(i)}, z_e^{(j)}, z_e^{(k)}$. Note that $z_e$ is normalized to unit norm, so $z_e^{(i)\top} z_e^{(j)}$ measures the cosine similarity between tasks $T^{(i)}$ and $T^{(j)}$. Let $d_g(\cdot,\cdot)$ denote the distance estimated by GraphGym. We want to enforce
$$z_e^{(i)\top} z_e^{(j)} > z_e^{(i)\top} z_e^{(k)} \quad \text{if} \quad d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)}).$$
To achieve this, we use the margin ranking loss as our surrogate supervised objective function:
$$\mathcal{L}_r(z_e^{(i)}, z_e^{(j)}, z_e^{(k)}, y) = \max\big(0,\; -y\cdot\big(z_e^{(i)\top} z_e^{(j)} - z_e^{(i)\top} z_e^{(k)}\big) + \mathrm{margin}\big). \qquad (2)$$
Here, if $d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)})$, the corresponding label is $y = 1$, and $y = -1$ otherwise. Our final task embedding space is then a FIM-based metric space with a cosine distance function, where the distance is defined as $d_e(T^{(i)}, T^{(j)}) = 1 - z_e^{(i)\top} z_e^{(j)}$. Please refer to the detailed training pipeline in Algorithm 2 in the Appendix.
4.4 AUTOML SEARCH ALGORITHM WITH TASK EMBEDDINGS
To transfer knowledge to a novel task, a naïve idea would be to directly carry over the best model configuration from the closest task in the bank. However, even a high Kendall rank correlation between the model performance rankings of two tasks $T^{(i)}, T^{(j)}$ does not guarantee that the best model configuration on task $T^{(i)}$ will also achieve the best performance on task $T^{(j)}$. In addition, since task similarities are subject to noise, this naïve solution may struggle when there exist multiple reference tasks that are all highly similar.
To make the knowledge transfer more robust to such failure cases, we introduce the notion of design distributions that depend on top-performing model designs, and propose to transfer design distributions rather than the best design configurations. Formally, consider a task $T^{(i)}$ in the task-model bank $B$, associated with its trials $\{(M^{(i)}_j, H^{(i)}_j, l^{(i)}_j)\}$. We can summarize its designs as a list of configurations $C = \{c_1, \ldots, c_W\}$, such that all potential combinations of model architectures $M$ and hyperparameters $H$ fall under the Cartesian product of the configurations. For example, $c_1$ could be the instantiation of aggregation layers, and $c_2$ could be the starting learning rate. We then define design distributions as random variables $c_1, c_2, \ldots, c_W$, each corresponding to a hyperparameter. Each random variable $c$ is defined as the frequency distribution of the design choices used in the top $K$ trials. We multiply all distributions for the individual configurations $\{c_1, \ldots, c_W\}$ to approximate the overall task's design distribution $P(C|T^{(i)}) = \prod_w P(c_w|T^{(i)})$.
During inference, given a novel task $T^{(n)}$, we select a close task subset $S$ by thresholding task embedding distances, i.e., $S = \{T^{(i)} \,|\, d_e(T^{(n)}, T^{(i)}) \le d_{\mathrm{thres}}\}$. We then derive the transferred design prior $P_t(C|T^{(n)})$ of the novel task by weighting design distributions from the close task subset $S$:
$$P_t(C|T^{(n)}) = \frac{\sum_{T^{(i)}\in S} \frac{1}{d_e(T^{(n)}, T^{(i)})}\, P(C|T^{(i)})}{\sum_{T^{(i)}\in S} \frac{1}{d_e(T^{(n)}, T^{(i)})}}. \qquad (3)$$
The inferred design prior for the novel task can then be used to guide various search algorithms. The most natural choice for the few-trial regime is random search. Rather than sampling each design configuration from a uniform distribution, we propose to sample from the task-informed design prior $P_t(C|T^{(n)})$. Please refer to Appendix A for how we augment other search algorithms. For AUTOTRANSFER, we can preprocess the task-model bank $B$ into $B_p = \{((D^{(i)}, L^{(i)}(\cdot)), z_e^{(i)}, P(C|T^{(i)}))\}$, as our pipeline only requires the task embedding $z_e^{(i)}$ and the design distribution $P(C|T^{(i)})$ rather than the detailed training trials. The full search pipeline is summarized in Algorithm 1.
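A minimal sketch of the prior aggregation in Eq. (3) and of prior-guided random search, with design distributions represented as nested dictionaries (the data layout and names are ours):

```python
import random

def transfer_design_prior(distances, priors, d_thres=0.3, eps=1e-8):
    """distances: {task_id: d_e(T_new, T_i)}; priors: {task_id: {hp: {choice: prob}}}.
    Returns the similarity-weighted mixture of design distributions (Eq. 3)."""
    close = {t: d for t, d in distances.items() if d <= d_thres}
    total = sum(1.0 / (d + eps) for d in close.values())  # eps guards against d = 0
    prior = {}
    for t, d in close.items():
        w = (1.0 / (d + eps)) / total
        for hp, dist in priors[t].items():
            for choice, p in dist.items():
                prior.setdefault(hp, {})
                prior[hp][choice] = prior[hp].get(choice, 0.0) + w * p
    return prior

def sample_config(prior):
    """Sample one configuration, one hyperparameter at a time."""
    return {hp: random.choices(list(d), weights=list(d.values()))[0]
            for hp, d in prior.items()}
```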
Algorithm 1 Summary of AUTOTRANSFER search pipeline
Require: A processed task-model bank $B_p = \{((D^{(i)}, L^{(i)}(\cdot)), z_e^{(i)}, P(C|T^{(i)}))\}$, a novel task $T^{(n)} = (D^{(n)}, L^{(n)}(\cdot))$, $U$ anchor models $M_1, \ldots, M_U$; $R$ specifies the number of repeats.
1: for $u = 1$ to $U$ do
2:   for $r = 1$ to $R$ do
3:     Initialize weights $\theta$ for anchor model $M_u$ randomly
4:     Estimate FIM $F \leftarrow \mathbb{E}_D[\nabla_\theta \log P(M_u, y, \theta)\nabla_\theta \log P(M_u, y, \theta)^\top]$
5:     Extract scale-invariant representation $\alpha_u^{(r)} \leftarrow m_2/m_1^2$ following Eq. 1
6:   end for
7:   $\bar\alpha_u \leftarrow \mathrm{mean}(\alpha_u^{(1)}, \alpha_u^{(2)}, \ldots, \alpha_u^{(R)})$
8: end for
9: $z_f^{(n)} \leftarrow \mathrm{concat}(\bar\alpha_1, \bar\alpha_2, \ldots, \bar\alpha_U)$
10: $z_e^{(n)} \leftarrow g(z_f^{(n)})$
11: Select close task subset $S \leftarrow \{T^{(i)} \,|\, 1 - z_e^{(n)\top} z_e^{(i)} \le d_{\mathrm{thres}}\}$
12: Get design prior $P_t(C|T^{(n)})$ by aggregating the subset $S$ following Eq. 3
13: Start an HPO search algorithm with the task-informed design prior $P_t(C|T^{(n)})$
5 EXPERIMENTS
5.1 EXPERIMENTAL SETUP
Task-Model Bank: GNN-BANK-101.
To facilitate AutoML research with knowledge transfer, we collected GNN-BANK-101 as the first large-scale graph database that records reproducible design configurations and detailed training performance on a variety of tasks. Specifically, GNN-BANK-101 currently includes six tasks for node classification (AmazonComputers (Shchur et al., 2018), AmazonPhoto (Shchur et al., 2018), CiteSeer (Yang et al., 2016), CoauthorCS (Shchur et al., 2018), CoauthorPhysics (Shchur et al., 2018), Cora (Yang et al., 2016)) and six tasks for graph classification (PROTEINS (Ivanov et al., 2019), BZR (Ivanov et al., 2019), COX2 (Ivanov et al., 2019), DD (Ivanov et al., 2019), ENZYMES (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019)). Our design space follows (You et al., 2020), and we extend the design space to include various commonly adopted graph convolution and activation layers. We extensively run 10,000 different models for each task, leading to 120,000 total task-model combinations, and record all training information including train/val/test loss.
Benchmark Datasets. We benchmark AUTOTRANSFER on six different datasets following prior work (Qin et al., 2021). Our datasets include three standard node classification datasets (CoauthorPhysics (Shchur et al., 2018), CoraFull (Bojchevski and Günnemann, 2017) and OGB-Arxiv (Hu et al., 2020)), as well as three standard benchmark graph classification datasets, (COX2 (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019) and PROTEINS (Ivanov et al., 2019)). CoauthorPhysics and CoraFull are transductive node classification datasets, so we randomly assign nodes into train/valid/test sets following a 50%:25%:25% split (Qin et al., 2021). We randomly split graphs following a 80%:10%:10% split for the three graph classification datasets (Qin et al., 2021). We follow the default train/valid/test split for the OGB-Arxiv dataset (Hu et al., 2020). To make sure there is no information leakage, we temporarily remove all records related to the task from our task-model bank if the dataset we benchmark was collected in the task-model bank.
Baselines. We compare our methods with the state-of-the-art approaches for GNN AutoML. We use GCN and GAT with default architectures following their original implementation as baselines. For multi-trial NAS methods, we consider GraphNAS (Gao et al., 2020). For one-shot NAS methods, we include DARTS (Liu et al., 2018) and GASSO (Qin et al., 2021). GASSO is designed for transductive settings, so we omit it for graph classification benchmarks. We further provide results of HPO algorithms based on our proposed search space as baselines: Random, Evolution, TPE (Bergstra et al., 2011) and HyperBand (Li et al., 2017).
By default, we allow a maximum of 30 search trials for all algorithms, i.e., an algorithm can train 30 different models and keep the model with the best accuracy. We use the default setting for one-shot NAS algorithms (DARTS and GASSO), as they only train a super-model once and can efficiently evaluate different architectures. We are mostly interested in studying the few-trial regime, where most advanced search algorithms degrade to random search. Thus we additionally include a random search (3 trials) baseline, where we pick the best model out of only 3 trials.
5.2 EXPERIMENTS ON SEARCH EFFICIENCY
We evaluate AUTOTRANSFER by reporting the average best test accuracy among all trials considered over ten runs of each algorithm in Table 1. The test accuracy collected for each trial is selected at the epoch with the best validation accuracy. By comparing results from random search (3 trials) and AUTOTRANSFER (3 trials), we show that our transferred task-informed design prior significantly improves test accuracy in the few-trial regime, and can be very useful in computationally constrained environments. Even if we increase the number of search trials to 30, AUTOTRANSFER still demonstrates non-trivial improvement compared with TPE, indicating that our proposed pipeline has advantages even when computational resources are abundant. Notably, with only 3 search trials, AUTOTRANSFER surpasses most of the baselines, even those that use 30 trials.
To better understand the sample efficiency of AUTOTRANSFER, we plot the best test accuracy found at each trial in Figure 3 for the OGB-Arxiv and TU-PROTEINS datasets. We notice that the advanced search algorithms (Evolution and TPE) have no advantage over random search in the few-trial regime, since the amount of prior search data is not yet sufficient to infer potentially better design configurations. On the contrary, by sampling from the transferred design prior, AUTOTRANSFER achieves significantly better average test accuracy in the first few trials. The best test accuracy at trial 3 of AUTOTRANSFER surpasses its counterpart at trial 10 for every other method.
5.3 ANALYSIS OF TASK EMBEDDINGS
Qualitative analysis of task features. To examine the quality of the proposed task features, we visualize the proposed task similarity matrix (Figure 4 (b)) along with the task similarity matrix proposed in GraphGym (Figure 4 (a)). We show that our proposed task similarity matrix captures similar patterns as GraphGym's task similarity matrix while being computed much more efficiently by omitting training. We notice that tasks of the same type, i.e., node classification or graph classification, share more similarities within each group. As a sanity check, we verified that the closest task in the bank with respect to CoraFull is Cora. The top 3 closest tasks for OGB-Arxiv are AmazonComputers, AmazonPhoto, and CoauthorPhysics, all of which are node classification tasks.
(Footnote 2: Results come from the original paper (Qin et al., 2021).)
Generalization of projection function g(·). To show that the proposed projection function g(·) can generate task embeddings that generalize to novel tasks, we conduct leave-one-out cross validation with all tasks in our task-model bank. Concretely, for each task considered as a novel task $T^{(n)}$, we use the rest of the tasks, along with their distance metric $d_g(\cdot, \cdot)$ estimated by GraphGym's exact but computationally expensive metric space, to train the projection function g(·). We calculate the Kendall rank correlation over task similarities for the Task Feature (without g(·)) and the Task Embedding (with g(·)) against the exact task similarities. The average rank correlation and the standard deviation over ten runs are shown in Figure 4 (c). We find that with the proposed g(·), our task embeddings indeed correlate better with the exact task similarities and, therefore, generalize better to novel tasks.
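This evaluation can be sketched with SciPy's Kendall tau as below; `sim_pred` and `sim_exact` would hold, for one held-out task, its similarities to all other tasks under the learned embedding and under GraphGym's exact metric space, respectively (the numbers shown are illustrative only).

```python
import numpy as np
from scipy.stats import kendalltau

def embedding_quality(sim_pred, sim_exact):
    """Kendall rank correlation between predicted task similarities
    (cosine similarity of task embeddings) and the exact similarities."""
    tau, _pvalue = kendalltau(np.asarray(sim_pred), np.asarray(sim_exact))
    return tau

# Illustrative values: similarities of one held-out task to five bank tasks.
print(embedding_quality([0.9, 0.2, 0.4, 0.7, 0.1],
                        [0.8, 0.1, 0.5, 0.6, 0.2]))
```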
Ablation study on alternative task space design. To demonstrate the superiority of the proposed task embedding, we further compare it with alternative task features. Following prior work (Yang et al., 2019), we use the normalized losses over the first 10 steps as the task feature. The results on OGB-Arxiv are shown in Table 2. Compared to AUTOTRANSFER’s task embedding, the task feature induced by normalized losses has a lower ranking correlation with the exact metric and yields worse performance. Table 2 further justifies the efficacy of using the Kendall rank correlation as the metric for task embedding quality, as higher Kendall rank correlation leads to better performance.
6 CONCLUSION
In this paper, we study how to improve AutoML search efficiency by transferring existing architectural design knowledge to novel tasks of interest. We introduce a task-model bank that captures the performance over a diverse set of GNN architectures and tasks. We also introduce a computationally efficient task embedding that can accurately measure the similarity between different tasks. We release GNN-BANK-101, a large-scale database that records detailed GNN training information of 120,000 task-model combinations. We hope this work can facilitate and inspire future research in efficient AutoML to make deep learning more accessible to a general audience.
ACKNOWLEDGEMENTS
We thank Xiang Lisa Li, Hongyu Ren, Yingxin Wu for discussions and for providing feedback on our manuscript. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-10171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities.
A ADDITIONAL IMPLEMENTATION DETAILS
Runtime analysis. We have empirically shown that AUTOTRANSFER can significantly improve search efficiency by reducing the number of trials needed to achieve reasonably good accuracy. The only overhead we introduce is the procedure of estimating task embeddings. Since we use a randomly initialized architecture, extracting each task feature only requires at most one forward and a few backward passes on a single minibatch of data. The wall-clock time depends on the size of the network and the data structure. In our experiments, it typically takes a few seconds on an NVIDIA T4 GPU. We repeat the task feature extraction process 5 times for each anchor model, for a total of 12 anchor models. Thus, the wall-clock overhead for computing the task embedding of a novel task is within a few minutes. We note that this process is generally comparable in length to one training trial on a small dataset, and the time saved is much more significant for large-scale datasets.
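A minimal PyTorch sketch of the per-task feature extraction is given below. It estimates the diagonal FIM of the message-passing parameters from one minibatch of a randomly initialized model and computes $\alpha = m_2/m_1^2$ (Eq. 1). The `model` and `batch` interfaces are placeholders for GraphGym-style objects, and the estimator shown (squared per-example score gradients with model-sampled labels) is one straightforward way to approximate the diagonal FIM.

```python
import torch

def scale_invariant_alpha(model, batch, mp_params):
    """Estimate alpha = m2 / m1^2 from the diagonal FIM of the
    message-passing parameters `mp_params` (requires_grad=True)."""
    fim_diag = [torch.zeros_like(p) for p in mp_params]
    log_probs = model(batch).log_softmax(dim=-1)   # per-example log-posteriors
    num_examples = log_probs.shape[0]
    for lp in log_probs:
        y = int(torch.multinomial(lp.exp(), 1))    # sample label y from the model
        grads = torch.autograd.grad(lp[y], mp_params, retain_graph=True)
        for f, g in zip(fim_diag, grads):
            f += g.detach() ** 2                   # accumulate squared scores
    n = sum(p.numel() for p in mp_params)
    m1 = sum(f.sum() for f in fim_diag) / (n * num_examples)         # tr[F] / n
    m2 = sum((f / num_examples).pow(2).sum() for f in fim_diag) / n  # tr[F^2] / n
    return float(m2 / m1**2)
```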
Details for task-model-bank training. Our GNN model specifications are summarized in Table 3. Our code base was developed based on GraphGym (You et al., 2020). For all training trials, we use the Adam optimizer and a cosine learning rate scheduler (annealed to 0, no restarting). We use L2 regularization with a weight decay of 5e-4. We record losses and accuracies for the training, validation and test splits every 20 epochs.
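In PyTorch, this training configuration corresponds roughly to the sketch below; `model`, `max_epoch`, and the helper functions are placeholders, and the base learning rate shown is just one example value from the search space.

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.01,
                             weight_decay=5e-4)          # L2 regularization
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=max_epoch, eta_min=0.0)             # annealed to 0, no restart

for epoch in range(max_epoch):
    train_one_epoch(model, optimizer)    # placeholder: one pass over the data
    scheduler.step()
    if epoch % 20 == 0:
        record_metrics(model)            # losses/accuracies on train/val/test
```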
Training details for AUTOTRANSFER. We summarize the training procedure for the projection function g(·) in Algorithm 2. We set U = 12 and R = 5 throughout the paper. We use the same set of anchor model designs as those in GraphGym. We use a two-layer MLP with a hidden dimension of 16 to parameterize the projection function g(·). We use the Adam optimizer with a learning rate of 5e-3. We use margin = 0.1 and train the network for 1000 iterations with a batch size of 128. We adopt K = 16 when selecting the top K trials that are used to summarize design distributions.
Details for adapting TPE and evolutionary algorithms. We hereby illustrate how to compose the search algorithms with the transferred design priors. TPE is a Bayesian hyperparameter optimization method, meaning that it is initialized with a prior distribution to model the search space and updates the prior as it evaluates hyperparameter configurations and records their performance. We replace this prior distribution with the task-informed design priors. As evolutionary algorithms generally initialize a large population and iteratively prune and mutate existing networks, we replace the random network initialization with the task-informed design priors. As we mainly focus on the few-trial search regime, we set the number of fully random warm-up trials to 5 for both the TPE and evolutionary algorithms.
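For the evolutionary case, seeding the population from the transferred prior could look like the sketch below, reusing the `sample_configuration` helper from the example in Section 4.4; `sample_uniform` and the population size are illustrative placeholders.

```python
def init_population(prior, rng, pop_size=20, warmup=5):
    """Seed an evolutionary search: a few fully random warm-up individuals,
    then individuals drawn from the task-informed design prior."""
    population = [sample_uniform(rng) for _ in range(warmup)]        # placeholder
    population += [sample_configuration(prior, rng)
                   for _ in range(pop_size - warmup)]
    return population
```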
B ADDITIONAL DISCUSSION
Limitations. In principle, AUTOTRANSFER leverages the correlation between model performance rankings among tasks to efficiently construct model priors. Thus, it is less effective if the novel task has large task distances with respect to all tasks in the task-model bank. In practice, users can continuously add additional search trials to the bank. As the bank size grows, it will be less likely that a novel task has a low correlation with all tasks in the bank.
Algorithm 2 Training pipeline for the projection function g(·)

Require: Task features $\{z_f^{(i)} \mid T^{(i)}\}$ extracted for each task from the task-model bank; distance measure $d_g(\cdot, \cdot)$ estimated in GraphGym.
1: for each iteration do
2:   Sample $T^{(i)}, T^{(j)}, T^{(k)}$
3:   $z_e^{(i)}, z_e^{(j)}, z_e^{(k)} \leftarrow g(z_f^{(i)}), g(z_f^{(j)}), g(z_f^{(k)})$
4:   $y \leftarrow 1$ if $d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)})$ else $-1$
5:   Optimize objective function $L_r(z_e^{(i)}, z_e^{(j)}, z_e^{(k)}, y)$ in Eq. 2
6: end for
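A minimal PyTorch sketch of this training loop is given below, using the hyperparameters stated above (two-layer MLP with hidden dimension 16, Adam with learning rate 5e-3, margin 0.1, 1000 iterations, batch size 128). The triplet sampler and the embedding dimension are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

emb_dim = 16  # assumed embedding dimension D (not specified above)
g = torch.nn.Sequential(                       # two-layer MLP projection g(.)
    torch.nn.Linear(12, 16),                   # U = 12 anchor-model features
    torch.nn.ReLU(),
    torch.nn.Linear(16, emb_dim),
)
opt = torch.optim.Adam(g.parameters(), lr=5e-3)
loss_fn = torch.nn.MarginRankingLoss(margin=0.1)

for _ in range(1000):
    zi, zj, zk, y = sample_triplets(batch_size=128)  # placeholder sampler;
    # y = +1 if d_g(T_i, T_j) < d_g(T_i, T_k) else -1 (GraphGym distances)
    ei, ej, ek = (F.normalize(g(z), dim=-1) for z in (zi, zj, zk))
    sim_ij = (ei * ej).sum(dim=-1)                   # cosine similarities
    sim_ik = (ei * ek).sum(dim=-1)
    loss = loss_fn(sim_ij, sim_ik, y)                # Eq. 2 margin ranking loss
    opt.zero_grad()
    loss.backward()
    opt.step()
```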
Social Impact. Our long-term goal is to provide a seamless GNN infrastructure that simplifies the training and deployment of ML models on structured data. Efficient and robust AutoML algorithms are crucial to making deep learning more accessible to people who are interested but lack deep learning expertise as well as those who lack the massive computational budget AutoML traditionally requires. We believe this paper is an important step to provide AI tools to a broader population and thus allow for AI to help enhance human productivity. The datasets we used for experiments are among the most widely-used benchmarks, which should not contain any undesirable bias. Besides, training/test losses and accuracies are highly summarized statistics that we believe should not incur potential privacy issues.
C ADDITIONAL RESULTS
Search efficiency. We summarize the average number of trials needed to surpass the average best accuracy found by TPE with 30 trials in Table 4. We show that AUTOTRANSFER reduces the number of explored architectures by an order of magnitude.
Ablation study on the number of anchor models and the task embedding design. We empirically demonstrate how the number of anchor models affects the rank correlation in Table 5. While 3 anchor models are not enough to capture the task distance, we found that 9 and 12 anchor models offer a satisfactory trade-off between capturing task distance and computational efficiency. Furthermore, we empirically demonstrate in Table 6 that the learned task embedding space is superior to the proposed task feature space in terms of correlation as well as final search performance.
Visualizing model designs. We visualize the transferred design distributions and ground truth design distributions on the TU-PROTEINS dataset in Figure 5, as well as on the Coauthor-Physics dataset in Figure 6. We observe that the transferred design distributions correlate positively with the ground truth on most of the design choices.
Figure 5: We plot the transferred design distributions and ground truth design distributions on the TU-PROTEINS dataset.
[Figure panels (Figures 5–6): per-hyperparameter bar charts comparing the Transferred Design Prior against the Ground Truth Prior for gnn.layers_pre_mp, gnn.layers_mp, gnn.layers_post_mp, gnn.stage_type, gnn.agg, gnn.dim_inner, gnn.layer_type, gnn.act, optim.base_lr, and optim.max_epoch.]
Figure 6: We plot the transferred design distributions and ground truth design distributions on the Coauthor-Physics dataset.

Reviewer questions:
1. What is the focus and contribution of the paper on NAS for GNN?
2. What are the strengths of the proposed approach, particularly in utilizing FIM and ranking loss?
3. What are the weaknesses of the paper regarding its comparisons with other works and the consideration of different parts of GNN?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper that the reviewer did not address explicitly?

Summary Of The Paper
The paper studies a NAS algorithm for GNNs. The authors propose to transfer prior architectural design knowledge to the novel task of interest. They first extract scale-invariant measures based on the Fisher Information Matrix (FIM). The scale-invariant measures are used to form the task feature, which is further projected to task embeddings. The task embeddings are learned via a ranking loss built on top of GraphGym's exact metric space. Experiments justify the importance of the task embeddings in the AutoTransfer algorithm. AutoTransfer also outperforms baselines like GraphNAS.
Strengths And Weaknesses
Strengths
The idea of utilizing FIM to produce task-feature and further finetune it with ranking loss is novel.
Ablation analysis justifies the effectiveness of the task embedding algorithm proposed by the paper.
Weaknesses
It seems that "[AAAI2021] One-shot graph neural architecture search with dynamic search space" also discussed task transfer. The author may need to compare with one-shot NAS or clarify the difference.
The author pointed out that there are three stages in GNN architectures: 1) pre-processing, 2) message-passing, 3) post-processing. The author only considered message-passing layers and pointed out that it is commonly regarded as the most important part of a GNN. However, since different tasks adopt different pre-/post-processing modules, it is not convincing to directly state that message-passing is the most important.
It would be good if the author could list the performance of the SOTA algorithm on each dataset discussed in Table 1. This would help readers understand the gap between NAS models and human-engineered SOTA architectures.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is easy to understand.
Quality: Overall, the paper is technically sound; weaknesses are pointed out in the previous section.
Novelty: To the best of the reviewer's knowledge, the algorithm for obtaining task embeddings is novel.
Reproducibility: The reviewer has not checked reproducibility, but the author has provided enough details in the appendix.
Reviewer questions:
1. What is the focus and contribution of the paper on AutoML for graph learning tasks?
2. What are the strengths of the proposed approach, particularly in terms of task embedding and efficiency?
3. What are the weaknesses of the paper regarding the selection of anchor models and the limitation of graph architectures?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
The paper proposes a novel AutoML method, AutoTransfer, for graph learning tasks. It introduces a task-model bank from which model designs can be transferred to similar new tasks. The experiments indicate that the method is efficient and improves the performance of the found models.
Strengths And Weaknesses
Strengths
The paper introduces an interesting method for transferring knowledge from a previously searched task to a newly arriving task via a task-model bank design.
It proposes an interesting task embedding based on several randomly initialized anchor models, and the experiments show that the resulting task similarity is promising.
The method is tested on multiple benchmarks and shown to be efficient and performant.
It releases a large task-model dataset that can inspire future research.
Weaknesses
How are the anchor models selected? I failed to find the architecture design of the anchor models, and it would be good to see how diverse the anchor models should be.
The graph architectures used in the paper are quite limited. For example, on the OGB leaderboard, many different GNN (and non-GNN) algorithms are applied and achieve significantly better results. It would be interesting to have some of those models in the search space.
It would be interesting to see some examples of the models selected for a group of tasks, to get some insight into the model design.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow. The proposed methods are interesting. |
ICLR | Title
AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks
Abstract
AutoML has demonstrated remarkable success in finding an effective neural architecture for a given machine learning task defined by a specific dataset and an evaluation metric. However, most present AutoML techniques consider each task independently from scratch, which requires exploring many architectures, leading to high computational costs. Here we propose AUTOTRANSFER, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest. Our key innovation includes a task-model bank that captures the model performance over a diverse set of GNN architectures and tasks, and a computationally efficient task embedding that can accurately measure the similarity among different tasks. Based on the task-model bank and the task embeddings, we estimate the design priors of desirable models of the novel task, by aggregating a similarity-weighted sum of the top-K design distributions on tasks that are similar to the task of interest. The computed design priors can be used with any AutoML search algorithm. We evaluate AUTOTRANSFER on six datasets in the graph machine learning domain. Experiments demonstrate that (i) our proposed task embedding can be computed efficiently, and that tasks with similar embeddings have similar best-performing architectures; (ii) AUTOTRANSFER significantly improves search efficiency with the transferred design priors, reducing the number of explored architectures by an order of magnitude. Finally, we release GNN-BANK-101, a large-scale dataset of detailed GNN training information of 120,000 task-model combinations to facilitate and inspire future research.
1 INTRODUCTION
Deep neural networks are highly modular, requiring many design decisions to be made regarding network architecture and hyperparameters. These design decisions form a search space that is nonconvex and costly even for experts to optimize over, especially when the optimization must be repeated from scratch for each new use case. Automated machine learning (AutoML) is an active research area that aims to reduce the human effort required for architecture design, which usually covers hyperparameter optimization and neural architecture search. AutoML has demonstrated success (Zoph and Le, 2016; Pham et al., 2018; Zoph et al., 2018; Cai et al., 2018; He et al., 2018; Guo et al., 2020; Erickson et al., 2020; LeDell and Poirier, 2020) in many application domains.
Finding a reasonably good model for a new learning task1 in a computationally efficient manner is crucial for making deep learning accessible to domain experts with diverse backgrounds. Efficient AutoML is especially important in domains where the best architectures/hyperparameters are highly sensitive to the task. A notable example is the domain of graph learning2. First, graph learning methods receive input data composed of a variety of data types and optimize over tasks that span an equally diverse set of domains and modalities such as recommendation (Ying et al., 2018; He et al., 2020), physical simulation (Sanchez-Gonzalez et al., 2020; Pfaff et al., 2020), and bioinformatics (Zitnik et al., 2018). This differs from computer vision and natural language processing where the
1In this paper, we refer to a task as a given dataset with an evaluation metric/loss, e.g., cross-entropy loss on node classification on the Cora dataset.
2We focus on the graph learning domain in this paper. AUTOTRANSFER can be generalized to other domains.
input data has a predefined, fixed structure that can be shared across different neural architectures. Second, neural networks that operate on graphs come with a rich set of design choices and a large set of parameters to explore. However, unlike other domains where a few pre-trained architectures such as ResNet (He et al., 2016) and GPT-3 (Brown et al., 2020) dominate the benchmarks, it has been shown that the best graph neural network (GNN) design is highly task-dependent (You et al., 2020).
Although AutoML as a research domain is evolving fast, existing AutoML solutions have massive computational overhead when finding a good model for a new learning task is the goal. Most present AutoML techniques consider each task independently and in isolation; therefore, they require redoing the search from scratch for each new task. This approach ignores the potentially valuable architectural design knowledge obtained from the previous tasks, and inevitably leads to a high computational cost. The issue is especially significant in the graph learning domain (Gao et al., 2019; Zhou et al., 2019), due to the challenges of diverse task types and the huge design space discussed above.
Here we propose AUTOTRANSFER3, an AutoML solution that drastically improves AutoML architecture search by transferring previous architectural design knowledge to the task of interest. Our key innovation is to introduce a task-model bank that stores the performance of a diverse set of GNN architectures and tasks to guide the search algorithm. To enable knowledge transfer, we define a task embedding space such that tasks close in the embedding space have similar corresponding top-performing architectures. The challenge here is that the task embedding needs to capture the performance rankings of different architectures on different datasets, while being efficient to compute. Our innovation here is to embed a task by using the condition number of its Fisher Information Matrix of various randomly initialized models and also a learning scheme with an empirical generalization guarantee. This way we implicitly capture the properties of the learning task, while being orders of magnitude faster (within seconds). We then estimate the design prior of desirable models for the new task, by aggregating design distributions on tasks that are close to the task of interest. Finally, we initiate a hyperparameter search algorithm with the computed task-informed design prior.
We evaluate AUTOTRANSFER on six datasets, including both node classification and graph classification tasks. We show that our proposed task embeddings can be computed efficiently and that the distance measured between tasks correlates highly (0.43 Kendall correlation) with model performance rankings. Furthermore, we show that AUTOTRANSFER significantly improves search efficiency when using the transferred design prior. AUTOTRANSFER reduces the number of explored architectures needed to reach a target accuracy by an order of magnitude compared to SOTA. Finally, we release GNN-BANK-101—the first large-scale database containing detailed performance records for 120,000 task-model combinations which were trained with 16,128 GPU hours—to facilitate future research.
2 RELATED WORK
In this section, we summarize the related work on AutoML regarding its applications on GNNs, the common search algorithms, and pioneering work regarding transfer learning and task embeddings.
AutoML for GNNs. Neural architecture search (NAS), a unique and popular form of AutoML for deep learning, can be divided into two categories: multi-trial NAS and one-shot NAS. During multi-trial NAS, each sampled architecture is trained separately. GraphNAS (Gao et al., 2020) and Auto-GNN (Zhou et al., 2019) are typical multi-trial NAS algorithms on GNNs which adopt an RNN controller that learns to suggest better sets of configurations through reinforcement learning. One-shot NAS (e.g., Liu et al., 2018; Qin et al., 2021; Li et al., 2021) involves encapsulating the entire model space in one super-model, training the super-model once, and then iteratively sampling sub-models from the super-model to find the best one. In addition, there is work that explicitly studies fine-grained design choices such as data augmentation (You et al., 2021), message passing layer type (Cai et al., 2021; Ding et al., 2021; Zhao et al., 2021), and graph pooling (Wei et al., 2021). Notably, AUTOTRANSFER is the first AutoML solution for GNNs that efficiently transfers design knowledge across tasks.
HPO Algorithms. Hyperparameter Optimization (HPO) algorithms search for the optimal model hyperparameters by iteratively suggesting a set of hyperparameters and evaluating their performance. Random search samples hyperparameters from the search space with equal probability. Despite not learning from previous trials, random search is commonly used for its simplicity and is much more efficient than grid search (Bergstra and Bengio, 2012). The TPE algorithm (Bergstra et al., 2011) builds a probabilistic model of task performance over the hyperparameter space and uses the results of past trials to choose the most promising next configuration to train, which the TPE algorithm defines as maximizing the Expected Improvement value (Jones, 2001). Evolutionary algorithms (Real et al., 2017; Jaderberg et al., 2017) train multiple models in parallel and replace poorly performing models with “mutated” copies of the current best models. AUTOTRANSFER is a general AutoML solution and can be applied in combination with any of these HPO algorithms.
3 Source code is available at https://github.com/snap-stanford/AutoTransfer.
Transfer Learning in AutoML. Wong et al. (2018) proposed to transfer knowledge across tasks by reloading the controller of reinforcement learning search algorithms. However, this method assumes that the search space on different tasks starts with the same learned prior. Unlike AUTOTRANSFER, it cannot address the core challenge in GNN AutoML: the best GNN design is highly task-specific. GraphGym (You et al., 2020) attempts to transfer the best architecture design directly with a metric space that measures task similarity. GraphGym computes task similarity by training a set of 12 "anchor models" to convergence, which is computationally expensive. In contrast, AUTOTRANSFER designs lightweight task embeddings requiring minimal computational overhead. Additionally, Zhao and Bilen (2021) and Li et al. (2021) propose to conduct architecture search on a proxy subset of the whole dataset and later transfer the best searched architecture to the full dataset. Jeong et al. (2021) studies a similar setting in the vision domain.
Task Embedding. There is prior research trying to quantify task embeddings and similarities. Similar to GraphGym, Taskonomy (Zamir et al., 2018) estimates the task affinity matrix by summarizing final losses/evaluation metrics using an Analytic Hierarchy Process (Saaty, 1987). From a different perspective, Task2Vec (Achille et al., 2019) generates task embeddings for a given task using the Fisher Information Matrix associated with a pre-trained probe network. This probe network is shared across tasks and allows Task2Vec to estimate the Fisher Information Matrix of different image datasets. Le et al. (2022) extends a similar idea to neural architecture search. The aforementioned task embeddings cannot be directly applied to GNNs as the inputs do not align across datasets. AUTOTRANSFER avoids this bottleneck by using asymptotic statistics of the Fisher Information Matrix with randomly initialized weights.
3 PROBLEM FORMULATION AND PRELIMINARIES
We first introduce formal definitions of data structures relevant to AUTOTRANSFER.
Definition 1 (Task) We denote a task as $T = (D, L(\cdot))$, consisting of a dataset $D$ and a loss function $L(\cdot)$ related to the evaluation metric.
For each training attempt on a task $T^{(i)}$, we can record its model architecture $M_j$, hyperparameters $H_j$, and corresponding loss value $l_j$, i.e., $(M_j, H_j, l_j)$. We propose to maintain a task-model bank to facilitate knowledge transfer to future novel tasks.
Definition 2 (Task-Model Bank) A task-model bank $B$ is defined as a collection of tasks, each with multiple training attempts, in the form of $B = \{(T^{(i)}, \{(M_j^{(i)}, H_j^{(i)}, l_j^{(i)})\})\}$.
AutoML with Knowledge Transfer. Suppose we have a task-model bank $B$. Given a novel task $T^{(n)}$ which has not been seen before, our goal is to quickly find a model that works reasonably well on the novel task by utilizing knowledge from the task-model bank.
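To make these definitions concrete, below is a minimal Python sketch of the task and task-model bank data structures; the class and field names are illustrative assumptions, not the released implementation.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List, Tuple

@dataclass
class Trial:
    """One training attempt (M_j, H_j, l_j): architecture, hyperparameters, loss."""
    architecture: Dict[str, Any]     # e.g., {"gnn.layer_type": "gcnconv"}
    hyperparameters: Dict[str, Any]  # e.g., {"optim.base_lr": 0.01}
    loss: float

@dataclass
class Task:
    """A task T = (D, L(.)): a dataset plus a loss/evaluation metric."""
    dataset: Any
    loss_fn: Any

@dataclass
class TaskModelBank:
    """B = {(T^(i), {(M_j^(i), H_j^(i), l_j^(i))})}: tasks with recorded trials."""
    records: List[Tuple[Task, List[Trial]]] = field(default_factory=list)

    def add(self, task: Task, trials: List[Trial]) -> None:
        self.records.append((task, trials))
```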
In this paper, we focus on AutoML for graph learning tasks, though our developed technique is general and can be applied to other domains. We define the input graph as G = {V,E}, where V is the node set and E ⊆ V × V is the edge set. Furthermore, let y denote its output labels, which can be node-level, edge-level or graph-level. A GNN parameterized by weights θ outputs a posterior distribution P(G, y, θ) for label predictions.
4 PROPOSED SOLUTION: AUTOTRANSFER
In this section, we introduce the proposed AUTOTRANSFER solution. AUTOTRANSFER uses the task embedding space as a tool to understand the relevance of previous architectural designs to the target task. The designed task embedding captures the performance rankings of different architectures on different tasks while also being efficient to compute. We first introduce a theoretically motivated solution to extract a scale-invariant performance representation of each task-model pair. We use these representations to construct task features and further learn task embeddings. These embeddings form the task embedding space that we finally use during the AutoML search.
4.1 BASICS OF THE FISHER INFORMATION MATRIX (FIM)
Given a GNN defined above, its Fisher Information Matrix (FIM) F is defined as
$$F = \mathbb{E}_{G,y}\left[\nabla_\theta \log \mathbb{P}(G, y, \theta)\, \nabla_\theta \log \mathbb{P}(G, y, \theta)^\top\right],$$
which formally is the expected covariance of the scores with respect to the model parameters. There are two popular geometric views for the FIM. First, the FIM is an upper bound of the Hessian and coincides with the Hessian if the gradient is 0. Thus, the FIM characterizes the local landscape of the loss function near the global minimum. Second, similar to the Hessian, the FIM models the loss
landscape with respect not to the input space, but to the parameter space. In the information geometry view, if we add a small perturbation to the parameter space, we have
$$\mathrm{KL}\left(\mathbb{P}(G, y, \theta)\,\|\,\mathbb{P}(G, y, \theta + d\theta)\right) \approx \tfrac{1}{2}\, d\theta^\top F\, d\theta,$$
where KL(·, ·) stands for Kullback–Leibler divergence. It means that the parameter space of a model forms a Riemannian manifold and the FIM works as its Riemannian metric. The FIM thus allows us to quantify the importance of a model’s weights in a way that is applicable to different architectures.
4.2 FIM-BASED TASK FEATURES
Scale-invariant Representation of Task-Model Pairs. We aim to find a scale-invariant representation for each task-model pair, which will form the basis for constructing task features. The major challenge in using the FIM to represent GNN performance is that graph datasets do not have a universal, fixed input structure, so it is infeasible to find a single pre-trained model and extract its FIM. However, training multiple networks poses a problem, as the FIMs computed for different networks are not directly comparable. We choose to use multiple networks but additionally propose to use asymptotic statistics of the FIM associated with randomly initialized weights. The theoretical justification for the relationship between the asymptotic statistics of the FIM and the trainability of neural networks was studied in (Karakida et al., 2019; Pennington and Worah, 2018), to which we refer the readers. We hypothesize that such a measure of trainability encodes loss landscapes and generalization ability and thus correlates with final model performance on the task. Another issue that relates to the input structures of graph datasets is that different models have different numbers of parameters. Apart from some specially designed architectures, e.g., (Lee et al., 2019; Ma et al., 2019), most GNN architecture designs can be represented as a sequence of pre-processing layers, message passing layers, and post-processing layers. Pre-processing and post-processing layers are Multilayer Perceptron (MLP) layers, whose dimensions vary across different tasks due to different input/output structures. Message passing layers are commonly regarded as the key design for GNNs, and their number of weight parameters can remain the same across tasks. In this light, we only consider the FIM with respect to the parameters in message passing layers, so that the number of parameters considered stays the same for all datasets. We note that such a formulation has its limitations, in the sense that it cannot cover all the GNN designs in the literature. We leave potential extensions with better coverage for future work. We further approximate the FIM by only considering the diagonal entries, which implicitly neglects the correlations between parameters. We note that this is common practice when analyzing the FIMs of deep neural networks, as the full FIM is massive (quadratic in the number of parameters) and infeasible to compute even on modern hardware. Similar to Pennington and Worah (2018), we consider the first two moments of the FIM:
$$m_1 = \frac{1}{n}\,\mathrm{tr}[F] \quad \text{and} \quad m_2 = \frac{1}{n}\,\mathrm{tr}[F^2] \qquad (1)$$
and use $\alpha = m_2/m_1^2$ as the scale-invariant representation. The computed $\alpha$ is lower bounded by 1 and captures how concentrated the spectrum is. A small $\alpha$ indicates the loss landscape is flat, and its corresponding model design enjoys fast first-order optimization and potentially better generalization. To encode label space information into each task, we propose to train only the last linear layer of each model on a given task, which can be done efficiently. The parameters in other layers are frozen after being randomly initialized. We take the average over $R$ initializations to estimate the average $\bar{\alpha}$.
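To illustrate Eq. 1 concretely, the following PyTorch sketch estimates the diagonal FIM of a model's message-passing parameters on a handful of examples and returns the statistic $\alpha = m_2/m_1^2$; the model and data interfaces are assumptions for illustration, not the released code.

```python
import torch

def alpha_statistic(model, examples, mp_param_names):
    """Estimate alpha = m2 / m1^2 from the diagonal FIM (Eq. 1), restricted
    to message-passing parameters (given by an assumed list of names)."""
    params = [p for n, p in model.named_parameters() if n in mp_param_names]
    fim_diag = [torch.zeros_like(p) for p in params]
    for g, y in examples:  # one (graph, label) pair per step
        log_prob = torch.log_softmax(model(g), dim=-1)[..., y].sum()
        grads = torch.autograd.grad(log_prob, params)
        for acc, grad in zip(fim_diag, grads):
            acc += grad.detach() ** 2  # per-sample squared score = diagonal FIM
    diag = torch.cat([d.flatten() for d in fim_diag]) / len(examples)
    m1 = diag.mean()          # (1/n) tr[F]
    m2 = (diag ** 2).mean()   # (1/n) tr[F^2], under the diagonal approximation
    return (m2 / m1 ** 2).item()
```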
Constructing Task Features. We denote task features as measures extracted from each task that characterize its important traits. The design of task features should reflect our final objective: to use these features to identify similar tasks and transfer the best design distributions. Thus, we select $U$ model designs as anchor models and concatenate the scale-invariant representations $\bar{\alpha}_u$ of each design as task features. To retain only the relative ranking among anchor model designs, we normalize the concatenated feature vector to unit norm. We let $z_f$ denote the normalized task feature.
4.3 FROM TASK FEATURES TO TASK EMBEDDINGS
The task feature $z_f$ introduced above can be regarded as a means of feature engineering. We construct the feature vector with domain knowledge, but there is no guarantee that it functions as anticipated. We thus propose to learn a projection function $g(\cdot): \mathbb{R}^U \rightarrow \mathbb{R}^D$ that maps a task feature $z_f$ to the final task embedding $z_e = g(z_f)$. We do not have any pointwise supervision that can be used as the training objective. Instead, we consider the metric space defined by GraphGym. The distance function in GraphGym, computed using the Kendall rank correlation between performance rankings of anchor models trained on the two compared tasks, correlates nicely with our desired knowledge transfer objective. It is not meaningful to enforce that task embeddings mimic GraphGym's exact metric space, as GraphGym's metric space can still contain noise, or may not fully align with the transfer objective. We instead consider a surrogate loss that enforces only the rank order among tasks. To illustrate, consider tasks $T^{(i)}, T^{(j)}, T^{(k)}$ and their corresponding task embeddings $z_e^{(i)}, z_e^{(j)}, z_e^{(k)}$. Note that $z_e$ is normalized to unit norm, so $z_e^{(i)\top} z_e^{(j)}$ measures the cosine similarity between tasks $T^{(i)}$ and $T^{(j)}$. Let $d_g(\cdot,\cdot)$ denote the distance estimated by GraphGym. We want to enforce
$$z_e^{(i)\top} z_e^{(j)} > z_e^{(i)\top} z_e^{(k)} \quad \text{if} \quad d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)}).$$
To achieve this, we use the margin ranking loss as our surrogate supervised objective function:
$$\mathcal{L}_r(z_e^{(i)}, z_e^{(j)}, z_e^{(k)}, y) = \max\left(0,\; -y \cdot \left(z_e^{(i)\top} z_e^{(j)} - z_e^{(i)\top} z_e^{(k)}\right) + \mathrm{margin}\right). \qquad (2)$$
Here $y = 1$ if $d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)})$, and $y = -1$ otherwise. Our final task embedding space is then a FIM-based metric space with a cosine distance function, where the distance is defined as $d_e(T^{(i)}, T^{(j)}) = 1 - z_e^{(i)\top} z_e^{(j)}$. Please refer to the detailed training pipeline in Algorithm 2 in the Appendix.
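A minimal PyTorch sketch of this triplet objective follows; the two-layer MLP mirrors the description in the Appendix, while the feature/embedding dimensions and variable names are our own assumptions.

```python
import torch
import torch.nn.functional as F

# g(.): task feature (dim U = 12) -> task embedding (dim D = 8, assumed).
g = torch.nn.Sequential(torch.nn.Linear(12, 16), torch.nn.ReLU(), torch.nn.Linear(16, 8))
opt = torch.optim.Adam(g.parameters(), lr=5e-3)
rank_loss = torch.nn.MarginRankingLoss(margin=0.1)  # implements Eq. 2

def train_step(zf_i, zf_j, zf_k, y):
    """zf_*: batched task features; y = +1 where d_g(T_i,T_j) < d_g(T_i,T_k), else -1."""
    ze_i, ze_j, ze_k = (F.normalize(g(z), dim=-1) for z in (zf_i, zf_j, zf_k))
    sim_ij = (ze_i * ze_j).sum(-1)  # cosine similarity of unit-norm embeddings
    sim_ik = (ze_i * ze_k).sum(-1)
    loss = rank_loss(sim_ij, sim_ik, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```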
4.4 AUTOML SEARCH ALGORITHM WITH TASK EMBEDDINGS
To transfer knowledge to a novel task, a naïve idea would be to directly carry over the best model configuration from the closest task in the bank. However, even a high Kendall rank correlation between the model performance rankings of two tasks $T^{(i)}, T^{(j)}$ does not guarantee that the best model configuration on task $T^{(i)}$ will also achieve the best performance on task $T^{(j)}$. In addition, since task similarities are subject to noise, this naïve solution may struggle when there exist multiple reference tasks that are all highly similar.
To make the knowledge transfer more robust to such failure cases, we introduce the notion of design distributions that depend on top-performing model designs and propose to transfer design distributions rather than the best design configurations. Formally, consider a task $T^{(i)}$ in the task-model bank $B$, associated with its trials $\{(M_j^{(i)}, H_j^{(i)}, l_j^{(i)})\}$. We can summarize its designs as a list of configurations $C = \{c_1, \dots, c_W\}$, such that all potential combinations of model architectures $M$ and hyperparameters $H$ fall under the Cartesian product of the configurations. For example, $c_1$ could be the instantiation of aggregation layers, and $c_2$ could be the starting learning rate. We then define design distributions as random variables $c_1, c_2, \dots, c_W$, each corresponding to a hyperparameter. Each random variable $c$ is defined as the frequency distribution of the design choices used in the top $K$ trials. We multiply the distributions of the individual configurations $\{c_1, \dots, c_W\}$ to approximate the overall task's design distribution $P(C|T^{(i)}) = \prod_w P(c_w|T^{(i)})$.
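Below is a small Python sketch of this per-task design distribution, assuming each recorded trial is a flat configuration dict paired with its loss, and using $K = 16$ as in the Appendix.

```python
from collections import Counter

def design_distribution(trials, k=16):
    """P(C|T): per-dimension frequency distributions over the top-k trials.
    trials: list of (config_dict, loss) pairs recorded for one task."""
    top = sorted(trials, key=lambda t: t[1])[:k]  # lowest loss = best trials
    dist = {}
    for dim in top[0][0]:  # iterate over design dimensions c_1 .. c_W
        counts = Counter(cfg[dim] for cfg, _ in top)
        total = sum(counts.values())
        dist[dim] = {choice: n / total for choice, n in counts.items()}
    return dist
```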
During inference, given a novel task $T^{(n)}$, we select a close task subset $S$ by thresholding task embedding distances, i.e., $S = \{T^{(i)} \mid d_e(T^{(n)}, T^{(i)}) \leq d_{\text{thres}}\}$. We then derive the transferred design prior $P_t(C|T^{(n)})$ of the novel task by weighting the design distributions from the close task subset $S$:
$$P_t(C|T^{(n)}) = \frac{\sum_{T^{(i)} \in S} \frac{1}{d_e(T^{(n)}, T^{(i)})}\, P(C|T^{(i)})}{\sum_{T^{(i)} \in S} \frac{1}{d_e(T^{(n)}, T^{(i)})}}. \qquad (3)$$
The inferred design prior for the novel task can then be used to guide various search algorithms. The most natural choice for a few-trial regime is random search. Rather than sampling each design configuration from a uniform distribution, we propose to sample from the task-informed design prior $P_t(C|T^{(n)})$. Please refer to Appendix A for how we augment other search algorithms. For AUTOTRANSFER, we can preprocess the task-model bank $B$ into $B_p = \{((D^{(i)}, L^{(i)}(\cdot)), z_e^{(i)}, P(C|T^{(i)}))\}$, as our pipeline only requires the task embedding $z_e^{(i)}$ and design distribution $P(C|T^{(i)})$ rather than detailed training trials. The detailed search pipeline is summarized in Algorithm 1.
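The sketch below implements Eq. 3 and the prior-guided random sampling described above; it consumes distributions in the dict-of-dicts format of the design_distribution sketch earlier, and the embeddings are assumed to be unit-norm NumPy vectors.

```python
import random

def transferred_prior(novel_emb, bank, dthres=0.5):
    """Eq. 3: distance-weighted average of design distributions of close tasks.
    bank: list of (task_embedding, design_distribution) pairs."""
    close = [(max(1.0 - float(novel_emb @ emb), 1e-8), dist) for emb, dist in bank]
    close = [(d, dist) for d, dist in close if d <= dthres]
    z = sum(1.0 / d for d, _ in close)  # normalizer of Eq. 3
    prior = {}
    for d, dist in close:
        for dim, choices in dist.items():
            for choice, p in choices.items():
                prior.setdefault(dim, {}).setdefault(choice, 0.0)
                prior[dim][choice] += (1.0 / d) * p / z
    return prior

def sample_config(prior):
    """Draw one configuration from the task-informed design prior."""
    return {dim: random.choices(list(c), weights=list(c.values()))[0]
            for dim, c in prior.items()}
```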
Algorithm 1 Summary of AUTOTRANSFER search pipeline
Require: A processed task-model bank $B_p = \{((D^{(i)}, L^{(i)}(\cdot)), z_e^{(i)}, P(C|T^{(i)}))\}$; a novel task $T^{(n)} = (D^{(n)}, L^{(n)}(\cdot))$; $U$ anchor models $M_1, \dots, M_U$; $R$, the number of repeats.
1: for $u = 1$ to $U$ do
2: for $r = 1$ to $R$ do
3: Initialize weights $\theta$ for anchor model $M_u$ randomly
4: Estimate the FIM $F \leftarrow \mathbb{E}_D[\nabla_\theta \log \mathbb{P}(M_u, y, \theta)\, \nabla_\theta \log \mathbb{P}(M_u, y, \theta)^\top]$
5: Extract the scale-invariant representation $\alpha_u^{(r)} \leftarrow m_2/m_1^2$ following Eq. 1
6: end for
7: $\bar{\alpha}_u \leftarrow \mathrm{mean}(\alpha_u^{(1)}, \alpha_u^{(2)}, \dots, \alpha_u^{(R)})$
8: end for
9: $z_f^{(n)} \leftarrow \mathrm{concat}(\bar{\alpha}_1, \bar{\alpha}_2, \dots, \bar{\alpha}_U)$
10: $z_e^{(n)} \leftarrow g(z_f^{(n)})$
11: Select the close task subset $S \leftarrow \{T^{(i)} \mid 1 - z_e^{(n)\top} z_e^{(i)} \leq d_{\text{thres}}\}$
12: Get the design prior $P_t(C|T^{(n)})$ by aggregating subset $S$ following Eq. 3
13: Start an HPO search algorithm with the task-informed design prior $P_t(C|T^{(n)})$
5 EXPERIMENTS
5.1 EXPERIMENTAL SETUP
Task-Model Bank: GNN-BANK-101.
To facilitate AutoML research with knowledge transfer, we collected GNN-BANK-101 as the first large-scale graph database that records reproducible design configurations and detailed training performance on a variety of tasks. Specifically, GNN-BANK-101 currently includes six tasks for node classification (AmazonComputers (Shchur et al., 2018), AmazonPhoto (Shchur et al., 2018), CiteSeer (Yang et al., 2016), CoauthorCS (Shchur et al., 2018), CoauthorPhysics (Shchur et al., 2018), Cora (Yang et al., 2016)) and six tasks for graph classification (PROTEINS (Ivanov et al., 2019), BZR (Ivanov et al., 2019), COX2 (Ivanov et al., 2019), DD (Ivanov et al., 2019), ENZYMES (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019)). Our design space follows You et al. (2020), and we extend it to include various commonly adopted graph convolution and activation layers. We extensively run 10,000 different models for each task, leading to 120,000 total task-model combinations, and record all training information, including train/val/test losses.
Benchmark Datasets. We benchmark AUTOTRANSFER on six different datasets following prior work (Qin et al., 2021). Our datasets include three standard node classification datasets (CoauthorPhysics (Shchur et al., 2018), CoraFull (Bojchevski and Günnemann, 2017) and OGB-Arxiv (Hu et al., 2020)), as well as three standard benchmark graph classification datasets, (COX2 (Ivanov et al., 2019), IMDB-M (Ivanov et al., 2019) and PROTEINS (Ivanov et al., 2019)). CoauthorPhysics and CoraFull are transductive node classification datasets, so we randomly assign nodes into train/valid/test sets following a 50%:25%:25% split (Qin et al., 2021). We randomly split graphs following a 80%:10%:10% split for the three graph classification datasets (Qin et al., 2021). We follow the default train/valid/test split for the OGB-Arxiv dataset (Hu et al., 2020). To make sure there is no information leakage, we temporarily remove all records related to the task from our task-model bank if the dataset we benchmark was collected in the task-model bank.
Baselines. We compare our methods with the state-of-the-art approaches for GNN AutoML. We use GCN and GAT with default architectures following their original implementations as baselines. For multi-trial NAS methods, we consider GraphNAS (Gao et al., 2020). For one-shot NAS methods, we include DARTS (Liu et al., 2018) and GASSO (Qin et al., 2021). GASSO is designed for transductive settings, so we omit it for graph classification benchmarks. We further provide results of HPO algorithms based on our proposed search space as baselines: Random, Evolution, TPE (Bergstra et al., 2011) and HyperBand (Li et al., 2017).
By default, we allow at most 30 search trials for all the algorithms, i.e., an algorithm can train 30 different models and keep the model with the best accuracy. We use the default setting for one-shot NAS algorithms (DARTS and GASSO), as they only train a super-model once and can efficiently evaluate different architectures. We are mostly interested in studying the few-trial regime where most advanced search algorithms degrade to random search. Thus we additionally include a random search (3 trials) baseline where we pick the best model out of only 3 trials.
5.2 EXPERIMENTS ON SEARCH EFFICIENCY
We evaluate AUTOTRANSFER by reporting the average best test accuracy among all trials considered over ten runs of each algorithm in Table 1. The test accuracy collected for each trial is selected at the epoch with the best validation accuracy. By comparing results from random search (3 trials) and AUTOTRANSFER (3 trials), we show that our transferred task-informed design prior significantly improves test accuracy in the few-trial regime, and can be very useful in computationally constrained environments. Even if we increase the number of search trials to 30, AUTOTRANSFER still demonstrates non-trivial improvement compared with TPE, indicating that our proposed pipeline has advantages even when computational resources are abundant. Notably, with only 3 search trials, AUTOTRANSFER surpasses most of the baselines, even those that use 30 trials.
To better understand the sample efficiency of AUTOTRANSFER, we plot the best test accuracy found at each trial in Figure 3 for the OGB-Arxiv and TU-PROTEINS datasets. We notice that the advanced search algorithms (Evolution and TPE) have no advantages over random search in the few-trial regime, since the amount of prior search data is not yet sufficient to infer potentially better design configurations. On the contrary, by sampling from the transferred design prior, AUTOTRANSFER achieves significantly better average test accuracy in the first few trials. The best test accuracy at trial 3 of AUTOTRANSFER surpasses its counterpart at trial 10 for every other method.
5.3 ANALYSIS OF TASK EMBEDDINGS
Qualitative analysis of task features. To examine the quality of the proposed task features, we visualize the proposed task similarity matrix (Figure 4 (b)) along with the task similarity matrix (Figure 4 (a)) proposed in GraphGym. We show that our proposed task similarity matrix captures similar patterns as GraphGym's task similarity matrix while being computed much more efficiently by omitting training. We notice that the same type of tasks, i.e., node classification and graph classification, share more similarities within each group. As a sanity check, we verified that the closest task in the bank with respect to CoraFull is Cora. The top 3 closest tasks for OGB-Arxiv are AmazonComputers, AmazonPhoto, and CoauthorPhysics, all of which are node classification tasks.
2 Results come from the original paper (Qin et al., 2021).
Generalization of projection function g(·). To show that the proposed projection function $g(\cdot)$ can generate task embeddings that generalize to novel tasks, we conduct leave-one-out cross-validation with all tasks in our task-model bank. Concretely, for each task considered as a novel task $T^{(n)}$, we use the rest of the tasks, along with their distance metric $d_g(\cdot,\cdot)$ estimated by GraphGym's exact but computationally expensive metric space, to train the projection function $g(\cdot)$. We calculate the Kendall rank correlation over task similarities for Task Feature (without $g(\cdot)$) and Task Embedding (with $g(\cdot)$) against the exact task similarities. The average rank correlation and the standard deviation over ten runs are shown in Figure 4 (c). We find that with the proposed $g(\cdot)$, our task embeddings indeed correlate better with the exact task similarities and, therefore, generalize better to novel tasks.
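For reference, the Kendall rank correlation used throughout this analysis can be computed with SciPy; the similarity vectors below are made-up numbers for illustration only.

```python
from scipy.stats import kendalltau

# Similarities of bank tasks to one held-out task, under the exact (GraphGym)
# metric and under the learned task embeddings (illustrative values only).
exact_sims     = [0.91, 0.40, 0.77, 0.12, 0.55]
embedding_sims = [0.88, 0.35, 0.80, 0.20, 0.49]

tau, p_value = kendalltau(exact_sims, embedding_sims)
print(f"Kendall rank correlation: {tau:.2f} (p = {p_value:.3f})")
```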
Ablation study on alternative task space design. To demonstrate the superiority of the proposed task embedding, we further compare it with alternative task features. Following prior work (Yang et al., 2019), we use the normalized losses over the first 10 steps as the task feature. The results on OGB-Arxiv are shown in Table 2. Compared to AUTOTRANSFER’s task embedding, the task feature induced by normalized losses has a lower ranking correlation with the exact metric and yields worse performance. Table 2 further justifies the efficacy of using the Kendall rank correlation as the metric for task embedding quality, as higher Kendall rank correlation leads to better performance.
6 CONCLUSION
In this paper, we study how to improve AutoML search efficiency by transferring existing architectural design knowledge to novel tasks of interest. We introduce a task-model bank that captures the performance over a diverse set of GNN architectures and tasks. We also introduce a computationally efficient task embedding that can accurately measure the similarity between different tasks. We release GNN-BANK-101, a large-scale database that records detailed GNN training information of 120,000 task-model combinations. We hope this work can facilitate and inspire future research in efficient AutoML to make deep learning more accessible to a general audience.
ACKNOWLEDGEMENTS
We thank Xiang Lisa Li, Hongyu Ren, Yingxin Wu for discussions and for providing feedback on our manuscript. We also gratefully acknowledge the support of DARPA under Nos. HR00112190039 (TAMI), N660011924033 (MCS); ARO under Nos. W911NF-16-1-0342 (MURI), W911NF-16-10171 (DURIP); NSF under Nos. OAC-1835598 (CINES), OAC-1934578 (HDR), CCF-1918940 (Expeditions), NIH under No. 3U54HG010426-04S1 (HuBMAP), Stanford Data Science Initiative, Wu Tsai Neurosciences Institute, Amazon, Docomo, GSK, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding entities.
A ADDITIONAL IMPLEMENTATION DETAILS
Runtime analysis. We have empirically shown that AUTOTRANSFER can significantly improve search efficiency by reducing the number of trials needed to achieve reasonably good accuracy. The only overhead we introduced is the procedure of estimating task embeddings. Since we use a randomly initialized architecture, extracting each task feature only requires at most one forward and a few backward passes from a single minibatch of data. The wall-clock time depends on the size of the network and data structure. In our experiments, it typically takes a few seconds on an NVIDIA T4 GPU. We repeat the task feature extraction process 5 times for each anchor model, and for a total of 12 anchor models. Thus, the wall-clock time of the overhead for computing the task embedding of a novel task is within a few minutes. We note that the length of this process is generally comparable to one trial of training on a small-sized dataset, and the time saved is much more significant for large-scale datasets.
Details for task-model-bank training. Our GNN model specifications are summarized in Table 3. Our code base was developed based on GraphGym (You et al., 2020). For all the training trials, we use the Adam optimizer and a cosine learning rate scheduler (annealed to 0, no restarting). We use L2 regularization with a weight decay of 5e-4. We record losses and accuracies for the training, validation, and test splits every 20 epochs.
Training details for AUTOTRANSFER. We summarize the training procedure for the projection function g(·) in Algorithm 2. We set U = 12 and R = 5 throughout the paper. We use the same set of anchor model designs as those in GraphGym. We use a two-layer MLP with a hidden dimension of 16 to parameterize the projection function g(·). We use the Adam optimizer with a learning rate of 5e-3. We use margin = 0.1 and train the network for 1000 iterations with a batch size of 128. We adopt K = 16 when selecting the top K trials that are used to summarize design distributions.
Details for adapting TPE and evolution algorithms. We hereby illustrate how to compose the search algorithms with the transferred design priors. TPE is a Bayesian hyperparameter optimization method, meaning that it is initialized with a prior distribution to model the search space and updates the prior as it evaluates hyperparameter configurations and records their performance. We replace this prior distribution with the task-informed design priors. As evolutionary algorithms generally initialize a large population and iteratively prune and mutate existing networks, we replace the random network initialization with the task-informed design priors, as sketched below. As we mainly focus on the few-trial search regime, we set the number of fully random warm-up trials to 5 for both the TPE and evolution algorithms.
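A minimal sketch of the evolutionary variant follows, seeding the initial population from the task-informed prior after the fully random warm-up trials; the population size and the dict-based search space are illustrative assumptions.

```python
import random

def init_population(prior, search_space, pop_size=20, warmup=5):
    """Seed an evolutionary search: `warmup` uniform-random configurations,
    the rest drawn from the transferred design prior (a dict of dicts)."""
    population = []
    for i in range(pop_size):
        if i < warmup:  # fully random warm-up trials
            cfg = {dim: random.choice(choices) for dim, choices in search_space.items()}
        else:           # task-informed samples from the prior
            cfg = {dim: random.choices(list(c), weights=list(c.values()))[0]
                   for dim, c in prior.items()}
        population.append(cfg)
    return population
```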
B ADDITIONAL DISCUSSION
Limitations. In principle, AUTOTRANSFER leverages the correlation between model performance rankings among tasks to efficiently construct model priors. Thus, it is less effective if the novel task has large task distances with respect to all tasks in the task-model bank. In practice, users can continuously add additional search trials to the bank. As the bank size grows, it will be less likely that a novel task has a low correlation with all tasks in the bank.
Algorithm 2 Training Pipeline for the projection function g(·)
Require: Task features $\{z_f^{(i)} \mid T^{(i)}\}$ extracted for each task from the task-model bank; distance measure $d_g(\cdot,\cdot)$ estimated in GraphGym.
1: for each iteration do
2: Sample $T^{(i)}, T^{(j)}, T^{(k)}$
3: $z_e^{(i)}, z_e^{(j)}, z_e^{(k)} \leftarrow g(z_f^{(i)}), g(z_f^{(j)}), g(z_f^{(k)})$
4: $y \leftarrow 1$ if $d_g(T^{(i)}, T^{(j)}) < d_g(T^{(i)}, T^{(k)})$, else $-1$
5: Optimize the objective function $\mathcal{L}_r(z_e^{(i)}, z_e^{(j)}, z_e^{(k)}, y)$ in Eq. 2
6: end for
Social Impact. Our long-term goal is to provide a seamless GNN infrastructure that simplifies the training and deployment of ML models on structured data. Efficient and robust AutoML algorithms are crucial to making deep learning more accessible to people who are interested but lack deep learning expertise as well as those who lack the massive computational budget AutoML traditionally requires. We believe this paper is an important step to provide AI tools to a broader population and thus allow for AI to help enhance human productivity. The datasets we used for experiments are among the most widely-used benchmarks, which should not contain any undesirable bias. Besides, training/test losses and accuracies are highly summarized statistics that we believe should not incur potential privacy issues.
C ADDITIONAL RESULTS
Search efficiency. We summarize in Table 4 the average number of trials needed to surpass the average best accuracy found by TPE with 30 trials. We show that AUTOTRANSFER reduces the number of explored architectures by an order of magnitude.
Ablation study on number of anchor models and task embedding design. We empirically demonstrate in Table 5 how the number of anchor models affects the rank correlation. While 3 anchor models are not enough to capture the task distance, we found that 9 and 12 offer a satisfactory trade-off between capturing task distance and computational efficiency. Furthermore, we empirically demonstrate in Table 6 that the learned task embedding space is superior to the task feature space in terms of both correlation and final search performance.
Visualizing model designs. We visualize the transferred design distributions and the ground-truth design distributions on the TU-PROTEINS dataset in Figure 5, as well as on the Coauthor-Physics dataset in Figure 6. We observe that the transferred design distributions correlate positively with the ground truth on most of the design choices.
Figure 5: We plot the transferred design distributions and ground truth design distributions on the TU-PROTEINS dataset.
[Figure panels: per-dimension bar charts comparing the Transferred Design Prior against the Ground Truth Prior for gnn.layers_pre_mp {1, 2}, gnn.layers_mp {2, 4, 6, 8}, gnn.layers_post_mp {2, 3}, gnn.stage_type {skipconcat, skipsum, stack}, gnn.agg {add, max, mean}, gnn.dim_inner {64, 256}, gnn.layer_type {gatconv, gatconv2head, gatconv4head, generalconv, ginconv, sageconv, gcnconv}, gnn.act {elu, lrelu_01, prelu, relu}, optim.base_lr {0.001, 0.01}, and optim.max_epoch {200, 800, 1600}.]
Figure 6: We plot the transferred design distributions and ground truth design distributions on the Coauthor-Physics dataset. | 1. What is the focus and contribution of the paper on AutoTransfer?
2. What are the strengths and weaknesses of the proposed transfer learning approach?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Why did the authors choose specific datasets for evaluation, and why are the results not reported on other datasets?
5. What is the significance of the benchmark dataset provided by the authors, and how might it facilitate research on transfer learning graph neural networks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper "AutoTransfer: AutoML with Knowledge Transfer - An Application to Graph Neural Networks" makes two major contributions: First, the authors devise a benchmark for searching graph neural networks, consisting of 12 task and comprising 120,000 evaluated task-model combinations. Second, they propose a method for transfer learning, suggesting a prior based on past experience for searching neural architectures on a new dataset.
Strengths And Weaknesses
[+] The provided benchmark dataset is valuable and hopefully facilitates and enables research on transfer learning for graph neural networks.
[+] The proposed transfer learning approach seems to perform reasonably well, refining the state of the art.
[+] To the best of my knowledge, the proposed method is novel.
[-] The authors state that all GNNs can be described in terms of three stages: pre-processing with MLPs, message passing layers, and post-processing with MLPs. However, I would argue that this is quite restrictive and not able to reflect all possible architectures. See for example
Lee, Junhyun, Inyeop Lee, and Jaewoo Kang. "Self-attention graph pooling." International conference on machine learning. PMLR, 2019.
and
Ma, Yao, et al. "Graph convolutional networks with eigenpooling." Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining. 2019.
where pooling operations are added between the convolutions, resulting in iterative changes to the graph structure. Of course, such a benchmark database needs to be restricted in some sense so that the set of candidates is reasonably limited. However, I would suggest rephrasing the statement in Section 4.2 a little bit so that it makes clear that the benchmark limits itself to such GNNs.
[-] It is not clear why the authors chose the mentioned datasets for evaluation and why results on the other datasets are not reported.
Clarity, Quality, Novelty And Reproducibility
There is already another transfer learning approach named "Auto-Transfer" and frankly speaking the name is not very innovative or self-describing: Murugesan, Keerthiram, et al. "Auto-Transfer: Learning to Route Transferrable Representations." arXiv preprint arXiv:2202.01011 (2022).
Why is only Kendall's correlation coefficient considered? There could be different degrees of similarity, something like weak similarity where only a top-k ranking needs to be consistent. However, since one is not interested in the precise ranking of low performers, one could argue that this is less relevant for comparing similar datasets. Maybe a correlation measure such as normalized discounted cumulative gain (NDCG) would be a better choice here? The authors state that they see a justification for Kendall's tau in the results but do not discuss other ranking correlation measures.
Clarity
The clarity is overall quite good. Language is clear and the paper is well structured.
Quality and Novelty
The quality of the paper is very good and the approach is novel - at least to the best of my knowledge.
Reproducibility
Important details regarding the methods evaluated are missing. In particular: how the competitors were parameterized, what kind of evolutionary algorithm was used as a baseline, how it was configured, etc.
ICLR | Title
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Abstract
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter-efficient fine-tuning by enforcing sparsity-aware weight updates on top of the pre-trained weights; and (ii) resource-efficient inference by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via magnitude-based pruning and $\ell_1$ sparse regularization. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, GPT-2, and DeBERTa) on dozens of datasets, consistently demonstrate highly impressive parameter-/training-/inference-efficiency, while maintaining competitive downstream transfer performance. For instance, our DSEE-BERT obtains about 35% inference FLOPs savings with < 1% trainable parameters and comparable performance to conventional fine-tuning.1
1 All code is provided in the supplement.
1 INTRODUCTION
Most recent NLP applications have been following the pre-train then fine-tune paradigm, where we start from a gigantic pre-trained model and fine-tune it towards downstream tasks. Conventional fine-tuning works by updating all of the parameters of the pre-trained model. As the size of pre-trained models grows, updating all parameters becomes less feasible in most practical scenarios, due to the expensive memory and computational requirements. For example, BERT$_{\text{BASE}}$ (Devlin et al., 2019) has 110M trainable parameters, while GPT-2 (Radford et al., 2019) has up to 1.5B and the largest version of GPT-3 (Brown et al., 2020) has an astonishing 175B trainable parameters. As such, conventional fine-tuning of the larger models could require hundreds of GPU hours. Another downside of this paradigm is that it requires storing as many parameters as in the large-scale pre-trained models for each downstream task. That poses impediments to deployment in real-world resource-constrained environments.
One solution to address the extensive resource requirement of conventional fine-tuning is model pruning (LeCun et al., 1990; Han et al., 2015a; Ren et al., 2018; He et al., 2017; Liu et al., 2017), where unnecessary weights are eliminated to shrink the model size. For example, Chen et al. (2021b) leverages $\ell_1$ regularization to remove insignificant attention heads and saves 35%∼45% training time with comparable performance. Guo et al. (2020) learns sparse task-specific "diff" vectors for various downstream fine-tuning tasks, leading to great memory savings. All these studies indicate that sparsity naturally arises during the fine-tuning of a general-purpose pre-trained model to some specialized downstream functionality. One potential interpretation of why sparsity arises is that different subsets of the parameters may be responsible for different downstream tasks and data domains (Sanh et al., 2020). However, identifying appropriate sparse pruning masks requires burdensome training of models with full weights, which can still be unaffordable for many practitioners. For example, fine-tuning a large pre-trained language model like GPT-3 for just one step consumes at least 1.2TB of VRAM and requires 96 NVIDIA Tesla GPUs (Hu et al., 2021).
One parallel alternative is designing parameter-efficient fine-tuning algorithms, which aim to optimize only a small portion of the weights while fixing most of the pre-trained weights during the downstream task training step. Pioneering works along this line, which utilize adapters (Houlsby et al., 2019) or low-rank decomposition (Hu et al., 2021), can significantly reduce the number of trainable parameters while preserving good fine-tuning performance. Introducing only a small number of task-specific parameters for each new downstream task can substantially improve the memory and deployment efficiency of models as it allows us to reuse the remaining unchanged/shared parameters. However, there are two major hurdles of current parameter-efficient fine-tuning: (i) it does not yield any inference efficiency gains since the full pre-trained weights are still required to calculate outputs; and (ii) current methods assume the weight update to be either sparse (Guo et al., 2020) or low-rank (Hu et al., 2021), yet those assumptions might be oversimplified and overly restrictive to allow for effective updates. For example, the low-rank assumptions on weight matrices might be overly strong, since some weights cannot be fit in the low-rank space (Yu et al., 2017). Recently, Chen et al. (2021a) also found that using both sparse and low-rank components performs better than either of them. These observations have inspired us to explore better parameter-efficiency methods.
To tackle both the resource- and parameter-efficiency issues of large-model fine-tuning, we explicitly draw on the prior of sparsity for both weight updates and the final weights, and establish a dually sparsity-embedded efficient tuning (DSEE) framework. From a pre-trained model, DSEE first adopts a sparsity-aware low-rank weight update to achieve parameter efficiency of the fine-tuning process; and then enforces a sparse weight structure by masking to achieve resource efficiency of the fine-tuned model at inference time. Our contributions can be summarized as follows:
• We propose dually sparsity-embedded efficient tuning (DSEE), which unifies sparsity-aware weight updates and sparse pre-trained weights in fine-tuning gigantic pre-trained models. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model.
• We exploit both unstructured and structured sparse patterns in the DSEE framework. For weight updates, we find that the injected sparsity prior greatly enhances existing parameter-efficient update schemes and further trims down the needed number of trainable parameters. For the final weights, we learn well-performing sparse masks from the pre-trained weights, leading to substantial computation and memory reductions at inference.
• Extensive experiments demonstrate the effectiveness of our proposal across various representative pre-trained language models, such as BERT, GPT-2, and DeBERTa, and on diverse evaluation benchmarks, such as E2E, DART, WebNLG, and GLUE. Specifically, on (E2E, DART, WebNLG), our methods achieve BLEU scores of (69.75, 55.40, 46.66) with less than 1% of the total trainable parameters. On BERT, our method can save about 35% FLOPs, compared to traditional downstream fine-tuning.
2 RELATED WORK
Pre-trained language models. Transformer (Vaswani et al., 2017) is a sequence-to-sequence model that heavily uses the concept of self-attention. Later on, numerous transformer-based models were proposed and showed overwhelming performance on natural language processing (NLP) and computer vision (CV) tasks. Devlin et al. (2019) designed BERT, which pre-trains deep bidirectional representations from unlabeled text and reaches powerful performance. Liu et al. (2019) found that BERT was significantly undertrained and proposed RoBERTa, an enhanced training recipe for BERT that can greatly boost performance. He et al. (2020) proposed decoding-enhanced BERT with disentangled attention (DeBERTa), which incorporates a disentangled attention mechanism and an improved mask encoder to enhance BERT and RoBERTa. More variants, such as XLNet, ALBERT, and ELECTRA, have also been proposed in recent years (Yang et al., 2019; Lan et al., 2019; Clark et al., 2019). The series of GPT models (Radford et al., 2019; Brown et al., 2020) were later developed based on transformer decoder blocks rather than encoder blocks as in BERT, and have again shown superior performance on different tasks. These large models, pre-trained on large amounts of unlabeled text, need to be further fine-tuned on downstream tasks for better performance. One of the accompanying disadvantages of these pre-trained models with tremendous parameter counts (e.g., 175B in GPT-3) is the unaffordable computational cost of further fine-tuning.
Pruning and Low-rank decomposition. Pruning is a widely used model compression technique. It can reduce the number of parameters inside models, which can bring training and inference efficiency. With weight pruning (Han et al., 2015b) being one of the most effective methods (Gordon et al., 2020), various criteria have been proposed to select insignificant weights for pruning, such as Taylor approximation (Molchanov et al., 2019), Hessian score approximation (Hassibi & Stork, 1993), and other saliency scores such as SNIP (Lee et al., 2018), GraSP (Wang et al., 2019) and SynFlow (Tanaka et al., 2020). Several pruning methods have been commonly adapted to compress language models (McCarley et al., 2019; Gordon et al., 2020; Sanh et al., 2020; Wang et al., 2020; Chen et al., 2021b). Specifically, McCarley et al. (2019) proposed to prune attention heads that had less contribution to the model. Wang et al. (2020) pruned BERT models using low-rank factorization and $\ell_0$ regularization. Sanh et al. (2020) invented an improved version of magnitude pruning (i.e., pruning based on the weight change) that can better suit transfer learning. Chen et al. (2021b) performed structured pruning on BERT via $\ell_1$ sparse regularization, which removed a large portion of parameters and decreased the training cost.
Low-rank approximation (Ye, 2005) is also vastly studied. One classical scenario is robust principal component analysis (Candès et al., 2011), which decomposes a matrix into a low-rank plus a sparse component. Existing literature shows that in deep learning, learned over-parameterized models often naturally bear approximate low-rank weight structures (Oymak et al., 2019; Yu et al., 2017). Some works (Jaderberg et al., 2014; Povey et al., 2018; Sainath et al., 2013; Zhang et al., 2014; Zhao et al., 2016) have explicitly imposed the low-rank constraint during training. Wang et al. (2020); Hu et al. (2021) utilized low-rank decomposition to shrink the model size and trim down the trainable parameters during fine-tuning. However, to the best of our knowledge, integrating sparsity and low-rank structures has never been studied before for efficient fine-tuning of pre-trained language models.
Parameter-efficient adaptation. Parameter-efficient adaptation aims at reducing the number of trainable parameters when fine-tuning models across different downstream domains. Unlike pruning, it generates updates that can be represented by fewer parameters instead of building sparse models. Various approaches have been invented to achieve this goal. Rebuffi et al. (2017); Houlsby et al. (2019) inserted and only trained adapters between existing layers, whose parameters are much fewer compared to the pre-trained models. Guo et al. (2020) leveraged $\ell_0$ regularization to limit the number of non-zero elements in the update vectors. Lester et al. (2021); Li & Liang (2021) introduced efficient prompt tuning, which optimizes only a small continuous task-specific vector. Hu et al. (2021) proposed a low-rank decomposition-based method that can also significantly reduce the number of trainable parameters. However, the fine-tuned models yielded by these methods have the same number of weights as the pre-trained starting point; hence they contribute no resource efficiency to the final model.
3 METHODOLOGY
In this section, we begin by describing our notations and definitions of sparsity generation and parameter-efficient fine-tuning in Section 3.1. Then, we introduce the (dually) sparsity-embedded efficient fine-tuning algorithms in Sections 3.2 and 3.3.
3.1 PRELIMINARIES
Sparsity generation and resource-efficient fine-tuning. We adopt both unstructured and structured pruning methods to produce sparsity. Both can lead to resource efficiency, including memory and computation savings.
Let W ∈ R^{m×n} denote a weight matrix. The goal of pruning is to find a binary mask S ∈ {0, 1}^{‖W‖0}, where ‖W‖0 is the number of parameters in W. The mask S is applied to W and results in a sparse weight W ⊙ S. Unstructured pruning saves only memory cost; structured pruning additionally saves computational cost, since the sparse weights can be made smaller by removing all-zero columns or rows. However, the performance of networks after structured pruning is often inferior to that of their unstructured counterparts.
Parameter-efficient fine-tuning. To leverage the knowledge in the pre-trained weights W, downstream models learn a task-specific weight update ∆W via fine-tuning and generate predictions with weights W + ∆W. The output of the model is therefore calculated as (W + ∆W)x, where x is the input. Since ∆W has the same size as W, learning the update matrices usually requires massive resources as the size of the pre-trained model increases. Parameter-efficient fine-tuning tries to solve this problem by using as few trainable parameters as possible to represent ∆W, while maintaining competitive downstream fine-tuning performance. Previous literature reaches this goal by either sparsifying the weight update matrices ∆W (Guo et al., 2020) or leveraging low-rank decomposed matrices to compute ∆W (Hu et al., 2021); in our work we combine both.
Algorithm 1: Sparsity-Embedded Low-Rank Decomposition
Input: Pretrained weights W, number of non-zero elements N
Output: Sparse matrices S2
1  Initialize S2 to be an empty set.
2  for each self-attention projection weight wi in W do
       /* Decomposition */
3      Perform matrix decomposition: wi ≈ UV + S′ by solving the optimization problem in Eqn. 1.
       /* Identify important elements to form S2 */
4      Threshold S′: keep the N elements of S′ with the largest magnitudes, and set the rest to 0.
5      Append S′ to S2.
6  end
Algorithm 2: DSEE
Input: Pretrained weights W, number of non-zero elements N, desired sparsity s, loss function L
Output: Sparse mask S1, matrices U, V, S2
1  Decompose W into U, V and S2.
2  (Re-)Initialization: U = 0, V ∼ N(0, 0.02), Ω = indices of the non-zero elements in S2, S2 = 0.
   /* I: train before pruning */
3  Train U, V, S2 with respect to L under the constraint PΩC(S2) = 0.
   /* II: prune the model */
4  Prune (1 − s)% of the parameters in W globally by sorting the magnitudes of W + UV + S2, deriving the sparsity mask S1.
   /* III: tune after pruning */
5  Tune U, V, S2 for E epochs to recover the performance.
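For concreteness, the following is a minimal, self-contained PyTorch sketch of Algorithm 2 on a synthetic regression problem. The dimensions, data, learning rate, and the toy Ω are our own assumptions for illustration; in the actual pipeline the procedure runs per self-attention projection matrix, with Ω obtained from Algorithm 1.

```python
import torch

torch.manual_seed(0)
m, n, r, sparsity = 32, 32, 4, 0.5
W = torch.randn(m, n)                                            # frozen "pre-trained" weight
omega = (torch.randint(0, m, (8,)), torch.randint(0, n, (8,)))   # toy support from Algorithm 1

U = torch.zeros(m, r, requires_grad=True)                        # U = 0
V = (0.02 * torch.randn(r, n)).requires_grad_()                  # V ~ N(0, 0.02)
s_vals = torch.zeros(8, requires_grad=True)                      # trainable entries of S2 on Omega
x, y = torch.randn(16, n), torch.randn(16, m)                    # toy regression data
opt = torch.optim.AdamW([U, V, s_vals], lr=1e-2)

def effective_weight():
    s2 = torch.zeros(m, n)
    s2[omega] = s_vals                                           # P_{Omega^C}(S2) = 0 by construction
    return W + U @ V + s2

for _ in range(200):                                             # I: train before pruning
    loss = ((x @ effective_weight().T - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

mag = effective_weight().detach().abs().flatten()                # II: global magnitude pruning
thresh = mag.kthvalue(int(mag.numel() * sparsity)).values
S1 = (effective_weight().detach().abs() > thresh).float()

for _ in range(100):                                             # III: tune after pruning
    s2 = torch.zeros(m, n)
    s2[omega] = s_vals
    loss = ((x @ (W * S1 + U @ V + s2).T - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```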
3.2 SPARSITY-EMBEDDED PARAMETER-EFFICIENT FINE-TUNING
A recent study (Hu et al., 2021) enforces a low-rank constraint on the weight update tensors ∆W and obtains a satisfactory trade-off between parameter efficiency and model quality. However, as revealed experimentally by Yu et al. (2017), part of the important information in the trained weights also scatters outside the low-rank subspace, creating sparse “residuals”. Inspired by these observations, we investigate a new sparsity-aware low-rank subspace of ∆W and introduce the first component of our proposal in Figure 1, i.e., sparsity-embedded parameter-efficient fine-tuning.
Specifically, we identify a sparse subspace Ω onto which we can project our update matrices ∆W. The update matrix is then decomposed into two components: a low-rank part, represented by the multiplication of two low-rank matrices U ∈ R^{m×r} and V ∈ R^{r×n}, and a sparse residual
$$
S_2 = P_{\Omega}(\Delta W), \quad \text{where } \big[P_{\Omega}(\Delta W)\big]_{i,j} =
\begin{cases}
\Delta W_{i,j}, & (i,j) \in \Omega\\
0, & (i,j) \in \Omega^{C}
\end{cases}, \qquad i = 1,\dots,m,\ j = 1,\dots,n.
$$
As illustrated in Figure 1, the update matrix ∆W can be approximated by UV + S2, in which U, V, and the non-zero elements of S2 are learnable parameters, while Ω is fixed once determined. Compared to full fine-tuning, which has m × n individual trainable parameters, our method has only (m + n) × r + card(Ω). If r is smaller than (mn − card(Ω))/(m + n) ≈ 0.5 min{m, n}, which is a loose bound, our method substantially reduces the number of trainable parameters for downstream fine-tuning. In practice, the value of r is very small compared to m and n, so the savings are considerable.
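As a concrete illustration of this count, take BERT-like dimensions (our example, not numbers from the paper). With $m = n = 768$, $r = 16$, and $\mathrm{card}(\Omega) = 64$:
$$
mn = 589{,}824 \qquad \text{vs.} \qquad (m+n)\,r + \mathrm{card}(\Omega) = 1536 \times 16 + 64 = 24{,}640,
$$
roughly a 24× reduction in trainable parameters for a single matrix.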
The next question is how to find an appropriate Ω. Motivated by Candès et al. (2011), we formulate our sparsity-aware decomposition as the following optimization problem:
$$
\min_{U,\,V,\,S_2} \ \frac{1}{2}\left\| W - UV - S_2 \right\|_F^2 \quad \text{s.t.} \quad \operatorname{rank}(U) \le r,\ \ \operatorname{rank}(V) \le r,\ \ \operatorname{card}(S_2) \le c, \tag{1}
$$
where rank(·) denotes the rank and card(·) the cardinality of a matrix. Here we derive Ω from the pre-trained weights W, which differs from previous low-rank techniques for model compression, since we perform the decomposition prior to training and cannot access ∆W before fine-tuning. In this way, we effectively assume that ∆W shares a similar crucial subspace Ω with W (Sun et al., 2018). We use the GreBsmo algorithm (Zhou & Tao, 2013) to solve this optimization problem. This method adopts an idea similar to random projection and can solve the optimization problem rapidly. The details of the algorithm are in the Appendix.
Algorithm 1 summarizes the detailed procedure of our proposal. We first decompose the pretrained weights W into three matrices: U, V, and S2. U and V are low-rank matrices and S2 is a sparse matrix. After the decomposition, we re-initialize U and V to be of shape R^{m×r} and R^{r×n}. The initial values of U are set to 0, and the initial values of V follow a normal distribution, as in Hu et al. (2021). We do not use the decomposed values inside S2, but only the indices of its non-zero elements. We collect the indices of the non-zero elements into Ω, and reset the values of S2 back to 0. During training, we only update the elements of S2 whose indices are in Ω, as well as U and V. A minimal implementation sketch is given below.
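One simple way to realize the constraint that only the Ω-indexed entries of S2 are trainable is to store just those values as a flat parameter vector and scatter them into a dense matrix in the forward pass. The following PyTorch module is a minimal sketch under that assumption (all names are ours):

```python
import torch
import torch.nn as nn

class SparseLowRankUpdate(nn.Module):
    """Trainable update Delta W = UV + S2, where only the entries of S2
    indexed by the fixed support Omega are learnable."""
    def __init__(self, m, n, r, omega_rows, omega_cols):
        super().__init__()
        self.m, self.n = m, n
        self.U = nn.Parameter(torch.zeros(m, r))             # U initialized to 0
        self.V = nn.Parameter(0.02 * torch.randn(r, n))      # V ~ N(0, 0.02)
        self.register_buffer("rows", omega_rows)             # Omega is fixed once found
        self.register_buffer("cols", omega_cols)
        self.s2_vals = nn.Parameter(torch.zeros(len(omega_rows)))  # S2 values reset to 0

    def forward(self):
        s2 = torch.zeros(self.m, self.n, device=self.U.device)
        s2[self.rows, self.cols] = self.s2_vals              # scatter Omega entries only
        return self.U @ self.V + s2

# Toy usage: an 8-element Omega on a 64 x 64 update matrix.
idx = torch.randint(0, 64, (2, 8))
delta_w = SparseLowRankUpdate(64, 64, r=4, omega_rows=idx[0], omega_cols=idx[1])()
```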
3.3 DUALLY SPARSITY-EMBEDDED EFFICIENT TUNING (DSEE)
Besides parameter efficiency, resource efficiency is another major goal of our proposed framework. Sparsity-embedded low-rank weight updates alone do not necessarily lead to computational cost reductions, since they work together with the dense pre-trained weights, and the massive computation during the forward pass is inevitable. To achieve both resource and parameter efficiency, we also promote a sparse weight structure in the final fine-tuned model, as demonstrated by the sparse mask S1 in Figure 1. Our DSEE explores both unstructured and structured sparsity patterns, as follows.
▷ Pruning with unstructured sparse masks. The unstructured sparse mask is the most widely used mask, since it usually yields almost undamaged performance compared to its dense counterpart (Han et al., 2015a). It applies fine-grained manipulation to each individual model weight, although the resulting irregular sparse masks bring limited hardware speedup (Wen et al., 2016). Specifically, based on the weight matrix W, a sparse mask S1 can be calculated with various approaches, either heuristic or optimization-based. We adopt one-shot magnitude pruning (Han et al., 2015a) in DSEE, due to its simplicity and competitive performance.
In our context, we create the sparse pattern S1 from W + UV. First, we sort the magnitudes (i.e., absolute values) of individual weights and remove the parameters with the bottom (1 − k)% of magnitudes. Second, we further tune the values of the low-rank update matrices for a few epochs, which is important for retaining performance, as stated in Han et al. (2015b). Third, the derived sparse mask S1 is applied to the pretrained weights only and does not affect the output of the update matrices, as W ⊙ S1 + UV. For an input sample x, the output is calculated as (W ⊙ S1 + UV + S2)x = (W ⊙ S1)x + (UV + S2)x, where the first term (W ⊙ S1)x brings significant resource efficiency and the second term brings parameter efficiency. A short sketch of this computation follows.
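A minimal sketch of this masked forward pass, assuming a one-shot magnitude mask and toy shapes (both are our assumptions):

```python
import torch

def magnitude_mask(w, keep=0.5):
    """One-shot magnitude pruning: keep the top keep-fraction of |w|, zero the rest."""
    k = int(w.numel() * keep)
    thresh = w.abs().flatten().kthvalue(w.numel() - k + 1).values  # k-th largest |w|
    return (w.abs() >= thresh).float()

# Assumed toy shapes for illustration only.
W = torch.randn(64, 64)                      # frozen pre-trained weight
U, V = torch.zeros(64, 4), 0.02 * torch.randn(4, 64)
S2 = torch.zeros(64, 64)                     # sparse residual, supported on Omega
x = torch.randn(64)

S1 = magnitude_mask(W + U @ V, keep=0.5)     # mask derived from W + UV
# (W * S1) @ x is the sparse, resource-efficient term;
# (U @ V + S2) @ x involves only the small trainable update.
y = (W * S1) @ x + (U @ V + S2) @ x
```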
▷ Pruning with structured sparse masks. The second method exploits structured sparse masks, whose regular patterns are more hardware-friendly yet perform worse than unstructured masks. Our DSEE considers ℓ1 sparse regularization (Liu et al., 2017; Chen et al., 2021b) to craft high-quality structurally pruned subnetworks. More precisely, motivated by Chen et al. (2021b), we introduce learnable coefficients c before each attention head module and append an ℓ1 sparse penalty on c to the original loss. The detailed formulation is as follows:
We add trainable coefficients c before the attention heads. The parameters c are optimized together with the decomposed matrices, i.e., U, V, and S2. An extra term λ‖c‖1 is added to the training loss for sparse regularization; the value of λ is set to 1e-4. After training, we prune the attention heads that contribute least to the model (i.e., those with the lowest c). We use a layer-wise pruning scheme that prunes the same proportion of heads in each attention layer. After pruning, several epochs of tuning are run to recover the performance of the pruned model. The sizes of the update matrices change after structured pruning, so we also need to change the dimensions of U and V. Specifically, we change the size of V from R^{r×n} to R^{r×⌊n×s⌋}, where s is the pruning ratio of the corresponding self-attention layer and ⌊x⌋ is the largest integer not greater than x. The size of S2 is shrunk accordingly. A sketch of this regularization and head-selection step is given below.
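A minimal sketch of the ℓ1-regularized loss and the layer-wise head selection described above (function and variable names are ours; the integration with an actual Transformer is omitted):

```python
import torch

def l1_head_loss(task_loss, head_coeffs, lam=1e-4):
    """Training loss with the l1 sparsity penalty lambda * ||c||_1 on per-head coefficients."""
    return task_loss + lam * sum(c.abs().sum() for c in head_coeffs)

def heads_to_prune(head_coeffs, ratio=0.25):
    """Layer-wise scheme: in every layer, select the same fraction of heads
    with the smallest |c| (i.e., the lowest contribution) for pruning."""
    selected = []
    for c in head_coeffs:                          # one coefficient vector per layer
        n_drop = int(len(c) * ratio)
        selected.append(torch.argsort(c.abs())[:n_drop])
    return selected

# Toy usage: 12 layers x 12 heads of learnable coefficients.
coeffs = [torch.nn.Parameter(torch.ones(12)) for _ in range(12)]
loss = l1_head_loss(torch.tensor(0.7), coeffs)
drop = heads_to_prune([c.detach() for c in coeffs], ratio=0.25)
```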
4 EXPERIMENT RESULTS
Datasets and models. We use three classical pre-trained language models in our experiments: BERTBASE (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and DeBERTa-large (He et al., 2020), which have 12/24/24 layers with hidden size of 768/1024/1024 and 110/354/380M trainable parameters, respectively. For BERT and DeBERTa, we evaluate our method on the GLUE benchmarks (Wang et al., 2018). For GPT-2, we use E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Nan et al., 2021) for evaluations.
Training and evaluation details. For BERT and DeBERTa, we follow the default settings in Wolf et al. (2019); Devlin et al. (2019). We use the AdamW (Loshchilov & Hutter, 2017) optimizer for downstream training. The batch size for BERT and DeBERTa is 32 (8 per GPU). We train for three epochs to search for the sparse mask S1, and continue to tune the model for three more epochs until convergence. The initial learning rates are reported in Appendix A1, and we decay them linearly. For GPT-2, we follow the same hyperparameters as in Hu et al. (2021). We train the model for five epochs to search for the mask, and further tune the pruned model for two epochs.
Evaluation Metrics. For tasks on BERT and DeBERTa, we use accuracy, Matthews correlation, and Pearson's r in the evaluation by default, which is the conventional setting in the NLP community. On GPT-2, we use BLEU (Papineni et al., 2002), METEOR (Denkowski & Lavie, 2014), TER (Snover et al., 2006) and NIST (Doddington, 2002) as our evaluation metrics. To evaluate the efficiency of models, we use the number of trainable parameters to measure parameter efficiency, the sparsity in pretrained weights to measure resource efficiency, and FLOPs to evaluate computational efficiency. We add a star sign (∗) to indicate structured sparsity (i.e., structurally pruning the pretrained weights).
Baselines. On BERT and DeBERTa, we conduct comparisons with the following baseline methods: ① Fine-tune: directly fine-tunes the full model; ② EarlyBERT (Chen et al., 2021b); ③ BERT Tickets (Chen et al., 2020); ④ OMP: prunes the fine-tuned weights by magnitude and fine-tunes afterwards; and ⑤ LoRA (Hu et al., 2021). We also report Huggingface's fine-tuning results.
On GPT-2, we conduct comprehensive comparisons with multiple baseline methods: ① Adapters (Houlsby et al., 2019): insert adapters after linear layers; ② FT-Top2: fine-tune the top 2 layers only; ③ Prefix: prefix tuning introduced by Li & Liang (2021); and ④ LoRA: low-rank decomposition, which assumes ∆W = UV; most results are directly cited from Hu et al. (2021).
4.1 EFFICIENT TUNING WITH DSEE
Parameter-efficiency with sparse masks. To verify that simple low-rank adaptation (i.e., LoRA) has limitations, we compare its performance with that of our sparsity-embedded efficient fine-tuning. Table 1 shows that on MNLI, SST-2, CoLA, and STS-B, the simple decomposition form yields unsatisfactory results compared to the sparsity-embedded decomposition. Adding a small proportion of parameters (only 3072 trainable parameters) brings a performance boost to the model. Specifically, under rank 8, the metrics increase by 0.69/0.13/0.008/0.003 on SST-2/MNLI/CoLA/STS-B, respectively. Moreover, using only approximately half the number of trainable parameters, our method achieves performance comparable to the state-of-the-art method LoRA.
Table 2 shows the performance on GPT-2. For all three tasks, our method achieves results comparable to LoRA with rank four while using only about half of its trainable parameters, and it makes substantial performance gains over LoRA with rank two. On WebNLG, our method even achieves a higher BLEU score, 55.56 versus 55.29, with half of the trainable parameters. This phenomenon indicates that adding a sparse matrix to the low-rank decomposition can bring substantial improvements.
Resource- and parameter-efficiency with sparse masks. We report the performance of DSEE, including both the number of trainable parameters and the sparsity in the pretrained weights. We use an unstructured sparsity of 50%, and structured sparsities of 25% and 33%, which are equal to pruning 1/4 and 1/3 of the heads from each attention layer. For BERTBASE and DeBERTa-large, we set r (the low-rank dimension) to 16 and N (the number of non-zero elements in S2) to 64. We also prune each of the intermediate layers using a structured sparsity of 40%, as in Chen et al. (2021b). For GPT-2, we set r to 2 in DSEE and 4 in LoRA. The choice of N remains 64.
Table 3 summarizes the performance on BERTBASE. On BERTBASE, our DSEE method can: ① achieve a high level of parameter efficiency and retain performance on various downstream tasks. With only about 600K trainable parameters (110M/0.5929M ≈ 200× smaller than the full model), DSEE achieves comparable performance on all GLUE tasks. ② achieve resource efficiency in the final models. Using either unstructured or structured sparse masks reaches our goal of reducing the number of parameters in the pretrained weights without sacrificing much performance. At a sparsity level of 50%, unstructured DSEE achieves an accuracy of 90.46% on QNLI, surpassing the fine-tuning baseline, whose result is 90.15%. At a sparsity level of 25%, structured DSEE shows performance gains ranging from 0.46% to 2.22%, except on QQP and QNLI. At a sparsity level of 33%, the performance gains range from 0.11% to 2.04%, except on QQP and QNLI. ③ benefit from parameter efficiency compared to other baselines, e.g., EarlyBERT and BERT Tickets, updating the weights more efficiently. These observations validate the effectiveness of our DSEE method.
We also calculate the inference FLOPs of BERTBASE on the STS-B dataset. The original BERTBASE on STS-B takes 3.7835 × 10^14 FLOPs, while LoRA takes 3.8096 × 10^14 FLOPs, which is 0.69% higher. Conversely, the structured version of DSEE takes 2.4921 × 10^14 FLOPs at a sparsity of 25%, and 2.3867 × 10^14 FLOPs at a sparsity of 33%, which are 34.61% and 37.38% lower than LoRA, respectively. This indicates that a large proportion of the computational cost can be saved at the inference phase.
Table 4 summarizes the performance on GPT-2, which shares a similar trend. Unstructured DSEE achieves a 2000× reduction in trainable parameters and a 2× reduction in the final fine-tuned model size, with almost no loss in downstream task performance compared to conventional fine-tuning. Compared to LoRA, unstructured DSEE retains the same level of trainable parameters and downstream task performance, while showing a 2× reduction in the final fine-tuned model size. The structured DSEE on GPT-2 seems less competitive than on BERT, but it can still hold performance on E2E and WebNLG after pruning 25% of the heads in the attention modules.
Finally, we validate our method on DeBERTa. The results are displayed in Table 5. We use four datasets, CoLA, MNLI, MRPC, and RTE. Their sizes vary greatly, making them representative of the GLUE benchmark. DeBERTa is a larger model, so applying low-rank decomposition with our hyperparameters cannot match the performance of fine-tuning. Compared to LoRA, our method reaches higher performance on downstream tasks, albeit with slightly more training epochs. However, this slight extra cost for searching the sparse mask S1 can be amortized by the efficiency gained during future inference, since the burden on resources such as storing the pretrained weights is relieved.
4.2 UNDERSTANDING DSEE
The position of the embedded sparsity masks plays a crucial role in our proposed DSEE. For instance, applying sparse masks to the pre-trained weights or to the weight updates produces resource- and parameter-efficiency, respectively. Precisely, we compare four different methods on BERTBASE: one-shot magnitude weight pruning (W ⊙ S1), two DSEE variants (W ⊙ S1 + UV and W + UV + S2), and the full DSEE (W ⊙ S1 + UV + S2). The results are collected in Table 6. We can see that: ① No embedded sparsity in the pretrained weights yields the overall best performance. This is intuitive, since the valuable knowledge learned from massive pretraining data is intact and not removed. ② Embedding sparsity into the pretrained weights does little to no harm to performance. A similar conclusion is drawn by Chen et al. (2020), who validated that a pruned model can have performance similar to the dense model. ③ Using sparsity-embedded efficient fine-tuning with the sparse pre-trained weights also preserves performance, while achieving parameter efficiency.
4.3 ABLATION AND VISUALIZATION
Different methods to generate the sparse matrix S2. In this section, we compare the performance of BERTBASE using different methods to generate the sparse matrix S2 and the corresponding space Ω. Besides the aforementioned matrix decomposition method, we also try to (1) randomly sample indices into Ω; and (2) directly select the indices of the elements of W with the highest magnitudes into Ω. In Figure 2 we can see that the matrix decomposition method yields the best metrics overall. It is also noteworthy that sparse matrices generated by the other methods sometimes harm performance.
Another factor is the number of non-zero elements inside S2. More non-zero elements in S2 reduce parameter efficiency, while fewer non-zero elements increase efficiency but may not benefit accuracy or other metrics much. Figure 2 also shows the relationship between the number of non-zero elements in S2 and the performance of the fine-tuned model on SST-2. From the graph, we can see that using N = 64 gives the most stable results compared to other choices. Another important piece of information we can derive from the figure is that a higher number of non-zero elements in S2 does not guarantee better performance.
Ablation of # ranks. The rank r of the low-rank decomposition is crucial to the transfer learning performance. A small r results in lower representation ability, while a large r brings more parameters and reduces parameter efficiency. To find the best value, we conduct experiments with different ranks r on four datasets: SST-2, MNLI, CoLA, and STS-B. The results are displayed in Figure 3. We add quadratic trend lines to the graph along with the discrete points to smooth the results. We can draw two conclusions from the graphs: ① Overall, the final performance is positively correlated with the number of trainable parameters; however, on some tasks (MNLI, CoLA) a higher number of parameters (namely, the r for U and V) leads to lower performance. ② With a sparse matrix embedded, the performance of the trained models can be improved within a range of trainable parameters. On SST-2 and CoLA, the applicable range seems to be 10^4.5 ∼ 10^6. On STS-B, our method consistently outperforms using the low-rank decomposition alone. On MNLI, the two methods behave similarly, while our method is slightly better.
Ablation of sparsity. Although creating sparsity in the pretrained weights W does not change the number of trainable parameters, different levels of sparsity control the number of non-zero elements and thereby influence the representation ability. Therefore, we conduct experiments on BERT with different sparsities, ranging from 10% to 60%. The results are in Figure A5. From the figures, we can draw two conclusions: ① DSEE outperforms magnitude pruning at low sparsity (< 50%) with respect to downstream task performance. ② DSEE surpasses vanilla magnitude pruning with respect to the number of parameters: for each self-attention layer, vanilla magnitude pruning needs to train all the weights, while DSEE only needs to update U, V, and S2.
Visualizations. Figure 4 shows the distribution of the weight changes. From the graph, we can see that most weights are located around 0. This phenomenon indicates that natural sparsity exists within the update matrices, motivating us to explore sparse structures along with matrix decomposition.
5 CONCLUSION
This paper draws on the prior of sparsity and establishes the DSEE framework. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model. On state-of-the-art large-scale language models (e.g., BERT, GPT, and DeBERTa) and across numerous datasets, DSEE consistently demonstrates highly impressive parameter, training, and inference efficiency, in addition to preserving competitive downstream transfer performance. Our future work targets extending DSEE to the fine-tuning of large-scale computer vision and/or multi-modal pre-trained models.
A1 MORE IMPLEMENTATION DETAILS
We report the settings of the other hyperparameters of our experiments, such as the learning rate, in Table A7. The devices we used for the experiments vary, including NVIDIA GeForce GTX 1080 Ti, GeForce RTX 2080 Ti, Titan RTX, and A6000 GPUs.
Architecture     Method                         MNLI   QNLI   QQP    SST-2   CoLA   MRPC   RTE    STS-B
BERTBASE         Fine-tune                      5e-5 for all datasets
BERTBASE         DSEE (before pruning)          5e-5   5e-5   5e-5   2e-4    1e-3   8e-4   1e-3   8e-4
BERTBASE         DSEE (after pruning)           5e-5   5e-5   5e-5   5e-5    1e-3   5e-4   5e-4   5e-4
DeBERTa-large    LoRA & DSEE (before pruning)   1e-5   -      -      -       1e-3   8e-4   8e-5   -
DeBERTa-large    DSEE (after pruning)           1e-5   -      -      -       5e-5   8e-4   6e-5   -

Table A7: Hyper-parameters (initial learning rates) we used on different datasets.
GreBsmo Algorithm. GreBsmo (Zhou & Tao, 2013) is an algorithm for solving Robust-PCA-like problems. The optimization of U, V, and S follows the iterative rules:
$$
\begin{aligned}
U_k &= Q, \quad \text{where } (X - S_{k-1})V_{k-1}^{\top} = QR \ \ \text{(QR decomposition)},\\
V_k &= Q^{\top}(X - S_{k-1}),\\
S_k &= \mathcal{S}_{\lambda}(X - U_k V_k),
\end{aligned} \tag{2}
$$
where X is the original dense matrix, QR denotes the QR decomposition, 𝒮λ(·) denotes the soft-thresholding operator, and the subscript k indicates the optimization step.
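A minimal NumPy sketch of these iterations, with initialization and stopping heuristics of our own choosing (the published GreBsmo includes additional refinements not reproduced here):

```python
import numpy as np

def soft_threshold(x, lam):
    """Element-wise soft-thresholding operator S_lambda."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def grebsmo(x, r=16, lam=0.1, iters=50):
    """Iterations of Eqn. (2): alternate a QR-based rank-r fit of (X - S)
    with soft-thresholding of the residual."""
    s = np.zeros_like(x)
    v = 0.01 * np.random.randn(r, x.shape[1])
    for _ in range(iters):
        q, _ = np.linalg.qr((x - s) @ v.T)   # U_k = Q from the QR step
        u = q                                 # column-orthonormal, m x r
        v = u.T @ (x - s)                     # V_k = Q^T (X - S_{k-1})
        s = soft_threshold(x - u @ v, lam)    # S_k = S_lambda(X - U_k V_k)
    return u, v, s

u, v, s = grebsmo(np.random.randn(64, 64), r=4)
```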
A2 MORE EXPERIMENT RESULTS
Ablation of Sparsity. To study the relationship between the sparsity of unstructured pruning and the behavior of our DSEE, we conduct an ablation study on various datasets from the GLUE benchmark. The results are shown in Figure A5.
[Figure A5: paired panels for STS-B, SST-2, MRPC, RTE, MNLI, and CoLA, each plotting the performance metric (Pearson's r, accuracy, or correlation) of Magnitude Pruning versus DSEE against log10(#Trainable Parameters) and against sparsity (20% to 60%).]
Figure A5: DSEE performance compared to vanilla magnitude pruning at different sparsity. Magnitude Pruning: vanilla magnitude pruning which tunes W directly.
Results with more runs. We conduct experiments with three runs. The means and standard deviations of the accuracy of LoRA and DSEE (without pruning) are shown in Table A8. We have also included the p-values of t-tests to show the significance of the performance gaps. On half of the datasets, the p-value is around 0.1; on three datasets the p-value is between 0.2 and 0.4. The p-values are likely large on some datasets because the number of runs is still small.
Table A8: Performance comparison of different methods on BERTBASE on GLUE benchmarks. Each cell reports the mean ± standard deviation of accuracy over three runs.

Methods                           # Trainable   Sparsity in          CoLA           STS-B          MNLI           QQP            QNLI           MRPC           RTE            SST-2
                                  Parameters    Pretrained Weights
LoRA                              589.8K        0%                   57.38 ± 1.43   88.81 ± 0.23   80.88 ± 0.41   86.45 ± 0.14   88.49 ± 0.10   86.03 ± 1.49   71.00 ± 1.10   92.17 ± 0.27
DSEE                              592.9K        0%                   58.73 ± 0.95   89.16 ± 0.03   81.03 ± 0.62   86.53 ± 0.06   88.58 ± 0.17   85.78 ± 1.07   71.12 ± 0.62   92.55 ± 0.12
P-value of matched-pair t-tests   -             -                    0.086          0.054          0.383          0.121          0.276          0.572          0.277          0.031
DSEE                              592.9K        50%                  55.60 ± 0.81   88.39 ± 0.21   81.16 ± 0.23   87.23 ± 0.03   88.88 ± 0.30   84.88 ± 0.93   70.09 ± 1.37   91.00 ± 0.13
Inference time on BERTBASE. We recorded the inference time of BERTBASE on several GLUE benchmarks: QQP, STS-B, CoLA, and RTE. The numbers of samples in their evaluation sets range from 277 to 40,430, so they can be considered representative. The results are shown in Table A9. From the table we can see that the inference time is greatly reduced when using the structured version of DSEE.
Table A9: Inference time of different methods on BERTBASE on GLUE benchmarks. Sparsity values marked with a star sign (∗) indicate structured sparsity.
Dataset   # eval samples   LoRA     DSEE (25%∗)   DSEE (33%∗)
QQP       40,430           50.773   36.391        34.946
STS-B     1,500            2.033    1.506         1.337
CoLA      1,043            1.316    0.931         0.877
RTE       277              0.464    0.420         0.376
Convergence Speed. We compared the convergence speed (i.e., how the test accuracy changes) of LoRA and DSEE. From Figure A6 we can see that the convergence speeds of the two methods are not significantly different. This means that introducing a sparse component into the decomposition does not affect the convergence dynamics.
Figure A6: Convergence speed of two methods (LoRA and DSEE). The x-axis represents evaluation steps and y-axis represents the test accuracy.
A15 | 1. What is the focus of the paper regarding efficient inference in large pre-trained models?
2. What are the strengths and weaknesses of the proposed method in improving efficiency?
3. How does the reviewer assess the notation and explanations in the paper?
4. What are the concerns regarding the motivation and contributions of the work?
5. Do you have any questions about the experimental results and comparisons with prior techniques?
6. How does the reviewer evaluate the practical benefits and challenges of implementing the proposed approach? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes methods to improve the efficiency of large pre-trained models (millions or billions of parameters) by combining multiple notions of sparsity and/or dimensionality reduction on the model weights. This is achieved via (1) a low-rank update matrix with fine-tunable parameters, and (2) pruning the original pre-trained weight matrix. These two sets of parameters are combined during inference. The authors validate their proposed method on several model architectures and datasets.
Review
Strengths:
The topic of this paper is timely. Models are growing to immense sizes (at faster and faster rates), and the community is starting to come to a reckoning with their computational (and storage) budget. Reducing their impact in terms of memory footprint, fine-tuning cost, and inference efficiency can be immensely valuable.
As an empirical result, some of the findings in this paper point towards the proposed method being worthwhile to try in practice.
Weaknesses:
In general, the motivation and exact contributions of this work are hard to follow. In fact, I found it hard to understand what exactly the proposed method (and inference mechanism) is without first reading LoRA, the paper on which this work is based. For example, how outputs are even calculated is not explicitly spelled out (via equations) until Section 3.3. Personally, I also find the notation of W + ∆W to be slightly confusing when also talking in the context of "sparse updates" etc.: it makes me think that ∆W is a repeated gradient update of W.
It seems like a core deviation of this work from an ensemble of prior techniques is the addition of the sparse residuals. As such, it seems appropriate that this should be highlighted more/expanded on in the main introduction for this work. On this note, I'm still not that convinced by the motivation, particularly on identifying the subspace Ω from the pre-trained W. Can you provide more explanation for why you assume ∆W shares this subspace? Or why Ω isn't computed from the sparse version of W? Compared to random (Fig. 2), the performance gap seems fairly small, and is inconsistent across scales.
It's hard to judge the practical benefit of the proposed approaches simply by looking at the number of trainable parameters and the number of FLOPs. Actually exploiting these reductions to the full extent is hard (as the authors noted as well). It's also not really clear where the frontier on total parameters vs. trainable parameters vs. accuracy is. Is it possible to give some real efficiency (i.e., wall-clock) times? This would help shed some more light on the actual benefits vs. remaining challenges (how to actually implement this in real time).
The benefit of S2 seems dubious. The differences are quite small, which makes me wonder if it is worth the extra complexity.
Pruning and Low-rank decomposition. Pruning is a widely-used model compression technique. It can reduce the number of parameters inside models, which possibly brings training and inference efficiency. Along with weight pruning method (Han et al., 2015b) being one of the most effective methods (Gordon et al., 2020), various criterion have been proposed to select insignificant weights for pruning, such as Taylor approximation (Molchanov et al., 2019), Hessian score approximation (Hassibi & Stork, 1993), and other saliency scores such as SNIP (Lee et al., 2018), GraSP (Wang et al., 2019) and SynFlow (Tanaka et al., 2020). Several pruning methods have been commonly adapted to compress language models (McCarley et al., 2019; Gordon et al., 2020; Sanh et al., 2020; Wang et al., 2020; Chen et al., 2021b). Specifically, McCarley et al. (2019) proposed to prune attention heads that had less contribution to the model. Wang et al. (2020) pruned BERT models by involving low-rank factorization and `0 regularization. Sanh et al. (2020) invented an improved version of magnitude pruning (i.e., pruning based on the weight change) that can better suit the transfer learning. Chen et al. (2021b) performed structured pruning on BERT via `1 sparse regularization, which reduced a large portion of parameters and decreased the training cost.
Low-rank approximation (Ye, 2005) is also vastly studied. One classical scenario is robust principle component analysis (Candès et al., 2011), which decomposes a matrix into a low-rank plus a sparse component. Existing literature shows that in deep learning, the learned over-parameterized models often naturally bear approximate low-rank weight structures (Oymak et al., 2019; Yu et al., 2017). Some (Jaderberg et al., 2014; Povey et al., 2018; Sainath et al., 2013; Zhang et al., 2014; Zhao et al., 2016) have explicitly imposed the low-rank constraint during training. Wang et al. (2020); Hu et al. (2021) utilized low-rank decomposition to shrink the model size and trim down the trainable parameters during fine-tuning. However, to our best knowledge, integrating sparsity and low-rank structures has never been studied before for efficient fine-tuning of pre-trained language models.
Parameter-efficient adaptation. Parameter-efficient adaptation aims at reducing the number of trainable parameters when fine-tuning the models across different downstream domains. Unlike pruning, it generates updates that can be represented by fewer parameters instead of building sparse models. Various approaches are invented to achieve the goal. Rebuffi et al. (2017); Houlsby et al. (2019) inserted and only trained adapters between existing layers, whose parameters are much less compared to the pretrained models. Guo et al. (2020) leveraged `0 regularization to limit the number of non-zero elements in the update vectors. Lester et al. (2021); Li & Liang (2021) introduced efficient prompt tuning which optimizes only a small continuous task-specific vector. Hu et al. (2021) proposed a low-rank decomposition-based method that can also significantly reduce the number of trainable parameters. However, fine-tuned models yielded by these methods work have the same amount of weights as the pre-trained starting point; hence they contribute no resource efficiency of the final model.
3 METHODOLOGY
In this section, we begin by describing our notations and definitions of sparsity generation and parameter-efficient fine-tuning in Section 3.1. Then, we introduce the (dually) sparsity-embedded efficient fine-tuning algorithms in Sections 3.2 and 3.3.
3.1 PRELIMINARIES
Sparsity generation and resource-efficient fine-tuning. We adopt both unstructured and structured pruning methods to produce sparsity. They can lead to resource-efficiency including memory and computation savings.
Let W ∈ Rm×n denote a weight matrix. The goal of pruning is to find a binary mask S ∈ {0, 1}‖W‖0 , where ‖W‖0 is the number of parameters in W . The mask S is applied to W and results in a sparse weight W S. For unstructured pruning, only memory cost is saved; but for structured pruning, it helps save computational cost since the sparse weights can be smaller in size by wiping out all-zero columns or rows. However, the performance of networks after structured pruning is often shown to be inferior compared with the unstructured pruning counterpart.
Parameter-efficient fine-tuning. To leverage the knowledge in pre-trained weights W , downstream models learn task-specific weight update ∆W via fine-tuning and generate predictions with weights W + ∆W . The output of models is therefore calculated as (W + ∆W)x where x is the input. Since ∆W has the same size of W , learning the update matrices usually requires massive resources as the size of the pre-trained model increases. Parameter-efficient fine-tuning try to solve this problem by using as few trainable parameters as possible to represent ∆W , while maintaining competitive downstream fine-tuning performance. Previous literature reaches the goal via either sparsifying weight update matrices ∆W (Guo et al., 2020) or leveraging low-rank decomposed matrices to compute ∆W (Hu et al., 2021), while in our work we combine both of them.
Algorithm 1: Sparsity-Embedded LowRank Decomposition Input: Pretrained weightsW , number of non-zero elements N Output: Sparse matrices S2 1 Initialize S2 to be an empty set. 2 for each self-attention projection weights wi inW do /* Decomposition */ 3 Perform matrix decomposition: wi ≈ UV + S ′ by solving the optimization problem in Eqn.1. /* Identify important elements to form S2 */ 4 Perform thresholding on S ′: Keep N
elements in S ′ with top magnitudes, and set the rest 0.
5 Append S ′ into S2. 6 end
Algorithm 2: DSEE Input: Pretrained weightsW , number of
non-zero elements N , desired sparsity s, loss function L
Output: Sparse mask S1, matrices U ,V,S2 1 DecomposeW into U ,V and S2 2 (Re-)Initialization: U = 0,V ∼ N (0, 0.02),Ω = indexes of non-zero elements in S2,S = 0 /* I: train before pruning */ 3 Train U ,V,S with respect to L under the constraint of PΩC (S) = 0. /* II: pruning the model */ 4 Prune (1-s%) parameters inW globally by sorting the magnitude ofW + UV + S, deriving the sparsity mask S1 /* III: tuning after pruning */ 5 Tune U ,V,S2 for E epochs for recovering the performance.
3.2 SPARSITY-EMBEDDED PARAMETER-EFFICIENT FINE-TUNING
A recent study (Hu et al., 2021) enforces low-rank constraint to weight update tensors ∆W , and obtains a satisfactory trade-off between parameter-efficiency and model quality. However, as revealed experimentally by (Yu et al., 2017), a part of the important information in the trained weights will also scatter outside the low-rank subspace, creating sparse “residuals”. Inspired by these observations, we investigate a new sparsity-aware low-rank subspace of ∆W , and introduce the first component of our proposal in Figure 1, i.e., sparsity-embedded parameter-efficient fine-tuning.
Specifically, we identify a sparse subspace Ω, that we can project our update matrices ∆W to. The update matrix is then decomposed into two components: a low-rank part which is represented by the multiplication of two low-rank matrices U ∈ Rm×r and V ∈ Rr×n, and a sparse residual
S2 = PΩ(∆W), where PΩ(∆W) =
{ si,j , (i, j) ∈ Ω
0, (i, j) ∈ ΩC , i = 1, 2, . . . ,m, j = 1, 2, . . . , n.
As illustrated in Figure 1, the update matrix ∆W can be approximated by UV + S2, in which the U , V , and the non-zero element of S2 are learnable parameters while Ω is fixed once determined. Compared to the full fine-tuning schemes has m × n individual trainable parameters, our method only has (m+n)×r+card(Ω). If r is smaller than m×n−card(Ω)m+n / 0.5 min{m,n}, which is a loose bound, our method is capable of substantially reducing trainable parameters for downstream finetuning. In practice, the value of r is very small compared tom and n so the savings are considerable.
The next question is how to find appropriate Ω. Motivated by (Candès et al., 2011), we formulate our sparsity-aware decomposition as the following optimization problem:
min U,V,S2
1 2 ‖W − UV − S2‖2F
s.t. rank(U) ≤ r, rank(V) ≤ r, card(S2) ≤ c, (1)
where rank(·) indicates the rank and card(·) indicates the cardinality of a matrix. Here we derive Ω from pre-trained weights W , which is different from previous low-rank techniques for model compression since we perform prior-training decomposition and can not access ∆W before finetuning. In this way, we actually assume that ∆W shares a similar crucial subspace Ω with W (Sun et al., 2018). We use the GreBsmo algorithm (Zhou & Tao, 2013) to solve this optimization problem. This method adopts a similar idea as random projection and can solve the optimization problem rapidly. The details of the algorithm are in Appendix.
Algorithm 1 summarizes the detailed procedure of our proposal. We first decompose the pretrained weightsW into three matrices: U , V and S2. U and V are matrices of low-rank and S2 is a sparse matrix. After the decomposition, we re-initialize U and V to be of shape Rm×r and Rr×n. The initial values for U are set to 0, and the initial values for V will follow a normal distribution as Hu et al. (2021) did. We do not use the decomposed value inside S2, but only the index of non-zero elements. We collect the indexes of non-zero elements into Ω, and reset the values of S2 back to 0. During the training, we only update the elements of S2 whose indexes are in Ω, as well as U and V .
3.3 DUALLY SPARSITY-EMBEDDED EFFICIENT TUNING (DSEE)
Besides parameter-efficiency, resource-efficiency is another major goal of our proposed framework. Sparsity-embedded low-rank weight updates alone do not necessarily lead to computational cost reductions since it works together with the dense pre-trained weights and the massive computation during forwarding processes is inevitable. To achieve both resource- and parameter-efficiency, we also promote a sparse weight structure to the final fine-tuned model, as demonstrated by the sparse mask S1 in Figure 1. Our DSEE explores both unstructured/structured sparsity patterns as follows. B Pruning with unstructured sparse masks. The unstructured sparse mask is the most widely used mask since it usually produces almost undamaged performance compared to its dense counterpart (Han et al., 2015a). It applies fine-grained manipulation on each individual model weight, while the generated irregular sparse masks bring limited hardware speedup (Wen et al., 2016). Specifically, based on the weight matrixW , a sparse mask S1 can be calculated with various approaches, either heuristic or optimization-based. We adopt the one-shot magnitude pruning (Han et al., 2015a) in DSEE, due to its simplicity and competitive performance.
In our context, we create the sparse pattern S1 from W + UV: First, it sorts the magnitude (i.e., absolute value) of individual weights and removes parameters with bottom 1 − k% magnitude. Second, we further tune the value of the low-rank update matrices for a few epochs, which is important to keep the performance as stated in Han et al. (2015b). Third, the sparse mask S1 that we derive will be applied on the pretrained weights only and do not affect the output from the update matrices, as W S1 + UV . For an input sample x, the output is calculated by (W S1 + UV + S2)x = W S1x + (UV + S2)x, where the first term W S1x brings significant resource-efficiency, and the second term brings parameter-efficiency.
B Pruning with structured sparse masks. The second method exploits structured sparse masks, whose regular patterns are more hardware friendly yet have worse performance than unstructured masks. Our DSEE considers `1 sparse regularization (Liu et al., 2017; Chen et al., 2021b) to craft high-quality structurally pruned subnetworks. More precisely, motivated by Chen et al. (2021b), we introduce learnable coefficients ξ before each attention head module, and append a `1 sparse penalty on ξ to the original loss. Detailed formulations are depicted as below:
We add trainable coefficients c before attention heads. The parameters c are optimized together with the decomposed matrices, i.e., U ,V and S2. An extra term λ‖c‖1 will be added to the training loss for sparse regularization. The value of λ is set to 1e-4. After training, we prune the attention heads that have the lowest contribution to the model (i.e., lowest c). We use a layer-wise pruning scheme that prunes the same proportion of heads in each attention layer. After pruning, several epochs of tuning are run to recover the performance of the pruned model. The size of update matrices will change after structured pruning, so we also need to change the dimension of U and V . Specifically, we change the size of V from Rr×n to Rr×[n×s] where s is the pruning ratio of the corresponding self-attention layer and [x] is the biggest integer not greater than x. The size of S2 is shrunk accordingly.
4 EXPERIMENT RESULTS
Datasets and models. We use three classical pre-trained language models in our experiments: BERTBASE (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and DeBERTa-large (He et al., 2020), which have 12/24/24 layers with hidden size of 768/1024/1024 and 110/354/380M trainable parameters, respectively. For BERT and DeBERTa, we evaluate our method on the GLUE benchmarks (Wang et al., 2018). For GPT-2, we use E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Nan et al., 2021) for evaluations.
Training and evaluation details. For BERT and DeBERTa, we follow the default settings in Wolf et al. (2019); Devlin et al. (2019). We use the AdamW (Loshchilov & Hutter, 2017) optimizer for downstream training. The batch size for BERT and DeBERTa is 32, 8 per GPU. We train three epochs to search the sparse mask S1, and continue to tune the model for three epochs to converge. The initial learning rates are reported in A1, and we linearly decay them. For GPT-2, we follow the same hyperparameters as in Hu et al. (2021). We train the model for five epochs to search for the mask, and further tune the pruned model for two epochs.
Evaluation Metrics. For tasks on BERT and DeBERTa, we use the accuracy, Matthew’s Correlation, and Pearson’s r in the evaluation by default, which is also the conventional setting in the NLP community. On GPT-2, we use BLEU (Papineni et al., 2002), METEOR (Denkowski & Lavie, 2014), TER (Snover et al., 2006) and NIST (Doddington, 2002) as our evaluation metrics. To evaluate the efficiency of models, we use the number of trainable parameters to measure the parameter efficiency, use Sparsity in Pretrained Weights to measure the resource efficiency, and FLOPs to evaluate the computational efficiency. We add a star sign (∗) to indicate the structured sparsity (i.e., structurally prune the pretrained weights).
Baselines. On BERT and DeBERTa, we conduct comparisons with the following baseline methods: ¶ Fine-tune: directly fine-tunes the full model; · EarlyBERT (Chen et al., 2021b); ¸ BERT Tickets (Chen et al., 2020); ¹ OMP: prunes the fine-tuned weights by magnitude, and fine-tune afterwards; and º LoRA (Hu et al., 2021). We also report the Huggingface’s fine-tuning results.
On GPT-2, we conduct comprehensive comparisons with multiple baseline methods: ¶ Adapters (Houlsby et al., 2019): insert adapters after linear layers; · FT-Top2: fine-tune the top 2 layers only; ¸: Prefix: prefix tuning introduced by Li & Liang (2021); and ¹ LoRA: low-rank decomposition, which assumes ∆W = UV; most results are directly cited from Hu et al. (2021).
4.1 EFFICIENT TUNING WITH DSEE
Parameter-efficiency with sparse masks. To verify that using simple low-rank adaptation (i.e., LoRA) has limitations, we compare its performance with the performance of our sparsity-embedded efficient fine-tuning. Table 1 proves that on MNLI, SST-2, CoLA, and STS-B, the simple decomposition form shows unsatisfactory results compared to sparsity-embedded decomposition. Adding a small proportion of parameters (only 3072 trainable parameters) can bring a performance boost to the model. Specifically, under
rank 8, the metrics increase (0.69/0.13/0.008/0.003) on SST-2/MNLI/CoLA/STS-B, respectively. Moreover, using only approximately half of the number of trainable parameters, our method can achieve comparable parameters compared to the state-of-the-art method LoRA.
Table 2 shows the performance on GPT-2. For all three tasks, our method can achieve comparable results with only about half of the trainable parameters with LoRA with rank four and make substantial performance gain compared to LoRA with rank two. On WebNLG, our method even achieves a higher
BLEU score, 55.56 versus 55.29, with half of the trainable parameters. Such a phenomenon indicates that adding a sparse matrix to the low-rank decomposition can make substantial improvement.
Resource- and parameter-efficiency with sparse masks. We report the performance of DSEE, including both the number of trainable parameters and the sparsity in pretrained weights. We use an unstructured sparsity of 50%, and structured sparsity of 25% and 33%, which is equal to pruning 1/4 and 1/3 heads from each attention layer. For BERTBASE and DeBERTa-large, we set the r (the low-rank dimension) to be 16, and N (the number of non-zero elements in S2 ) to be 64. We also prune each of the intermediate layers using a structured sparsity of 40% as in (Chen et al., 2021b). For GPT-2, we set r to be 2 in DSEE and 4 in LoRA. The choice of N remains 64.
Table 3 summarizes the performance on BERTBASE, where our DSEE method can: (1) achieve a high level of parameter efficiency while retaining performance on various downstream tasks. With only about 600K trainable parameters (110M/0.5929M ≈ 200× smaller than the full model), DSEE achieves comparable performance on all GLUE tasks. (2) achieve resource efficiency in the final models. Using either unstructured or structured sparse masks reaches our goal of reducing the number of parameters in the pretrained weights without sacrificing much performance. At a sparsity level of 50%, unstructured DSEE achieves an accuracy of 90.46% on QNLI, surpassing the fine-tuning baseline of 90.15%. At a sparsity level of 25%, structured DSEE shows performance gains ranging from 0.46% to 2.22%, except for QQP and QNLI. At a sparsity level of 33%, the performance gains range from 0.11% to 2.04%, except for QQP and QNLI. (3) Compared to other baselines, e.g., EarlyBERT and BERT Tickets, our method benefits from parameter efficiency, updating the weights more efficiently. These observations validate the effectiveness of our DSEE method.
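The trainable-parameter count can be checked with a back-of-the-envelope calculation. The sketch below assumes, as in LoRA, that only the query and value projections of each self-attention layer are adapted; with r = 16 this reproduces the ≈ 590K figure:

```python
layers, hidden, r = 12, 768, 16      # BERT-base with low-rank dimension r = 16
adapted_per_layer = 2                # query and value projections (a LoRA-style assumption)

lora_params = layers * adapted_per_layer * (hidden + hidden) * r
print(lora_params)                   # 589,824 ~= 589.8K trainable parameters

# DSEE additionally trains the N non-zero entries of each sparse residual S2,
# which adds only a few thousand parameters on top of the low-rank part.
```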
We also calculate the inference FLOPs of BERTBASE on the STS-B dataset. The original BERTBASE takes 3.7835 × 10^14 FLOPs on STS-B, while LoRA takes 3.8096 × 10^14 FLOPs, which is 0.69% higher. Conversely, the structured version of DSEE takes 2.4921 × 10^14 FLOPs at a sparsity of 25%, and 2.3867 × 10^14 FLOPs at a sparsity of 33%, which are 34.61% and 37.38% lower than LoRA. This indicates that a large proportion of the computational cost can be saved at the inference phase.
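These relative numbers follow directly from the FLOPs counts; a one-line sanity check:

```python
bert, lora = 3.7835e14, 3.8096e14        # inference FLOPs on STS-B
dsee25, dsee33 = 2.4921e14, 2.3867e14    # structured DSEE at 25% / 33% sparsity

print((lora - bert) / bert * 100)        # ~0.69%: LoRA's overhead over BERT
print((lora - dsee25) / lora * 100)      # ~34.6%: savings at 25% sparsity
print((lora - dsee33) / lora * 100)      # ~37.4%: savings at 33% sparsity
```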
Table 4 summarizes the performance on GPT-2, which shares a similar trend. Unstructured DSEE achieves a 2000× reduction in trainable parameters and a 2× reduction in the final fine-tuned model size with almost no loss in downstream task performance compared to conventional fine-tuning. When compared to LoRA, unstructured DSEE retains the same number of trainable parameters and the same downstream task performance, while showing a 2× reduction in the final fine-tuned model size. Structured DSEE on GPT-2 seems less competitive than on BERT, but it still holds performance on E2E and WebNLG after pruning 25% of the heads in attention modules.
Finally, we validate our method on DeBERTa. The results are displayed in Table 5. We use four datasets, CoLA, MNLI, MRPC, and RTE. Their sizes vary greatly, making them representative of the GLUE benchmark. DeBERTa is a larger model, so applying low-rank decomposition with our hyperparameters cannot match the performance of fine-tuning. Compared to LoRA, our method reaches higher performance on downstream tasks, although it needs slightly more training epochs. However, such a slight extra cost for searching the sparse mask S1 can be amortized by the efficiency gained in future inference, since the burden on resources such as storing the pretrained weights is relieved.
4.2 UNDERSTANDING DSEE
The position of the embedded sparsity masks plays a crucial role in our proposed DSEE: applying sparse masks to the pre-trained weights or to the weight updates produces resource- and parameter-efficiency, respectively. Precisely, we compare four different methods on BERTBASE: one-shot magnitude weight pruning (W ⊙ S1), two DSEE variants (W ⊙ S1 + UV and W + UV + S2), and the full DSEE (W ⊙ S1 + UV + S2). The results are collected in Table 6. We can see that: (1) No embedded sparsity in the pretrained weights yields the overall best performance. This is intuitive, since the valuable knowledge learned from massive pretraining data is kept intact. (2) Embedding sparsity into the pretrained weights causes little to no performance loss. A similar conclusion is drawn by Chen et al. (2020), which validated that a pruned model can perform similarly to its dense counterpart. (3) Using sparsity-embedded efficient fine-tuning on top of the sparse pre-trained weights also preserves performance, while achieving parameter efficiency.
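The four variants differ only in how the effective weight of each adapted layer is assembled; a minimal PyTorch sketch with illustrative shapes (not the exact training code):

```python
import torch

def effective_weight(W, S1=None, U=None, V=None, S2=None):
    """W: pretrained weight; S1: binary mask; UV: low-rank update; S2: sparse residual."""
    out = W * S1 if S1 is not None else W          # W (.) S1 (element-wise masking)
    if U is not None and V is not None:
        out = out + U @ V                          # low-rank update
    if S2 is not None:
        out = out + S2                             # sparse residual on support Omega
    return out

W = torch.randn(768, 768)
S1 = (torch.rand_like(W) > 0.5).float()            # 50% unstructured mask
U, V = torch.zeros(768, 16), 0.02 * torch.randn(16, 768)
S2 = torch.zeros_like(W)                           # non-zeros restricted to Omega

full_dsee = effective_weight(W, S1, U, V, S2)      # W (.) S1 + UV + S2 (Table 6, last row)
```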
4.3 ABLATION AND VISUALIZATION
Different methods to generate the sparse matrix S2. In this section, we compare the performance of BERTBASE using different methods to generate the sparse matrix S2 and the corresponding support Ω. Besides the aforementioned matrix decomposition method, we try to (1) randomly sample indexes into Ω; and (2) directly select the indexes of the highest-magnitude elements of W into Ω. In Figure 2 we can see that the matrix decomposition method has the highest metrics overall. It is also noteworthy that sparse matrices generated by the other methods sometimes harm the performance.
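A sketch of the three strategies for choosing the support Ω (the residual `S_prime` stands in for the sparse component returned by solving Eqn. 1; all names and shapes are illustrative):

```python
import torch

def top_n_support(M, n):
    """(row, col) indexes of the n largest-magnitude entries of M."""
    idx = torch.topk(M.abs().flatten(), n).indices
    return torch.stack((idx // M.shape[1], idx % M.shape[1]), dim=1)

W = torch.randn(768, 768)               # pretrained weight (placeholder)
S_prime = torch.randn(768, 768)         # residual from the decomposition (placeholder)
N = 64

omega_decomposition = top_n_support(S_prime, N)   # default: top-N of the residual
omega_random = torch.randint(0, 768, (N, 2))      # (1) randomly sampled indexes
omega_magnitude = top_n_support(W, N)             # (2) highest-magnitude entries of W
```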
Another factor is the number of non-zero elements in S2. More non-zero elements in S2 reduce parameter efficiency, while fewer non-zero elements increase efficiency but may not benefit accuracy or other metrics much. Figure 2 also shows the relationship between the number of non-zero elements in S2 and the performance of the fine-tuned model on SST-2. From the graph, we can see that N = 64 gives the most stable results compared to other choices. Another important observation from the figure is that a higher number of non-zero elements in S2 does not guarantee better performance.
Ablation of # ranks. The rank r of the low-rank decomposition is crucial to transfer learning performance. A small r results in lower representation ability, while a large r brings more parameters and reduces parameter efficiency. To find the best value, we conduct experiments with different ranks r on four datasets: SST-2, MNLI, CoLA, and STS-B. The results are displayed in Figure 3, where we add quadratic trend lines along with the discrete points to smooth the results. We can draw two conclusions from the graphs: (1) Overall, the final performance is positively correlated with the number of trainable parameters; however, on some tasks (MNLI, CoLA) a higher number of parameters (namely, the r for U and V) leads to lower performance. (2) With a sparse matrix embedded, the performance of the trained models can be improved within a range of trainable parameters. On SST-2 and CoLA, the applicable range appears to be 10^4.5 ∼ 10^6. On STS-B, our method consistently outperforms low-rank decomposition alone. On MNLI, the two methods behave similarly, with ours slightly better.
Ablation of sparsity. Although creating sparsity in the pretrained weights W does not change the number of trainable parameters, different levels of sparsity control the number of non-zero elements and thereby influence the representation ability. We therefore conduct experiments on BERT with sparsity ranging from 10% to 60%. The results are in Figure A5, from which we draw two conclusions: (1) DSEE outperforms magnitude pruning at low sparsity (< 50%) with respect to downstream task performance. (2) DSEE surpasses vanilla magnitude pruning with respect to the number of trainable parameters: for each self-attention layer, vanilla magnitude pruning needs to train all the weights, while DSEE only needs to update U, V, and S2.
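For reference, the one-shot global magnitude criterion used by the baseline can be written in a few lines (a simplified sketch that masks the lowest-magnitude fraction across a list of weight matrices):

```python
import torch

def global_magnitude_masks(weights, sparsity):
    """Binary masks that zero out the globally lowest-|w| fraction of entries."""
    flat = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * flat.numel())
    if k == 0:
        return [torch.ones_like(w) for w in weights]
    threshold = flat.kthvalue(k).values
    return [(w.abs() > threshold).float() for w in weights]

weights = [torch.randn(768, 768) for _ in range(4)]    # placeholder layer weights
masks = global_magnitude_masks(weights, sparsity=0.5)  # 50% unstructured sparsity
```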
Visualizations. Figure 4 shows the distribution of weight changes. From the graph, we can see that most weight changes are located around 0. This phenomenon indicates that a natural sparsity exists within the update matrices, motivating us to explore sparse structures along with matrix decomposition.
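Such a distribution can be reproduced by histogramming ∆W between checkpoints; a sketch with placeholder tensors standing in for the pretrained and fine-tuned weights:

```python
import torch
import matplotlib.pyplot as plt

W_pretrained = torch.randn(768, 768)                      # placeholder checkpoint
W_finetuned = W_pretrained + 0.01 * torch.randn(768, 768)

delta = (W_finetuned - W_pretrained).flatten()
plt.hist(delta.numpy(), bins=100)
plt.xlabel("Weight change")
plt.ylabel("Count")
plt.show()  # the mass concentrates around zero, hinting at a natural sparsity
```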
5 CONCLUSION
This paper draws on the prior of sparsity and establishes the DSEE framework. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model. On state-of-the-art large-scale language models (e.g., BERT, GPT, and DeBERTa) and across several datasets, DSEE consistently demonstrates highly impressive parameter, training, and inference efficiency, in addition to preserving competitive downstream transfer performance. Our future work targets extending DSEE to the fine-tuning of large-scale computer vision and/or multi-modal pre-trained models.
A1 MORE IMPLEMENTATION DETAILS
We report the settings of other hyperparameters of our experiments, such as learning rates, in Table A7. The devices we used for the experiments vary, including NVIDIA GeForce GTX 1080 Ti, GeForce RTX 2080 Ti, Titan RTX, and A6000.
Architecture     Method                          Parameter       MNLI   QNLI   QQP    SST-2  CoLA   MRPC   RTE    STS-B
BERTBASE         Fine-tune                       Learning rate   5e-5   5e-5   5e-5   5e-5   5e-5   5e-5   5e-5   5e-5
BERTBASE         DSEE (before pruning)           Learning rate   5e-5   5e-5   5e-5   2e-4   1e-3   8e-4   1e-3   8e-4
BERTBASE         DSEE (after pruning)            Learning rate   5e-5   5e-5   5e-5   5e-5   1e-3   5e-4   5e-4   5e-4
DeBERTa-large    LoRA & DSEE (before pruning)    Learning rate   1e-5   -      -      -      1e-3   8e-4   8e-5   -
DeBERTa-large    DSEE (after pruning)            Learning rate   1e-5   -      -      -      5e-5   8e-4   6e-5   -
Table A7: Hyper-parameters we used on different datasets.
GreBsmo Algorithm. GreBsmo (Zhou & Tao, 2013) is an algorithm for solving Robust-PCA-like problems. The optimization of U, V, and S follows the iterative rules:
$$U_k = Q, \quad \text{where } (X - S_{k-1})\,V_{k-1}^{T} = QR \ \text{(QR decomposition)},$$
$$V_k = Q^{T}(X - S_{k-1}),$$
$$S_k = \mathcal{S}_{\lambda}(X - U_k V_k), \tag{2}$$
where X is the original dense matrix, Q and R are the factors of the QR decomposition, S_λ(·) denotes the soft-thresholding operation with threshold λ, and the subscript k indicates the optimization step.
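A minimal NumPy sketch of these iterative rules; the rank r, threshold λ, and fixed iteration count are simplifying assumptions:

```python
import numpy as np

def soft_threshold(M, lam):
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0.0)

def grebsmo_like(X, r, lam, n_iter=50):
    """Greedy low-rank-plus-sparse decomposition X ~ UV + S, following Eqn. 2."""
    V = np.random.randn(r, X.shape[1])
    S = np.zeros_like(X)
    for _ in range(n_iter):
        Q, _ = np.linalg.qr((X - S) @ V.T)   # QR of (X - S_{k-1}) V_{k-1}^T
        U = Q                                # U_k = Q
        V = Q.T @ (X - S)                    # V_k = Q^T (X - S_{k-1})
        S = soft_threshold(X - U @ V, lam)   # S_k = S_lambda(X - U_k V_k)
    return U, V, S
```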
A2 MORE EXPERIMENT RESULTS
Ablation of Sparsity. To study the relationship between the sparsity of unstructured pruning and the behavior of our DSEE, we conduct an ablation study on various datasets in the GLUE benchmark. The results are shown in Figure A5.
[Figure A5 here: six panels (STS-B, SST-2, MRPC, RTE, MNLI, CoLA). Each panel plots the evaluation metric (Pearson's r for STS-B, accuracy for SST-2/MRPC/RTE/MNLI, correlation for CoLA) against log10(#Trainable Parameters) and against sparsity, comparing Magnitude Pruning and DSEE.]
Figure A5: DSEE performance compared to vanilla magnitude pruning at different sparsity levels. Magnitude Pruning: vanilla magnitude pruning, which tunes W directly.
Results with more runs. We run each experiment three times. The means and standard deviations of the accuracy of LoRA and DSEE (without pruning) are shown in Table A8. We also include the p-values of matched-pair t-tests to show the significance of the performance gaps. On half of the
datasets, the p-value is around 0.1, and on three datasets the p-value is between 0.2 and 0.4. The p-values are large on some datasets most likely because the number of runs is still small.
Table A8: Performance comparison of LoRA and DSEE on BERTBASE on the GLUE benchmarks, reported as mean ± standard deviation over three runs.
Methods   # Trainable Params   Sparsity in Pretrained Weights   CoLA         STS-B        MNLI         QQP          QNLI         MRPC         RTE          SST-2
LoRA      589.8K               0%                               57.38±1.43   88.81±0.23   80.88±0.41   86.45±0.14   88.49±0.10   86.03±1.49   71.00±1.10   92.17±0.27
DSEE      592.9K               0%                               58.73±0.95   89.16±0.03   81.03±0.62   86.53±0.06   88.58±0.17   85.78±1.07   71.12±0.62   92.55±0.12
P-value of matched-pair t-tests                                 0.086        0.054        0.383        0.121        0.276        0.572        0.277        0.031
DSEE      592.9K               50%                              55.60±0.81   88.39±0.21   81.16±0.23   87.23±0.03   88.88±0.30   84.88±0.93   70.09±1.37   91.00±0.13
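These p-values come from matched-pair t-tests over the per-run scores; with SciPy this is a single call (the run scores below are illustrative, not taken from the table):

```python
from scipy.stats import ttest_rel

lora_runs = [91.9, 92.2, 92.4]   # hypothetical SST-2 accuracy over three seeds
dsee_runs = [92.4, 92.6, 92.7]

t_stat, p_value = ttest_rel(dsee_runs, lora_runs)
print(p_value)                   # a small p-value indicates a significant gap
```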
Inference time on BERTBASE. We record the inference time of BERTBASE on several GLUE benchmarks: QQP, STS-B, CoLA, and RTE. The number of samples in their evaluation sets ranges from 277 to 40,430, so they can be considered representative. The results are shown in Table A9. From the table we can see that inference time is greatly reduced when using the structured version of DSEE.
Table A9: Inference time of different methods on BERTBASE on GLUE benchmarks. A star sign (∗) indicates structured sparsity.
Dataset   # eval samples   LoRA     DSEE (25%∗)   DSEE (33%∗)
QQP       40,430           50.773   36.391        34.946
STS-B     1,500            2.033    1.506         1.337
CoLA      1,043            1.316    0.931         0.877
RTE       277              0.464    0.420         0.376
Convergence Speed. We compare the convergence speed (i.e., how the test accuracy changes) of LoRA and DSEE. From Figure A6 we can see that the convergence speeds of the two methods are not significantly different, which means that introducing a sparse component into the decomposition does not affect the convergence dynamics.
[Figure A6 here: test accuracy versus evaluation step for DSEE and LoRA.]
Figure A6: Convergence speed of two methods (LoRA and DSEE). The x-axis represents evaluation steps and y-axis represents the test accuracy.
| 1. What is the focus and contribution of the paper on efficient tuning?
2. What are the strengths of the proposed approach, particularly in terms of its impact on inference FLOPs savings and trainable parameters?
3. What are the weaknesses of the paper regarding its language, presentation, experiment validation, and technical novelty?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes Dually Sparsity-Embedded Efficient Tuning (DSEE), which enforces sparsity-aware weight updates on top of the pretrained weights and encourages a sparse weight structure towards the final fine-tuned model. On a diverse set of backbones and datasets, DSEE achieves 35% inference FLOPs savings with < 1% trainable parameters without loss of accuracy.
Review
Strength:
Developing more parameter-efficient and resource-efficient NLP models is important
Experiments show promising results
Weakness:
Language and presentation of the paper can be improved. For example, after reading the introduction, I was still quite confused about the method and motivation because of confusing word usage — what exactly is the relationship and difference between "sparsity-aware weight update" and "sparse final weight"?
Experimental results can be better validated. Compared to LoRA, they claim the advantage of DSEE is that the "burden on resources such as storing the pretrained weights is relieved". To further validate this claim, it would be insightful to also compare with LoRA applied to a sparse pretrained model (i.e., first apply pruning to the pretrained model, then apply LoRA). There is some discussion of FLOPs in e.g. Section 4.1, but it would be clearer to have a complete table on training/inference time and memory.
Technical novelty is somewhat limited, or at least can be better justified. |
A15 | 1. What is the main contribution of the paper regarding efficient fine-tuning and model size compression?
2. What are the strengths and weaknesses of the proposed approach, particularly in its organization and motivation?
3. Are there any concerns regarding the comparison with previous works and the methodology used in the experiments?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. What are the questions raised by the reviewer regarding the efficacy of each step in DSEE and the ablation study? | Summary Of The Paper
Review | Summary Of The Paper
This paper tries to solve both efficient fine-tuning and model size compression simultaneously. It adopts the previous framework to model W and ∆W together and uses different existing approaches to learn the two parts in order.
Review
Strength:
The paper is presented in an easy-to-read manner.
I believe it's an important research topic to work on which can have a broad impact.
Weakness:
The organization can be improved. I feel the main motivation is actually in section 3 (page 4)
"However, as revealed experimentally by (Yu et al., 2017), a part of the important information in the trained weights will also scatter outside......"
but the authors didn't put it in the introduction. This makes me feel the method is not well motivated, and consequently I think the proposed method is a bit ad hoc. Many details of the thinking are not well explained, so it may be an interesting method, but currently it reads to me as fairly incremental. To make me increase the score, the introduction should give more rationale for previous methods and explain why the proposed method can improve on them.
I believe the authors overlook an important work in the line of efficient fine-tuning (https://arxiv.org/abs/2101.00190). I didn't run the code myself, but one of my collaborators told me this method is fairly competitive in practice without any effort of parameter tuning. I think a comparison to this work should be added.
I think listing #trainable alone is not a fair comparison. To the best of my knowledge, storing the sparse coordinate information requires additional coding of the row/col of each scalar, which is omitted by most sparse-pruning papers. But when it comes to comparison with low-rank methods, this creates an unfair situation, since low-rank methods don't require the additional memory usage. Maybe other researchers don't put emphasis on this, but I insist that the memory footprint information should be presented in the result tables.
Also, sparse methods are in general slow without hardware accelerators. I think the authors also know this, as they wrote in the paper:
It applies fine-grained manipulation on each individual model weight, while the generated irregular sparse masks bring limited hardware speedup (Wen et al., 2016).
But somehow I didn't see any actions to cope with this, and they just listed FLOPs reduced, which is again unfair to structured methods such as structured pruning and low-rank. I think it's OK to require an extra hardware accelerator. But to really benefit the research community, it's important to also present the weaknesses of the method, and I believe it's important to show the real wall-clock inference time instead of just the simulated #FLOPs reduced. It's also interesting to know the wall-clock time saving of the efficient fine-tuning part, as low-rank methods are in general easy to train while the proposed method seems to have many steps. So it seems possible that low-rank methods have a few more parameters but converge faster, so their real time saving is better than the proposed method's.
In fact, I think the empirical experiments don't really show a significant improvement over previous methods. For example, in Table 4 LoRA with 0.39M outperforms DSEE with 0.17M, and honestly 0.39M really doesn't add too many parameters compared to DSEE. So I think justifying the efficacy of each step in DSEE and frankly presenting its strengths/weaknesses is rather important.
This leads to my concern that each functionality of the method is not well discussed. In particular, the ablation study of solving Eq. 1 looks weird to me, as an apparent way to solve Eq. 1 is via iterative methods: fixing S2, UV can be optimally decided by SVD, and fixing UV, S2 can also be determined easily. So to me, iteratively solving for UV and S2 is the best candidate to compare against, but it does not appear in the paper. And we don't even know what's inside the GreBsmo algorithm, as the authors didn't explain it at all. That confused me a lot.
And if we look into the ablation study in Figure 2, there are some cases where other heuristics actually outperform DSEE. It's important to study why such cases happen, but the authors didn't touch on this part. I feel this makes the experiments unable to justify the proposed method.
ICLR | Title
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Abstract
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter-efficient fine-tuning by enforcing sparsity-aware weight updates on top of the pretrained weights; and (ii) resource-efficient inference by encouraging a sparse weight structure in the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via magnitude-based pruning and ℓ1 sparse regularization. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, GPT-2, and DeBERTa) on dozens of datasets, consistently demonstrate highly impressive parameter-/training-/inference-efficiency, while maintaining competitive downstream transfer performance. For instance, our DSEE-BERT obtains about 35% inference FLOPs savings with < 1% trainable parameters and comparable performance to conventional fine-tuning. 1
1 INTRODUCTION
Most recent NLP applications have been following the pre-train then fine-tune paradigm, where we start from a gigantic pre-trained model and fine-tune it towards downstream tasks. Conventional fine-tuning works by updating all of the parameters of the pre-trained model. As the size of pre-trained models grows, updating all parameters becomes less feasible in most practical scenarios, due to the expensive memory and computational requirements. For example, BERTBASE (Devlin et al., 2019) has 110M trainable parameters, while GPT-2 (Radford et al., 2019) has up to 1.5B and the largest version of GPT-3 (Brown et al., 2020) has an astonishing 175B trainable parameters. As such, conventional fine-tuning of the larger models could require hundreds of GPU hours. Another downside of this paradigm is that it requires storing as many parameters as in the large-scale pre-trained model for each downstream task. That poses impediments to deployment in real-world resource-constrained environments.
One solution to address the extensive resource requirement of conventional fine-tuning is model pruning (LeCun et al., 1990; Han et al., 2015a; Ren et al., 2018; He et al., 2017; Liu et al., 2017), where unnecessary weights are eliminated to shrink the model size. For example, Chen et al. (2021b) leverage ℓ1 regularization to remove insignificant attention heads and save 35% ∼ 45% of training time with comparable performance. Guo et al. (2020) learn sparse task-specific “diff” vectors for various downstream fine-tuning tasks, leading to great memory savings. All these studies indicate that sparsity arises naturally when fine-tuning a general-purpose pre-trained model,
¹ All code is provided in the supplement.
to some specialized downstream functionality. One potential interpretation of why sparsity arises is that different subsets of the parameters may be responsible for different downstream tasks and data domains (Sanh et al., 2020). However, identifying appropriate sparse pruning masks requires burdensome training of models with full weights, which can still be unaffordable for many practitioners. For example, fine-tuning a large pre-trained language model like GPT-3 for just one step consumes at least 1.2TB of VRAM and requires 96 NVIDIA Tesla GPUs (Hu et al., 2021).
One parallel alternative is designing parameter-efficient fine-tuning algorithms, which aim to optimize only a small portion of weights while fixing most of the pre-trained weights during the downstream training step. Pioneering works along this line, which utilize adapters (Houlsby et al., 2019) or low-rank decomposition (Hu et al., 2021), can significantly reduce the number of trainable parameters while preserving good fine-tuning performance. Introducing only a small number of task-specific parameters for each new downstream task can substantially improve the memory and deployment efficiency of models, as it allows us to reuse the remaining unchanged/shared parameters. However, there are two major hurdles for current parameter-efficient fine-tuning: (i) it does not yield any inference efficiency gains, since the full pre-trained weights are still required to calculate outputs; and (ii) current methods assume the weight update to be either sparse (Guo et al., 2020) or low-rank (Hu et al., 2021), yet those assumptions might be oversimplified and overly restrictive for effective updates. For example, the low-rank assumption on weight matrices might be overly strong, since some weights cannot be fitted in the low-rank space (Yu et al., 2017). Recently, Chen et al. (2021a) also found that using both sparse and low-rank components performs better than either of them alone. These observations have inspired us to explore better parameter-efficiency methods.
To tackle both the resource- and parameter-efficiency issues of large model fine-tuning, we explicitly draw on the prior of sparsity for both weight updates and the final weights, and establish a dually sparsity-embedded efficient tuning (DSEE) framework. From a pre-trained model, DSEE first adopts a sparsity-aware low-rank weight update to achieve parameter efficiency of the fine-tuning process; and then enforces a sparse weight structure by masking to achieve resource efficiency of the fine-tuned model at inference time. Our contributions can be summarized as follows:
• We propose dually sparsity-embedded efficient tuning (DSEE), which unifies sparsity-aware weight updates and sparse pretrained weights in fine-tuning gigantic pre-trained models. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model.
• We exploit both unstructured and structured sparse patterns in the DSEE framework. For weight updates, we find the injected sparsity prior to greatly enhance existing parameter-efficient update schemes, and to further trim down the needed amount of trainable parameters. For the final weights, we learn well-performing sparse masks from the pre-trained weights, leading to substantial computation and memory reductions at inference.
• Extensive experiments demonstrate the effectiveness of our proposal across various representative pre-trained language models, such as BERT, GPT-2, and DeBERTa, and on diverse evaluation benchmarks, such as E2E, DART, WebNLG, and GLUE. Specifically, on (E2E, DART, WebNLG), our methods can achieve a BLEU score of (69.75, 55.40, 46.66) with less than 1% of total trainable parameters. On BERT, our method can save about 35% of FLOPs, compared to traditional downstream fine-tuning.
2 RELATED WORK
Pre-trained language models. Transformer (Vaswani et al., 2017) is a sequence-to-sequence model that heavily uses the concept of self-attention. Later on, numerous transformer-based models were proposed and showed overwhelming performance on natural language processing (NLP) and computer vision (CV) tasks. Devlin et al. (2019) designed BERT, which pre-trains deep bidirectional representations from unlabeled text and reaches powerful performance. Liu et al. (2019) found that BERT was significantly undertrained and proposed RoBERTa, an enhanced training recipe for BERT which can greatly boost the performance. He et al. (2020) proposed decoding-enhanced BERT with disentangled attention (DeBERTa), which incorporates a disentangled attention mechanism and an improved mask encoder to enhance BERT and RoBERTa. More variants like XLNet, ALBERT, and ELECTRA have also been proposed in recent years (Yang et al., 2019; Lan et al., 2019; Clark et al., 2019). The series of GPT models (Radford et al., 2019; Brown et al., 2020) were later developed based on transformer decoder blocks rather than encoder blocks like BERT, and again have
shown superior performance on different tasks. These large models, pretrained on a large amount of unlabelled text, need to be further fine-tuned on downstream tasks for better performance. One of the accompanying disadvantages of these pre-trained models with tremendous parameter counts (e.g., 175B in GPT-3) is the unaffordable computational cost of further fine-tuning.
Pruning and Low-rank decomposition. Pruning is a widely-used model compression technique. It can reduce the number of parameters inside models, which possibly brings training and inference efficiency. With weight pruning (Han et al., 2015b) being one of the most effective methods (Gordon et al., 2020), various criteria have been proposed to select insignificant weights for pruning, such as Taylor approximation (Molchanov et al., 2019), Hessian score approximation (Hassibi & Stork, 1993), and other saliency scores such as SNIP (Lee et al., 2018), GraSP (Wang et al., 2019) and SynFlow (Tanaka et al., 2020). Several pruning methods have been commonly adapted to compress language models (McCarley et al., 2019; Gordon et al., 2020; Sanh et al., 2020; Wang et al., 2020; Chen et al., 2021b). Specifically, McCarley et al. (2019) proposed to prune attention heads that contribute less to the model. Wang et al. (2020) pruned BERT models by involving low-rank factorization and ℓ0 regularization. Sanh et al. (2020) invented an improved version of magnitude pruning (i.e., pruning based on the weight change) that better suits transfer learning. Chen et al. (2021b) performed structured pruning on BERT via ℓ1 sparse regularization, which removed a large portion of parameters and decreased the training cost.
Low-rank approximation (Ye, 2005) is also vastly studied. One classical scenario is robust principal component analysis (Candès et al., 2011), which decomposes a matrix into a low-rank plus a sparse component. Existing literature shows that in deep learning, learned over-parameterized models often naturally bear approximate low-rank weight structures (Oymak et al., 2019; Yu et al., 2017). Some works (Jaderberg et al., 2014; Povey et al., 2018; Sainath et al., 2013; Zhang et al., 2014; Zhao et al., 2016) have explicitly imposed the low-rank constraint during training. Wang et al. (2020); Hu et al. (2021) utilized low-rank decomposition to shrink the model size and trim down the trainable parameters during fine-tuning. However, to our best knowledge, integrating sparsity and low-rank structures has never been studied before for efficient fine-tuning of pre-trained language models.
Parameter-efficient adaptation. Parameter-efficient adaptation aims at reducing the number of trainable parameters when fine-tuning models across different downstream domains. Unlike pruning, it generates updates that can be represented by fewer parameters instead of building sparse models. Various approaches have been invented to achieve this goal. Rebuffi et al. (2017); Houlsby et al. (2019) inserted and only trained adapters between existing layers, whose parameters are much fewer compared to the pretrained models. Guo et al. (2020) leveraged ℓ0 regularization to limit the number of non-zero elements in the update vectors. Lester et al. (2021); Li & Liang (2021) introduced efficient prompt tuning, which optimizes only a small continuous task-specific vector. Hu et al. (2021) proposed a low-rank decomposition-based method that can also significantly reduce the number of trainable parameters. However, fine-tuned models yielded by these methods have the same number of weights as the pre-trained starting point; hence they contribute no resource efficiency to the final model.
3 METHODOLOGY
In this section, we begin by describing our notations and definitions of sparsity generation and parameter-efficient fine-tuning in Section 3.1. Then, we introduce the (dually) sparsity-embedded efficient fine-tuning algorithms in Sections 3.2 and 3.3.
3.1 PRELIMINARIES
Sparsity generation and resource-efficient fine-tuning. We adopt both unstructured and structured pruning methods to produce sparsity. They can lead to resource-efficiency including memory and computation savings.
Let W ∈ R^{m×n} denote a weight matrix. The goal of pruning is to find a binary mask S ∈ {0, 1}^{‖W‖0}, where ‖W‖0 is the number of parameters in W. The mask S is applied to W and results in a sparse weight W ⊙ S. For unstructured pruning, only memory cost is saved; but structured pruning also saves computational cost, since the sparse weights can be made smaller in size by wiping out all-zero columns or rows. However, the performance of networks after structured pruning is often inferior to the unstructured pruning counterpart.
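To make the mask computation concrete, below is a minimal PyTorch sketch of one-shot global magnitude pruning; the function name and interface are our own illustrative choices, not from the paper.

```python
import torch

def magnitude_prune_mask(W: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Return a binary mask S that zeroes out the `sparsity` fraction
    of entries of W with the smallest magnitudes."""
    num_prune = int(sparsity * W.numel())
    if num_prune == 0:
        return torch.ones_like(W)
    # Threshold = magnitude of the num_prune-th smallest entry.
    threshold = W.abs().flatten().kthvalue(num_prune).values
    return (W.abs() > threshold).to(W.dtype)

W = torch.randn(768, 768)
S = magnitude_prune_mask(W, sparsity=0.5)   # 50% unstructured sparsity
W_sparse = W * S                            # element-wise product W ⊙ S
```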
Parameter-efficient fine-tuning. To leverage the knowledge in pre-trained weights W, downstream models learn a task-specific weight update ∆W via fine-tuning and generate predictions with weights W + ∆W. The output of the model is therefore calculated as (W + ∆W)x, where x is the input. Since ∆W has the same size as W, learning the update matrices usually requires massive resources as the size of the pre-trained model increases. Parameter-efficient fine-tuning tries to solve this problem by using as few trainable parameters as possible to represent ∆W, while maintaining competitive downstream fine-tuning performance. Previous literature reaches this goal by either sparsifying the weight update matrices ∆W (Guo et al., 2020) or leveraging low-rank decomposed matrices to compute ∆W (Hu et al., 2021), while in our work we combine both of them.
Algorithm 1: Sparsity-Embedded Low-Rank Decomposition
Input: Pretrained weights W, number of non-zero elements N
Output: Sparse matrices S2
1: Initialize S2 to be an empty set.
2: for each self-attention projection weight wi in W do
3:   /* Decomposition */ Perform matrix decomposition wi ≈ UV + S′ by solving the optimization problem in Eqn. 1.
4:   /* Identify important elements to form S2 */ Perform thresholding on S′: keep the N elements of S′ with top magnitudes, and set the rest to 0.
5:   Append S′ to S2.
6: end for

Algorithm 2: DSEE
Input: Pretrained weights W, number of non-zero elements N, desired sparsity s, loss function L
Output: Sparse mask S1, matrices U, V, S2
1: Decompose W into U, V and S2.
2: (Re-)Initialization: U = 0, V ∼ N(0, 0.02), Ω = indexes of non-zero elements in S2, S2 = 0.
3: /* I: train before pruning */ Train U, V, S2 with respect to L under the constraint P_{Ω^C}(S2) = 0.
4: /* II: prune the model */ Prune (1 − s)% of the parameters in W globally by sorting the magnitudes of W + UV + S2, deriving the sparsity mask S1.
5: /* III: tune after pruning */ Tune U, V, S2 for E epochs to recover the performance.
3.2 SPARSITY-EMBEDDED PARAMETER-EFFICIENT FINE-TUNING
A recent study (Hu et al., 2021) enforces a low-rank constraint on the weight update tensors ∆W and obtains a satisfactory trade-off between parameter efficiency and model quality. However, as revealed experimentally by Yu et al. (2017), a part of the important information in the trained weights also scatters outside the low-rank subspace, creating sparse “residuals”. Inspired by these observations, we investigate a new sparsity-aware low-rank subspace for ∆W, and introduce the first component of our proposal in Figure 1, i.e., sparsity-embedded parameter-efficient fine-tuning.
Specifically, we identify a sparse subspace Ω onto which we can project our update matrices ∆W. The update matrix is then decomposed into two components: a low-rank part, which is represented by the multiplication of two low-rank matrices U ∈ R^{m×r} and V ∈ R^{r×n}, and a sparse residual
S_2 = P_Ω(∆W), where

    [P_Ω(∆W)]_{i,j} = s_{i,j} if (i, j) ∈ Ω, and 0 if (i, j) ∈ Ω^C,   for i = 1, 2, ..., m and j = 1, 2, ..., n.
As illustrated in Figure 1, the update matrix ∆W can be approximated by UV + S_2, in which U, V, and the non-zero elements of S_2 are learnable parameters, while Ω is fixed once determined. Compared to full fine-tuning, which has m × n individual trainable parameters, our method only has (m + n) × r + card(Ω). If r is smaller than (m × n − card(Ω)) / (m + n), which is loosely lower-bounded by 0.5 min{m, n}, our method is capable of substantially reducing the trainable parameters for downstream fine-tuning. In practice, the value of r is very small compared to m and n, so the savings are considerable.
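As an illustration of this parameterization, the following PyTorch sketch represents ∆W as UV plus a sparse residual supported on a fixed index set Ω (module and variable names are ours; training details such as scaling factors are omitted):

```python
import torch
import torch.nn as nn

class SparseLowRankUpdate(nn.Module):
    """Parameterizes ∆W = U V + S2, with S2 supported on a fixed index set Ω."""
    def __init__(self, m: int, n: int, r: int, omega: torch.Tensor):
        super().__init__()
        self.m, self.n = m, n
        self.U = nn.Parameter(torch.zeros(m, r))         # U initialized to 0
        self.V = nn.Parameter(0.02 * torch.randn(r, n))  # V drawn from a normal distribution
        self.register_buffer("omega", omega)             # (card(Ω), 2) index pairs, fixed
        self.s_values = nn.Parameter(torch.zeros(omega.shape[0]))

    def delta_w(self) -> torch.Tensor:
        S2 = torch.zeros(self.m, self.n, device=self.U.device, dtype=self.U.dtype)
        S2[self.omega[:, 0], self.omega[:, 1]] = self.s_values
        return self.U @ self.V + S2

# Trainable parameters: (m + n) * r + card(Ω), versus m * n for full fine-tuning.
```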
The next question is how to find an appropriate Ω. Motivated by Candès et al. (2011), we formulate our sparsity-aware decomposition as the following optimization problem:
    min_{U, V, S_2}  (1/2) ‖W − UV − S_2‖_F^2
    s.t.  rank(U) ≤ r,  rank(V) ≤ r,  card(S_2) ≤ c,        (1)
where rank(·) indicates the rank and card(·) indicates the cardinality of a matrix. Here we derive Ω from the pre-trained weights W, which is different from previous low-rank techniques for model compression, since we perform the decomposition prior to training and cannot access ∆W before fine-tuning. In this way, we effectively assume that ∆W shares a similar crucial subspace Ω with W (Sun et al., 2018). We use the GreBsmo algorithm (Zhou & Tao, 2013) to solve this optimization problem. This method adopts a similar idea as random projection and can solve the optimization problem rapidly. The details of the algorithm are in the Appendix.
Algorithm 1 summarizes the detailed procedure of our proposal. We first decompose the pretrained weights W into three matrices: U, V, and S_2, where U and V are low-rank matrices and S_2 is a sparse matrix. After the decomposition, we re-initialize U and V to be of shape R^{m×r} and R^{r×n}: the initial values of U are set to 0, and the initial values of V follow a normal distribution, as in Hu et al. (2021). We do not use the decomposed values inside S_2, but only the indexes of its non-zero elements. We collect the indexes of non-zero elements into Ω, and reset the values of S_2 back to 0. During training, we only update the elements of S_2 whose indexes are in Ω, as well as U and V.
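A minimal sketch of this Ω-selection step, substituting a plain rank-r truncated SVD for the GreBsmo solver of Eqn. 1 (an assumption on our part, made only to keep the example short):

```python
import torch

def select_omega(W: torch.Tensor, r: int, num_nonzero: int) -> torch.Tensor:
    """Decompose W ≈ UV + S', then return the indexes Ω of the num_nonzero
    largest-magnitude entries of the residual S'."""
    U, sigma, Vt = torch.linalg.svd(W, full_matrices=False)
    low_rank = U[:, :r] @ torch.diag(sigma[:r]) @ Vt[:r, :]
    residual = W - low_rank                                # sparse residual S'
    flat = residual.abs().flatten().topk(num_nonzero).indices
    rows = torch.div(flat, W.shape[1], rounding_mode="floor")
    cols = flat % W.shape[1]
    return torch.stack([rows, cols], dim=1)                # Ω as (num_nonzero, 2) pairs

omega = select_omega(torch.randn(768, 768), r=16, num_nonzero=64)  # N = 64 as in the paper
```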
3.3 DUALLY SPARSITY-EMBEDDED EFFICIENT TUNING (DSEE)
Besides parameter-efficiency, resource-efficiency is another major goal of our proposed framework. Sparsity-embedded low-rank weight updates alone do not necessarily lead to computational cost reductions, since they work together with the dense pre-trained weights, and the massive computation during the forward pass is inevitable. To achieve both resource- and parameter-efficiency, we also promote a sparse weight structure in the final fine-tuned model, as demonstrated by the sparse mask S_1 in Figure 1. Our DSEE explores both unstructured and structured sparsity patterns, as follows.

▷ Pruning with unstructured sparse masks. The unstructured sparse mask is the most widely used mask, since it usually produces almost undamaged performance compared to its dense counterpart (Han et al., 2015a). It applies fine-grained manipulation on each individual model weight, while the generated irregular sparse masks bring limited hardware speedup (Wen et al., 2016). Specifically, based on the weight matrix W, a sparse mask S_1 can be calculated with various approaches, either heuristic or optimization-based. We adopt one-shot magnitude pruning (Han et al., 2015a) in DSEE, due to its simplicity and competitive performance.
In our context, we create the sparse pattern S_1 from W + UV: First, we sort the magnitudes (i.e., absolute values) of individual weights and remove the parameters with the bottom 1 − k% of magnitudes. Second, we further tune the values of the low-rank update matrices for a few epochs, which is important for keeping the performance, as stated in Han et al. (2015b). Third, the sparse mask S_1 that we derive is applied to the pretrained weights only and does not affect the output of the update matrices, as W ⊙ S_1 + UV. For an input sample x, the output is calculated by (W ⊙ S_1 + UV + S_2)x = (W ⊙ S_1)x + (UV + S_2)x, where the first term (W ⊙ S_1)x brings significant resource-efficiency and the second term brings parameter-efficiency.
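Putting the pieces together, the forward computation can be sketched as follows (notation as above; the function name is illustrative):

```python
import torch

def dsee_forward(x, W, S1, U, V, S2):
    """y = (W ⊙ S1 + U V + S2) x. The masked term gives resource efficiency;
    the low-rank-plus-sparse term gives parameter efficiency. U V is never
    materialized: x @ V.T costs O(n r) per sample instead of O(m n)."""
    return x @ (W * S1).T + (x @ V.T) @ U.T + x @ S2.T
```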
▷ Pruning with structured sparse masks. The second method exploits structured sparse masks, whose regular patterns are more hardware-friendly yet perform worse than unstructured masks. Our DSEE considers ℓ1 sparse regularization (Liu et al., 2017; Chen et al., 2021b) to craft high-quality structurally pruned subnetworks. More precisely, motivated by Chen et al. (2021b), we introduce learnable coefficients c before each attention head module, and append an ℓ1 sparse penalty on c to the original loss. Detailed formulations are as follows:
We add trainable coefficients c before the attention heads. The parameters c are optimized together with the decomposed matrices, i.e., U, V and S_2. An extra term λ‖c‖_1 is added to the training loss for sparse regularization; the value of λ is set to 1e-4. After training, we prune the attention heads that have the lowest contribution to the model (i.e., the lowest c). We use a layer-wise pruning scheme that prunes the same proportion of heads in each attention layer. After pruning, several epochs of tuning are run to recover the performance of the pruned model. The size of the update matrices changes after structured pruning, so we also need to change the dimensions of U and V. Specifically, we change the size of V from R^{r×n} to R^{r×[n×s]}, where s is the pruning ratio of the corresponding self-attention layer and [x] is the largest integer not greater than x. The size of S_2 is shrunk accordingly.
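A minimal sketch of the ℓ1-regularized head coefficients (names and the exact placement of c are illustrative; the paper follows Chen et al. (2021b) for the precise formulation):

```python
import torch
import torch.nn as nn

num_heads, lam = 12, 1e-4
c = nn.Parameter(torch.ones(num_heads))   # one learnable coefficient per attention head

def scale_heads(head_outputs: torch.Tensor) -> torch.Tensor:
    # head_outputs: (batch, seq_len, num_heads, head_dim); scale each head by its c.
    return head_outputs * c.view(1, 1, -1, 1)

def loss_with_sparsity(task_loss: torch.Tensor) -> torch.Tensor:
    return task_loss + lam * c.abs().sum()  # add the λ‖c‖₁ penalty

# After training, prune the same proportion of heads with the smallest |c| in each layer.
```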
4 EXPERIMENT RESULTS
Datasets and models. We use three classical pre-trained language models in our experiments: BERTBASE (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and DeBERTa-large (He et al., 2020), which have 12/24/24 layers with hidden size of 768/1024/1024 and 110/354/380M trainable parameters, respectively. For BERT and DeBERTa, we evaluate our method on the GLUE benchmarks (Wang et al., 2018). For GPT-2, we use E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Nan et al., 2021) for evaluations.
Training and evaluation details. For BERT and DeBERTa, we follow the default settings in Wolf et al. (2019); Devlin et al. (2019). We use the AdamW (Loshchilov & Hutter, 2017) optimizer for downstream training. The batch size for BERT and DeBERTa is 32 (8 per GPU). We train three epochs to search for the sparse mask S_1, and continue to tune the model for three epochs until convergence. The initial learning rates are reported in Table A7 in Appendix A1, and we decay them linearly. For GPT-2, we follow the same hyperparameters as in Hu et al. (2021). We train the model for five epochs to search for the mask, and further tune the pruned model for two epochs.
Evaluation Metrics. For tasks on BERT and DeBERTa, we use accuracy, Matthews correlation, and Pearson's r in the evaluation by default, which is the conventional setting in the NLP community. On GPT-2, we use BLEU (Papineni et al., 2002), METEOR (Denkowski & Lavie, 2014), TER (Snover et al., 2006) and NIST (Doddington, 2002) as our evaluation metrics. To evaluate the efficiency of models, we use the number of trainable parameters to measure parameter efficiency, the sparsity in pretrained weights to measure resource efficiency, and FLOPs to evaluate computational efficiency. We add a star sign (∗) to indicate structured sparsity (i.e., structurally pruning the pretrained weights).
Baselines. On BERT and DeBERTa, we conduct comparisons with the following baseline methods: (1) Fine-tune: directly fine-tunes the full model; (2) EarlyBERT (Chen et al., 2021b); (3) BERT Tickets (Chen et al., 2020); (4) OMP: prunes the fine-tuned weights by magnitude, and fine-tunes afterwards; and (5) LoRA (Hu et al., 2021). We also report Huggingface's fine-tuning results.
On GPT-2, we conduct comprehensive comparisons with multiple baseline methods: (1) Adapters (Houlsby et al., 2019): insert adapters after linear layers; (2) FT-Top2: fine-tune the top 2 layers only; (3) Prefix: prefix tuning introduced by Li & Liang (2021); and (4) LoRA: low-rank decomposition, which assumes ∆W = UV; most results are directly cited from Hu et al. (2021).
4.1 EFFICIENT TUNING WITH DSEE
Parameter-efficiency with sparse masks. To verify that simple low-rank adaptation (i.e., LoRA) has limitations, we compare its performance with that of our sparsity-embedded efficient fine-tuning. Table 1 shows that on MNLI, SST-2, CoLA, and STS-B, the simple decomposition form yields unsatisfactory results compared to sparsity-embedded decomposition. Adding a small proportion of parameters (only 3072 trainable parameters) can bring a performance boost to the model. Specifically, under rank 8, the metrics increase by (0.69/0.13/0.008/0.003) on SST-2/MNLI/CoLA/STS-B, respectively. Moreover, using only approximately half of the trainable parameters, our method achieves performance comparable to the state-of-the-art method LoRA.
Table 2 shows the performance on GPT-2. For all three tasks, our method achieves results comparable to LoRA with rank four using only about half of its trainable parameters, and substantial performance gains compared to LoRA with rank two. On WebNLG, our method even achieves a higher BLEU score, 55.56 versus 55.29, with half of the trainable parameters. Such a phenomenon indicates that adding a sparse matrix to the low-rank decomposition can bring substantial improvements.
Resource- and parameter-efficiency with sparse masks. We report the performance of DSEE, including both the number of trainable parameters and the sparsity in pretrained weights. We use an unstructured sparsity of 50%, and structured sparsities of 25% and 33%, which correspond to pruning 1/4 and 1/3 of the heads in each attention layer. For BERTBASE and DeBERTa-large, we set r (the low-rank dimension) to 16 and N (the number of non-zero elements in S_2) to 64. We also prune each of the intermediate layers using a structured sparsity of 40%, as in Chen et al. (2021b). For GPT-2, we set r to 2 in DSEE and 4 in LoRA. The choice of N remains 64.
Table 3 summarizes the performance on BERTBASE. On BERTBASE, our DSEE method can: (1) achieve parameter-efficiency at a high level and retain the performance on various downstream tasks. With only about 600K trainable parameters (110/0.5929 ≈ 200× smaller than the full model), DSEE achieves comparable performance on all GLUE tasks. (2) achieve resource-efficiency in the final models. Using either unstructured or structured sparse masks reaches our goal of reducing the number of parameters in the pretrained weights without sacrificing much performance. At a sparsity level of 50%, unstructured DSEE achieves an accuracy of 90.46% on QNLI, surpassing the fine-tuning baseline of 90.15%. At a sparsity level of 25%, structured DSEE yields performance gains ranging from 0.46% to 2.22%, except on QQP and QNLI. At a sparsity level of 33%, the performance gains range from 0.11% to 2.04%, except on QQP and QNLI. (3) Compared to other baselines, e.g., EarlyBERT and BERT Tickets, our method benefits from parameter efficiency, updating the weights more efficiently. These observations validate the effectiveness of our DSEE method.
We also calculate the inference FLOPs of BERTBASE on the STS-B dataset. The original BERTBASE on STS-B takes 3.7835 × 10^14 FLOPs, while LoRA takes 3.8096 × 10^14 FLOPs, which is 0.69% higher. In contrast, the structured version of DSEE takes 2.4921 × 10^14 FLOPs at a sparsity of 25%, and 2.3867 × 10^14 FLOPs at a sparsity of 33%, which is 34.61% and 37.38% lower than LoRA, respectively. This indicates that a large proportion of the computational cost can be saved at the inference phase.
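These percentages follow directly from the reported FLOPs (small discrepancies against the quoted values come from rounding in the reported figures):

```python
lora, dsee_25, dsee_33 = 3.8096e14, 2.4921e14, 2.3867e14
print(f"{1 - dsee_25 / lora:.2%}")  # ~34.6% fewer FLOPs at 25% structured sparsity
print(f"{1 - dsee_33 / lora:.2%}")  # ~37.4% fewer FLOPs at 33% structured sparsity
```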
Table 4 summarizes the performance on GPT-2, which shows a similar trend. Unstructured DSEE achieves a 2000× reduction in trainable parameters and a 2× reduction in the final fine-tuned model size, with almost no loss in downstream task performance compared to conventional fine-tuning. Compared to LoRA, unstructured DSEE retains the same level of trainable parameters and downstream task performance, while showing a 2× reduction in the final fine-tuned model size. Structured DSEE on GPT-2 seems less competitive than on BERT, but it can still hold its performance on E2E and WebNLG after pruning 25% of the heads in the attention modules.
Finally, we validate our method on DeBERTa. The results are displayed in Table 5. We use four datasets, CoLA, MNLI, MRPC, and RTE, whose greatly varied sizes make them representative of the GLUE benchmark. DeBERTa is a larger model, so applying low-rank decomposition with our
hyperparameters cannot match the performance of fine-tuning. Compared to LoRA, our method reaches higher performance on downstream tasks, albeit needing slightly more training epochs. However, this slight extra cost for searching the sparse mask S_1 can be amortized by the efficiency gained in future inference, since the burden on resources, such as storing the pretrained weights, is relieved.
4.2 UNDERSTANDING DSEE
The position of the embedded sparsity masks plays a crucial role in our proposed DSEE: applying sparse masks to the pre-trained weights or to the weight updates produces resource- and parameter-efficiency, respectively. Precisely, we compare four different methods on BERTBASE:
one-shot magnitude weight pruning (W ⊙ S_1), two DSEE variants (W ⊙ S_1 + UV and W + UV + S_2), and full DSEE (W ⊙ S_1 + UV + S_2). The results are collected in Table 6. We can see that: (1) No embedded sparsity in the pretrained weights yields the overall best performance. This is intuitive, since the valuable knowledge learned from massive pretraining data is intact and not removed. (2) Embedding sparsity into the pretrained weights harms little to no performance. A similar conclusion was drawn by Chen et al. (2020), who validated that a pruned model can have a similar performance to the dense model. (3) Using sparsity-embedded efficient fine-tuning with sparse pre-trained weights can also preserve performance, as well as achieve parameter efficiency.
4.3 ABLATION AND VISUALIZATION
Different methods to generate the sparse matrix S_2. In this section, we compare the performance of BERTBASE using different methods to generate the sparse matrix S_2 and the corresponding space Ω. Besides the aforementioned matrix-decomposition method, we try to (1) randomly sample indexes into Ω; and (2) directly select the indexes of the elements of W with the highest magnitudes into Ω. In Figure 2 we can see that the matrix-decomposition method yields the best metrics overall. It is also noteworthy that sparse matrices generated by the other methods sometimes harm the performance.
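For reference, the two baseline strategies can be sketched as follows (function names are ours; the decomposition-based variant corresponds to the select_omega sketch given in Section 3.2):

```python
import torch

def omega_random(W: torch.Tensor, num_nonzero: int) -> torch.Tensor:
    # (1) randomly sample num_nonzero positions of W into Ω
    flat = torch.randperm(W.numel())[:num_nonzero]
    rows = torch.div(flat, W.shape[1], rounding_mode="floor")
    return torch.stack([rows, flat % W.shape[1]], dim=1)

def omega_top_magnitude(W: torch.Tensor, num_nonzero: int) -> torch.Tensor:
    # (2) take the positions of the num_nonzero largest-magnitude entries of W
    flat = W.abs().flatten().topk(num_nonzero).indices
    rows = torch.div(flat, W.shape[1], rounding_mode="floor")
    return torch.stack([rows, flat % W.shape[1]], dim=1)
```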
Another factor is the number of non-zero elements inside S_2. More non-zero elements in S_2 reduce the parameter efficiency, while fewer non-zero elements increase the efficiency but may not benefit the accuracy or other metrics much. Figure 2 also shows the relationship between the number of non-zero elements in S_2 and the performance of the fine-tuned model on SST-2. From the graph, we can see that using N = 64 gives the most stable results compared to other choices. Another important piece of information we can derive from the figure is that a higher number of non-zero elements in S_2 does not guarantee better performance.
Ablation of # ranks. The rank r of the low-rank decomposition is crucial to the transfer learning performance. A small r results in lower representation ability, and a large r brings more parameters and reduces the parameter efficiency. To find the best value, we conduct experiments with different ranks r on four datasets: SST-2, MNLI, CoLA, and STS-B. The results are displayed in Figure 3. We add quadratic trend lines in the graph along with the discrete points to smooth the results. We can draw two conclusions from the graphs: (1) Overall, the final performance is positively correlated with the number of trainable parameters; however, on some tasks (MNLI, CoLA) a higher number of parameters (namely, a larger r for U and V) leads to lower performance. (2) With a sparse matrix embedded, the performance of the trained models can be improved within a range of trainable parameters. On SST-2 and CoLA, the applicable range seems to be 10^4.5 ∼ 10^6. On STS-B, our method consistently outperforms using the low-rank decomposition only. On MNLI, the two methods behave similarly, while our method is slightly better.
Ablation of sparsity. Although creating sparsity in the pretrained weights W does not change the number of trainable parameters, different levels of sparsity control the number of non-zero elements and thereby influence the representation ability. Therefore, we conduct experiments on BERT with different sparsities, ranging from 10% to 60%. The results are in Figure A5. From the figures, we can draw two conclusions: (1) DSEE outperforms magnitude pruning at low sparsity (< 50%) with respect to the performance on downstream tasks. (2) DSEE surpasses vanilla magnitude pruning with respect to the number of trainable parameters: for each self-attention layer, vanilla magnitude pruning needs to train all the weights, while DSEE only needs to update the three low-rank matrices.
Visualizations. Figure 4 shows the distribution of weight change. From the graph, we can see that most weights are located around 0. Such a phenomenon indicates that a natural sparsity exists within the update matrices, motivating us to explore sparse structures along with matrix decomposition.
5 CONCLUSION
This paper draws on the prior of sparsity and establishes the DSEE framework. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model. On state-of-the-art large-scale language models (e.g., BERT, GPT, and DeBERTa) and across several datasets, DSEE consistently demonstrates highly impressive parameter, training, and inference efficiency, in addition to preserving competitive downstream transfer performance. Our future work targets extending DSEE to the fine-tuning of large-scale computer vision and/or multi-modal pre-trained models.
A1 MORE IMPLEMENTATION DETAILS
We report the settings of other hyperparameters of our experiments, such as learning rates, in Table A7. The devices we used for the experiments vary, including NVIDIA GeForce GTX 1080 Ti, GeForce RTX 2080 Ti, Titan RTX, and A6000 GPUs.
Table A7: Hyper-parameters (initial learning rates) we used on different datasets.

Architecture | Method | MNLI | QNLI | QQP | SST-2 | CoLA | MRPC | RTE | STS-B
BERTBASE | Fine-tune | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 5e-5
BERTBASE | DSEE (before pruning) | 5e-5 | 5e-5 | 5e-5 | 2e-4 | 1e-3 | 8e-4 | 1e-3 | 8e-4
BERTBASE | DSEE (after pruning) | 5e-5 | 5e-5 | 5e-5 | 5e-5 | 1e-3 | 5e-4 | 5e-4 | 5e-4
DeBERTa-large | LoRA & DSEE (before pruning) | 1e-5 | - | - | - | 1e-3 | 8e-4 | 8e-5 | -
DeBERTa-large | DSEE (after pruning) | 1e-5 | - | - | - | 5e-5 | 8e-4 | 6e-5 | -
GreBsmo Algorithm GreBsmo (Zhou & Tao, 2013) is an algorithm for solving robust-PCA-like problems. The optimization of U, V, and S follows the iterative rules below:
    U_k = Q,  where  (X − S_{k−1}) V_{k−1}^T = QR  (QR decomposition),
    V_k = Q^T (X − S_{k−1}),
    S_k = S_λ(X − U_k V_k),        (2)

where X is the original dense matrix, QR denotes the QR decomposition, S_λ(·) denotes the soft-thresholding operator, and the subscript k indicates the optimization step.
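A minimal NumPy sketch of these update rules (initialization and stopping criteria are simplified relative to Zhou & Tao (2013); treat it as illustrative, not as the reference implementation):

```python
import numpy as np

def grebsmo(X: np.ndarray, r: int, lam: float, num_iters: int = 50):
    """Decompose X ≈ U V + S with rank(U V) <= r and a soft-thresholded sparse S."""
    V = np.random.randn(r, X.shape[1])
    S = np.zeros_like(X)
    for _ in range(num_iters):
        Q, _ = np.linalg.qr((X - S) @ V.T)                  # U_k = Q
        V = Q.T @ (X - S)                                   # V_k = Q^T (X - S_{k-1})
        R = X - Q @ V
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # S_k = S_λ(X - U_k V_k)
    return Q, V, S
```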
A2 MORE EXPERIMENT RESULTS
Ablation of Sparsity To study the relationship between the sparsity of unstructured pruning and the behavior of our DSEE, we conduct an ablation study on various datasets in the GLUE benchmark. The results are shown in Figure A5.
[Figure A5 here: six rows of panels, one per dataset (STS-B, SST-2, MRPC, RTE, MNLI, CoLA). Each row plots the task metric (Pearson's r for STS-B, accuracy for SST-2/MRPC/RTE/MNLI, correlation for CoLA) against log10(#Trainable Parameters) on the left and against sparsity (20–60%) on the right, comparing Magnitude Pruning and DSEE.]
Figure A5: DSEE performance compared to vanilla magnitude pruning at different sparsity. Magnitude Pruning: vanilla magnitude pruning which tunes W directly.
Results with more runs. We conduct experiments with three runs. The means and standard deviations of the accuracy of LoRA and DSEE (without pruning) are shown in Table A8. We have also included the p-values of matched-pair t-tests to show the significance of the performance gaps. On half of the datasets, the p-value is around 0.1, and on three datasets the p-value is between 0.2 and 0.4. The p-values are large on some datasets probably because the number of runs still has room to increase.
Table A8: Performance comparison of different methods on BERTBASE on GLUE benchmarks. Each cell reports the mean accuracy over three runs, followed by the standard deviation after ±.

Methods | # Trainable Parameters | Sparsity in Pretrained Weights | CoLA | STS-B | MNLI | QQP | QNLI | MRPC | RTE | SST-2
LoRA | 589.8K | 0% | 57.38±1.43 | 88.81±0.23 | 80.88±0.41 | 86.45±0.14 | 88.49±0.10 | 86.03±1.49 | 71.00±1.10 | 92.17±0.27
DSEE | 592.9K | 0% | 58.73±0.95 | 89.16±0.03 | 81.03±0.62 | 86.53±0.06 | 88.58±0.17 | 85.78±1.07 | 71.12±0.62 | 92.55±0.12
P-value of matched-pair t-tests | - | - | 0.086 | 0.054 | 0.383 | 0.121 | 0.276 | 0.572 | 0.277 | 0.031
DSEE | 592.9K | 50% | 55.60±0.81 | 88.39±0.21 | 81.16±0.23 | 87.23±0.03 | 88.88±0.30 | 84.88±0.93 | 70.09±1.37 | 91.00±0.13
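The p-values can be reproduced with a standard matched-pair t-test over per-run scores; a sketch with made-up run values (the paper reports only mean ± std, not the raw per-run numbers):

```python
from scipy import stats

# Hypothetical per-run scores for one dataset (three matched runs per method).
lora_runs = [92.0, 92.2, 92.3]
dsee_runs = [92.4, 92.5, 92.7]
t_stat, p_value = stats.ttest_rel(dsee_runs, lora_runs)
print(p_value)
```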
Inference time on BERTBASE We have recorded the inference time of BERTBASE on several GLUE benchmarks: QQP, STS-B, CoLA, and RTE. The number of samples in their evaluation sets ranges from 277 to 40,430, so they can be considered representative. The results are shown in Table A9. From the table we can see that inference time is greatly reduced by using the structured version of DSEE.
Table A9: Inference time of different methods on BERTBASE on GLUE benchmarks. A star sign (∗) on the sparsity indicates structured sparsity.

Dataset | # eval samples | LoRA | DSEE (25%∗) | DSEE (33%∗)
QQP | 40,430 | 50.773 | 36.391 | 34.946
STS-B | 1,500 | 2.033 | 1.506 | 1.337
CoLA | 1,043 | 1.316 | 0.931 | 0.877
RTE | 277 | 0.464 | 0.420 | 0.376
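The paper does not detail its timing protocol; a common way to collect such numbers is to loop over the evaluation set with gradients disabled (this sketch assumes a CUDA device and a Huggingface-style model/dataloader):

```python
import time
import torch

@torch.no_grad()
def inference_time(model, dataloader, device="cuda"):
    model.eval().to(device)
    torch.cuda.synchronize()  # make sure pending GPU work doesn't pollute the timing
    start = time.perf_counter()
    for batch in dataloader:
        model(**{k: v.to(device) for k, v in batch.items()})
    torch.cuda.synchronize()
    return time.perf_counter() - start
```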
Convergence Speed We have compared the convergence speed (i.e., how the test accuracy changes) of LoRA and DSEE. From Figure A6 we can see that the convergence speeds of the two methods are not significantly different. This means introducing a sparse component into the decomposition form does not affect the convergence dynamics.
Figure A6: Convergence speed of the two methods (LoRA and DSEE). The x-axis represents evaluation steps (0–100) and the y-axis represents the test accuracy (0.75–0.90).
1. What is the focus and contribution of the paper regarding parameter and resource efficiency?
2. What are the strengths of the proposed method, particularly in its ability to achieve both goals simultaneously?
3. What are the weaknesses of the paper, especially regarding the evaluation and experimental results?
4. Do you have any concerns about the application of S1 and the choice of unstructured or structured sparsity?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper proposes a principled way of achieving both parameter-efficient fine-tuning and resource-efficient inference. It combines two steps: (i) computing a low-rank decomposition of the parameter updates together with a sparse residual matrix; (ii) further pruning the trained (i.e., fine-tuned) model. The first step aims to achieve parameter-efficient fine-tuning, with the sparse residual components, derived from the sparsity of the pretrained model, preserving the performance. The second step reduces the memory required for parameter storage. On three benchmarks with different model architectures (encoder-only, encoder-decoder), they run a large number of experiments with ablation studies. The studies show a number of promising enhancements over other baselines.
Review
This paper presents an interesting direction of achieving two different goals simultaneously. Below are my comments on this.
Strength:
Clearly defines the problem, the goal, and most parts of the method.
Although not fully novel from scratch, they present an interesting idea of achieving both parameter and resource efficiency. They uniquely studied this problem and presented a way of making it work.
Multiple solid experimental results with a diverse set of models, benchmarks, and metrics showing the generalization of their approach.
Weakness:
My major concern is the evaluation. It is difficult to combine the experiments to support the claims. The gains in performance are marginal compared to LoRA, which is a simpler method. With the S1 matrix in Table 6, it even became worse in comparison to LoRA or the fine-tuned model. The main claim is to excel for large models, but as found with large models like DeBERTa, the gains in performance are even lower. The FLOPs results are extremely insufficient to stand as evidence (only results on STS-B are reported).
It's not described why S1 is only applied to the pretrained weights W.
Unclear when to use unstructured or structured sparsity.
Just having a small reduction in memory usage definitely does not amount to a significant improvement in resource utilization, as it may lead to a performance drop or an increased amount of training time. Analogously here, searching for the sparsity also needs additional overhead (e.g., more epochs to tune UV after pruning, etc.), while the corresponding gain in performance is incremental, especially when compared to the baseline LoRA approach.
Typos and tips:
1/3 heads from each attention head -> 1/3 heads from each attention layer.
reset the values of S2 back to 0. During the update, we only update the elements of S2 whose indexes are in Ω -> this should be in correspondence with Algorithm 2.
More details on how UV is computed without seeing the pretrained weights W should be discussed (not just references to previous works).
FLOs on BERT -> FLOPs on BERT |
ICLR | Title
DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models
Abstract
Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter-efficient fine-tuning by enforcing sparsity-aware weight updates on top of the pretrained weights; and (ii) resource-efficient inference by encouraging a sparse weight structure in the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via magnitude-based pruning and ℓ1 sparse regularization. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, GPT-2, and DeBERTa) on dozens of datasets, consistently demonstrate highly impressive parameter-/training-/inference-efficiency, while maintaining competitive downstream transfer performance. For instance, our DSEE-BERT obtains about 35% inference FLOPs savings with < 1% trainable parameters and comparable performance to conventional fine-tuning. 1
1 INTRODUCTION
Most recent NLP applications have been following the pre-train then fine-tune paradigm, where we start from a gigantic pre-trained model and fine-tune it towards downstream tasks. Conventional fine-tuning works by updating all of the parameters of the pre-trained model. As the size of pre-trained models grows, updating all parameters becomes less feasible in most practical scenarios, due to the expensive memory and computational requirements. For example, BERTBASE (Devlin et al., 2019) has 110M trainable parameters, while GPT-2 (Radford et al., 2019) has up to 1.5B and the largest version of GPT-3 (Brown et al., 2020) has an astonishing 175B trainable parameters. As such, conventional fine-tuning of the larger models could require hundreds of GPU hours. Another downside of this paradigm is that it requires storing as many parameters as in the large-scale pre-trained model for each downstream task. That poses impediments to deployment in real-world resource-constrained environments.
One solution to address the extensive resource requirement of conventional fine-tuning is model pruning (LeCun et al., 1990; Han et al., 2015a; Ren et al., 2018; He et al., 2017; Liu et al., 2017), where unnecessary weights are eliminated to shrink the model size. For example, Chen et al. (2021b) leverage ℓ1 regularization to remove insignificant attention heads and save 35% ∼ 45% of training time with comparable performance. Guo et al. (2020) learn sparse task-specific “diff” vectors for various downstream fine-tuning tasks, leading to great memory savings. All these studies indicate that sparsity arises naturally when fine-tuning a general-purpose pre-trained model,
¹ All code is provided in the supplement.
to some specialized downstream functionality. One potential interpretation of why sparsity arises is that different subsets of the parameters may be responsible for different downstream tasks and data domains (Sanh et al., 2020). However, identifying appropriate sparse pruning masks requires burdensome training of models with full weights, which can still be unaffordable for many practitioners. For example, fine-tuning a large pre-trained language model like GPT-3 for just one step consumes at least 1.2TB of VRAM and requires 96 NVIDIA Tesla GPUs (Hu et al., 2021).
One parallel alternative is designing parameter-efficient fine-tuning algorithms, which aim to optimize only a small portion of weights while fixing most of the pre-trained weights during the downstream training step. Pioneering works along this line, which utilize adapters (Houlsby et al., 2019) or low-rank decomposition (Hu et al., 2021), can significantly reduce the number of trainable parameters while preserving good fine-tuning performance. Introducing only a small number of task-specific parameters for each new downstream task can substantially improve the memory and deployment efficiency of models, as it allows us to reuse the remaining unchanged/shared parameters. However, there are two major hurdles for current parameter-efficient fine-tuning: (i) it does not yield any inference efficiency gains, since the full pre-trained weights are still required to calculate outputs; and (ii) current methods assume the weight update to be either sparse (Guo et al., 2020) or low-rank (Hu et al., 2021), yet those assumptions might be oversimplified and overly restrictive for effective updates. For example, the low-rank assumption on weight matrices might be overly strong, since some weights cannot be fitted in the low-rank space (Yu et al., 2017). Recently, Chen et al. (2021a) also found that using both sparse and low-rank components performs better than either of them alone. These observations have inspired us to explore better parameter-efficiency methods.
To tackle both the resource- and parameter-efficiency issues of large model fine-tuning, we explicitly draw on the prior of sparsity for both weight updates and the final weights, and establish a dually sparsity-embedded efficient tuning (DSEE) framework. From a pre-trained model, DSEE first adopts a sparsity-aware low-rank weight update to achieve parameter efficiency of the fine-tuning process; and then enforces a sparse weight structure by masking to achieve resource efficiency of the fine-tuned model at inference time. Our contributions can be summarized as follows:
• We propose dually sparsity-embedded efficient tuning (DSEE), which unifies sparsity-aware weight updates and sparse pretrained weights in fine-tuning gigantic pre-trained models. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model.
• We exploit both unstructured and structured sparse patterns in the DSEE framework. For weight updates, we find the injected sparsity prior to greatly enhance existing parameter-efficient update schemes, and to further trim down the needed amount of trainable parameters. For the final weights, we learn well-performing sparse masks from the pre-trained weights, leading to substantial computation and memory reductions at inference.
• Extensive experiments demonstrate the effectiveness of our proposal across various representative pre-trained language models, such as BERT, GPT-2, and DeBERTa, and on diverse evaluation benchmarks, such as E2E, DART, WebNLG, and GLUE. Specifically, on (E2E, DART, WebNLG), our methods can achieve a BLEU score of (69.75, 55.40, 46.66) with less than 1% of total trainable parameters. On BERT, our method can save about 35% of FLOPs, compared to traditional downstream fine-tuning.
2 RELATED WORK
Pre-trained language models. Transformer (Vaswani et al., 2017) is a sequence-to-sequence model that heavily uses the concept of self-attention. Later on, numerous transformer-based models were proposed and showed overwhelming performance on natural language processing (NLP) and computer vision (CV) tasks. Devlin et al. (2019) designed BERT, which pre-trains deep bidirectional representations from unlabeled text and reaches powerful performance. Liu et al. (2019) found that BERT was significantly undertrained and proposed RoBERTa, an enhanced training recipe for BERT which can greatly boost the performance. He et al. (2020) proposed decoding-enhanced BERT with disentangled attention (DeBERTa), which incorporates a disentangled attention mechanism and an improved mask encoder to enhance BERT and RoBERTa. More variants like XLNet, ALBERT, and ELECTRA have also been proposed in recent years (Yang et al., 2019; Lan et al., 2019; Clark et al., 2019). The series of GPT models (Radford et al., 2019; Brown et al., 2020) were later developed based on transformer decoder blocks rather than encoder blocks like BERT, and again have
shown superior performance on different tasks. These large models, pretrained on a large amount of unlabelled text, need to be further fine-tuned on downstream tasks for better performance. One of the accompanying disadvantages of these pre-trained models with tremendous parameter counts (e.g., 175B in GPT-3) is the unaffordable computational cost of further fine-tuning.
Pruning and Low-rank decomposition. Pruning is a widely-used model compression technique. It can reduce the number of parameters inside models, which possibly brings training and inference efficiency. With weight pruning (Han et al., 2015b) being one of the most effective methods (Gordon et al., 2020), various criteria have been proposed to select insignificant weights for pruning, such as Taylor approximation (Molchanov et al., 2019), Hessian score approximation (Hassibi & Stork, 1993), and other saliency scores such as SNIP (Lee et al., 2018), GraSP (Wang et al., 2019) and SynFlow (Tanaka et al., 2020). Several pruning methods have been commonly adapted to compress language models (McCarley et al., 2019; Gordon et al., 2020; Sanh et al., 2020; Wang et al., 2020; Chen et al., 2021b). Specifically, McCarley et al. (2019) proposed to prune attention heads that contribute less to the model. Wang et al. (2020) pruned BERT models by involving low-rank factorization and ℓ0 regularization. Sanh et al. (2020) invented an improved version of magnitude pruning (i.e., pruning based on the weight change) that better suits transfer learning. Chen et al. (2021b) performed structured pruning on BERT via ℓ1 sparse regularization, which removed a large portion of parameters and decreased the training cost.
Low-rank approximation (Ye, 2005) is also widely studied. One classical scenario is robust principal component analysis (Candès et al., 2011), which decomposes a matrix into a low-rank plus a sparse component. Existing literature shows that in deep learning, the learned over-parameterized models often naturally bear approximately low-rank weight structures (Oymak et al., 2019; Yu et al., 2017). Some works (Jaderberg et al., 2014; Povey et al., 2018; Sainath et al., 2013; Zhang et al., 2014; Zhao et al., 2016) have explicitly imposed the low-rank constraint during training. Wang et al. (2020); Hu et al. (2021) utilized low-rank decomposition to shrink the model size and trim down the trainable parameters during fine-tuning. However, to the best of our knowledge, integrating sparsity and low-rank structures has never been studied before for efficient fine-tuning of pre-trained language models.
Parameter-efficient adaptation. Parameter-efficient adaptation aims at reducing the number of trainable parameters when fine-tuning models across different downstream domains. Unlike pruning, it generates updates that can be represented by fewer parameters instead of building sparse models. Various approaches have been invented to achieve this goal. Rebuffi et al. (2017); Houlsby et al. (2019) inserted and only trained adapters between existing layers, whose parameters are much fewer compared to the pretrained models. Guo et al. (2020) leveraged ℓ0 regularization to limit the number of non-zero elements in the update vectors. Lester et al. (2021); Li & Liang (2021) introduced efficient prompt tuning, which optimizes only a small continuous task-specific vector. Hu et al. (2021) proposed a low-rank decomposition-based method that can also significantly reduce the number of trainable parameters. However, fine-tuned models yielded by these methods have the same number of weights as the pre-trained starting point; hence they contribute no resource efficiency to the final model.
3 METHODOLOGY
In this section, we begin by describing our notations and definitions of sparsity generation and parameter-efficient fine-tuning in Section 3.1. Then, we introduce the (dually) sparsity-embedded efficient fine-tuning algorithms in Sections 3.2 and 3.3.
3.1 PRELIMINARIES
Sparsity generation and resource-efficient fine-tuning. We adopt both unstructured and structured pruning methods to produce sparsity. They can lead to resource-efficiency including memory and computation savings.
Let W ∈ R^{m×n} denote a weight matrix. The goal of pruning is to find a binary mask S ∈ {0, 1}^{‖W‖_0}, where ‖W‖_0 is the number of parameters in W. The mask S is applied to W and results in a sparse weight W ⊙ S, where ⊙ denotes element-wise multiplication. For unstructured pruning, only memory cost is saved; for structured pruning, computational cost is also saved, since the sparse weights can be made smaller in size by wiping out all-zero columns or rows. However, the performance of networks after structured pruning is often inferior to that of their unstructured-pruning counterparts.
Parameter-efficient fine-tuning. To leverage the knowledge in the pre-trained weights W, downstream models learn a task-specific weight update ∆W via fine-tuning and generate predictions with weights W + ∆W. The output of the model is therefore calculated as (W + ∆W)x, where x is the input. Since ∆W has the same size as W, learning the update matrices usually requires massive resources as the size of the pre-trained model increases. Parameter-efficient fine-tuning tries to solve this problem by using as few trainable parameters as possible to represent ∆W, while maintaining competitive downstream fine-tuning performance. Previous literature reaches this goal by either sparsifying the weight update matrices ∆W (Guo et al., 2020) or leveraging low-rank decomposed matrices to compute ∆W (Hu et al., 2021), while in our work we combine both.
Algorithm 1: Sparsity-Embedded Low-Rank Decomposition
Input: Pretrained weights W; number of non-zero elements N.
Output: Sparse matrices S2.
1: Initialize S2 to be an empty set.
2: for each self-attention projection weight wi in W do
3:   /* Decomposition */ Perform matrix decomposition wi ≈ UV + S′ by solving the optimization problem in Eqn. 1.
4:   /* Identify important elements to form S2 */ Threshold S′: keep the N elements of S′ with top magnitudes and set the rest to 0.
5:   Append S′ to S2.
6: end for

Algorithm 2: DSEE
Input: Pretrained weights W; number of non-zero elements N; desired sparsity s; loss function L.
Output: Sparse mask S1; matrices U, V, S2.
1: Decompose W into U, V, and S2 (Algorithm 1).
2: (Re-)Initialization: U = 0; V ∼ N(0, 0.02); Ω = indices of the non-zero elements in S2; S2 = 0.
3: /* I: train before pruning */ Train U, V, S2 with respect to L under the constraint P_{Ω^C}(S2) = 0.
4: /* II: prune the model */ Prune (1 − s)% of the parameters in W globally by sorting the magnitudes of W + UV + S2, deriving the sparsity mask S1.
5: /* III: tune after pruning */ Tune U, V, S2 for E epochs to recover the performance.
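As a concrete illustration of the thresholding in line 4 of Algorithm 1, the following is a minimal NumPy sketch; the function name and the separate low-rank solver for Eqn. 1 are our own illustrative assumptions, not the authors' released code.

```python
import numpy as np

def top_n_sparse_support(S_prime: np.ndarray, N: int) -> np.ndarray:
    """Keep the N entries of S' with the largest magnitudes; zero out the rest.

    The index set Omega is then simply the support of the returned matrix.
    """
    flat = np.abs(S_prime).ravel()
    if N >= flat.size:
        return S_prime.copy()
    # N-th largest magnitude; ties may keep a few extra entries.
    thresh = np.partition(flat, -N)[-N]
    return np.where(np.abs(S_prime) >= thresh, S_prime, 0.0)

# Omega = np.argwhere(top_n_sparse_support(S_prime, N) != 0)
```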
3.2 SPARSITY-EMBEDDED PARAMETER-EFFICIENT FINE-TUNING
A recent study (Hu et al., 2021) enforces low-rank constraint to weight update tensors ∆W , and obtains a satisfactory trade-off between parameter-efficiency and model quality. However, as revealed experimentally by (Yu et al., 2017), a part of the important information in the trained weights will also scatter outside the low-rank subspace, creating sparse “residuals”. Inspired by these observations, we investigate a new sparsity-aware low-rank subspace of ∆W , and introduce the first component of our proposal in Figure 1, i.e., sparsity-embedded parameter-efficient fine-tuning.
Specifically, we identify a sparse subspace Ω onto which we can project our update matrices ∆W. The update matrix is then decomposed into two components: a low-rank part, represented by the multiplication of two low-rank matrices U ∈ R^{m×r} and V ∈ R^{r×n}, and a sparse residual
$$\mathcal{S}_2 = P_\Omega(\Delta\mathcal{W}), \quad \text{where } \left[P_\Omega(\Delta\mathcal{W})\right]_{i,j} = \begin{cases} s_{i,j}, & (i,j) \in \Omega \\ 0, & (i,j) \in \Omega^{C} \end{cases}, \qquad i = 1, 2, \dots, m,\; j = 1, 2, \dots, n.$$
As illustrated in Figure 1, the update matrix ∆W can be approximated by UV + S2, in which U, V, and the non-zero elements of S2 are learnable parameters, while Ω is fixed once determined. Compared to the full fine-tuning scheme, which has m × n individual trainable parameters, our method only has (m + n) × r + card(Ω). If r is smaller than (m × n − card(Ω))/(m + n) ≲ 0.5 min{m, n}, which is a loose bound, our method is capable of substantially reducing the trainable parameters for downstream fine-tuning. In practice, the value of r is very small compared to m and n, so the savings are considerable.
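The projection P_Ω and the parameter-count argument above can be made concrete with a short NumPy sketch; all names here are illustrative, and we assume Ω is stored as a boolean mask.

```python
import numpy as np

def project_onto_support(delta_W: np.ndarray, omega_mask: np.ndarray) -> np.ndarray:
    """P_Omega: zero out every entry of delta_W outside the fixed support Omega."""
    return np.where(omega_mask, delta_W, 0.0)

# Trainable-parameter count for Delta_W ~= UV + S2 (e.g., a 768x768 projection):
m, n, r, card_omega = 768, 768, 8, 64
full_ft = m * n                          # full fine-tuning: 589,824
ours = (m + n) * r + card_omega          # low-rank + sparse residual: 12,352
print(full_ft / ours)                    # ~48x fewer trainable parameters
```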
The next question is how to find appropriate Ω. Motivated by (Candès et al., 2011), we formulate our sparsity-aware decomposition as the following optimization problem:
$$\min_{U,V,S_2}\ \frac{1}{2}\left\|\mathcal{W} - UV - S_2\right\|_F^2 \quad \text{s.t.} \quad \mathrm{rank}(U) \le r,\ \mathrm{rank}(V) \le r,\ \mathrm{card}(S_2) \le c, \tag{1}$$
where rank(·) indicates the rank and card(·) indicates the cardinality of a matrix. Here we derive Ω from the pre-trained weights W, which is different from previous low-rank techniques for model compression, since we perform the decomposition prior to training and cannot access ∆W before fine-tuning. In this way, we implicitly assume that ∆W shares a similar crucial subspace Ω with W (Sun et al., 2018). We use the GreBsmo algorithm (Zhou & Tao, 2013) to solve this optimization problem. This method adopts a similar idea to random projection and can solve the optimization problem rapidly. The details of the algorithm are in the Appendix.
Algorithm 1 summarizes the detailed procedure of our proposal. We first decompose the pretrained weights W into three matrices: U, V, and S2, where U and V are low-rank matrices and S2 is a sparse matrix. After the decomposition, we re-initialize U and V to be of shape R^{m×r} and R^{r×n}: the initial values of U are set to 0, and the initial values of V follow a normal distribution, as in Hu et al. (2021). We do not use the decomposed values inside S2, but only the indices of its non-zero elements. We collect these indices into Ω and reset the values of S2 back to 0. During training, we only update the elements of S2 whose indices are in Ω, as well as U and V.
3.3 DUALLY SPARSITY-EMBEDDED EFFICIENT TUNING (DSEE)
Besides parameter-efficiency, resource-efficiency is another major goal of our proposed framework. Sparsity-embedded low-rank weight updates alone do not necessarily lead to computational cost reductions, since they work together with the dense pre-trained weights, and the massive computation during the forward pass is inevitable. To achieve both resource- and parameter-efficiency, we also promote a sparse weight structure in the final fine-tuned model, as demonstrated by the sparse mask S1 in Figure 1. Our DSEE explores both unstructured and structured sparsity patterns as follows. ▷ Pruning with unstructured sparse masks. The unstructured sparse mask is the most widely used mask, since it usually produces almost undamaged performance compared to its dense counterpart (Han et al., 2015a). It applies fine-grained manipulation to each individual model weight, while the generated irregular sparse masks bring limited hardware speedup (Wen et al., 2016). Specifically, based on the weight matrix W, a sparse mask S1 can be calculated with various approaches, either heuristic or optimization-based. We adopt one-shot magnitude pruning (Han et al., 2015a) in DSEE, due to its simplicity and competitive performance.
In our context, we create the sparse pattern S1 from W + UV. First, we sort the magnitudes (i.e., absolute values) of the individual weights and remove the parameters with the bottom (1 − k)% of magnitudes. Second, we further tune the values of the low-rank update matrices for a few epochs, which is important for preserving performance, as noted in Han et al. (2015b). Third, the derived sparse mask S1 is applied to the pretrained weights only and does not affect the output of the update matrices, i.e., W ⊙ S1 + UV. For an input sample x, the output is calculated as (W ⊙ S1 + UV + S2)x = (W ⊙ S1)x + (UV + S2)x, where the first term (W ⊙ S1)x brings significant resource-efficiency and the second term brings parameter-efficiency.
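A minimal NumPy sketch of the one-shot magnitude mask and the factored forward pass described above follows; the helper names are ours, and S2 is kept as a dense array purely for readability.

```python
import numpy as np

def magnitude_mask(W_eff: np.ndarray, keep_ratio: float) -> np.ndarray:
    """One-shot magnitude pruning: binary mask keeping the top keep_ratio fraction."""
    k = max(1, int(keep_ratio * W_eff.size))
    thresh = np.partition(np.abs(W_eff).ravel(), -k)[-k]
    return (np.abs(W_eff) >= thresh).astype(W_eff.dtype)

def dsee_forward(x, W, S1, U, V, S2):
    # (W o S1) x + (UV + S2) x : sparse backbone plus cheap low-rank update
    return (W * S1) @ x + U @ (V @ x) + S2 @ x

# S1 is derived from W + UV, then applied to the pretrained weights only, e.g.:
# S1 = magnitude_mask(W + U @ V, keep_ratio=0.5)
```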
▷ Pruning with structured sparse masks. The second method exploits structured sparse masks, whose regular patterns are more hardware-friendly yet perform worse than unstructured masks. Our DSEE considers ℓ1 sparse regularization (Liu et al., 2017; Chen et al., 2021b) to craft high-quality structurally pruned subnetworks. More precisely, motivated by Chen et al. (2021b), we introduce learnable coefficients ξ before each attention head module, and append an ℓ1 sparse penalty on ξ to the original loss. The detailed formulation is as follows:
We add trainable coefficients ξ before the attention heads. These coefficients are optimized together with the decomposed matrices, i.e., U, V, and S2. An extra term λ‖ξ‖1 is added to the training loss for sparse regularization; we set λ = 1e-4. After training, we prune the attention heads that contribute least to the model (i.e., those with the lowest |ξ|). We use a layer-wise pruning scheme that prunes the same proportion of heads in each attention layer. After pruning, several epochs of tuning are run to recover the performance of the pruned model. The size of the update matrices changes after structured pruning, so we also need to change the dimensions of U and V. Specifically, we change the size of V from R^{r×n} to R^{r×⌊n×s⌋}, where s is the pruning ratio of the corresponding self-attention layer and ⌊x⌋ is the largest integer not greater than x. The size of S2 is shrunk accordingly.
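To illustrate the ℓ1 head-coefficient regularization, here is a minimal PyTorch sketch under our own naming; the actual integration into each attention layer follows Chen et al. (2021b) and is not shown.

```python
import torch

n_heads, lam = 12, 1e-4
xi = torch.nn.Parameter(torch.ones(n_heads))     # learnable head coefficients

def scale_heads(head_outputs: torch.Tensor) -> torch.Tensor:
    # head_outputs: (batch, n_heads, seq_len, d_head); scale each head by xi
    return head_outputs * xi.view(1, -1, 1, 1)

def loss_with_head_sparsity(task_loss: torch.Tensor) -> torch.Tensor:
    return task_loss + lam * xi.abs().sum()       # add lambda * ||xi||_1

# After training, the heads with the smallest |xi| in each layer are pruned.
```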
4 EXPERIMENT RESULTS
Datasets and models. We use three classical pre-trained language models in our experiments: BERTBASE (Devlin et al., 2019), GPT-2 (Radford et al., 2019), and DeBERTa-large (He et al., 2020), which have 12/24/24 layers with hidden size of 768/1024/1024 and 110/354/380M trainable parameters, respectively. For BERT and DeBERTa, we evaluate our method on the GLUE benchmarks (Wang et al., 2018). For GPT-2, we use E2E (Novikova et al., 2017), WebNLG (Gardent et al., 2017), and DART (Nan et al., 2021) for evaluations.
Training and evaluation details. For BERT and DeBERTa, we follow the default settings in Wolf et al. (2019); Devlin et al. (2019). We use the AdamW optimizer (Loshchilov & Hutter, 2017) for downstream training. The batch size for BERT and DeBERTa is 32 (8 per GPU). We train three epochs to search for the sparse mask S1, and continue to tune the model for three epochs until convergence. The initial learning rates are reported in Appendix A1, and we decay them linearly. For GPT-2, we follow the same hyperparameters as in Hu et al. (2021). We train the model for five epochs to search for the mask, and further tune the pruned model for two epochs.
Evaluation Metrics. For tasks on BERT and DeBERTa, we use accuracy, Matthews correlation, and Pearson's r in the evaluation by default, which is the conventional setting in the NLP community. On GPT-2, we use BLEU (Papineni et al., 2002), METEOR (Denkowski & Lavie, 2014), TER (Snover et al., 2006), and NIST (Doddington, 2002) as our evaluation metrics. To evaluate the efficiency of models, we use the number of trainable parameters to measure parameter efficiency, the sparsity in the pretrained weights to measure resource efficiency, and FLOPs to evaluate computational efficiency. We add a star sign (∗) to indicate structured sparsity (i.e., structurally pruning the pretrained weights).
Baselines. On BERT and DeBERTa, we conduct comparisons with the following baseline methods: (1) Fine-tune: directly fine-tunes the full model; (2) EarlyBERT (Chen et al., 2021b); (3) BERT Tickets (Chen et al., 2020); (4) OMP: prunes the fine-tuned weights by magnitude and fine-tunes afterwards; and (5) LoRA (Hu et al., 2021). We also report Huggingface's fine-tuning results.
On GPT-2, we conduct comprehensive comparisons with multiple baseline methods: (1) Adapters (Houlsby et al., 2019): insert and train adapters after linear layers; (2) FT-Top2: fine-tune the top 2 layers only; (3) Prefix: prefix tuning introduced by Li & Liang (2021); and (4) LoRA: low-rank decomposition, which assumes ∆W = UV. Most results are directly cited from Hu et al. (2021).
4.1 EFFICIENT TUNING WITH DSEE
Parameter-efficiency with sparse masks. To verify that using simple low-rank adaptation (i.e., LoRA) has limitations, we compare its performance with that of our sparsity-embedded efficient fine-tuning. Table 1 shows that on MNLI, SST-2, CoLA, and STS-B, the simple decomposition form gives unsatisfactory results compared to the sparsity-embedded decomposition. Adding a small number of parameters (only 3072 trainable parameters) brings a performance boost to the model. Specifically, under rank 8, the metrics increase by (0.69/0.13/0.008/0.003) on SST-2/MNLI/CoLA/STS-B, respectively. Moreover, using only approximately half the number of trainable parameters, our method achieves performance comparable to the state-of-the-art method LoRA.
Table 2 shows the performance on GPT-2. For all three tasks, our method achieves results comparable to LoRA with rank four while using only about half of its trainable parameters, and it yields a substantial performance gain over LoRA with rank two. On WebNLG, our method even achieves a higher BLEU score, 55.56 versus 55.29, with half of the trainable parameters. This phenomenon indicates that adding a sparse matrix to the low-rank decomposition can bring a substantial improvement.
Resource- and parameter-efficiency with sparse masks. We report the performance of DSEE, including both the number of trainable parameters and the sparsity in the pretrained weights. We use an unstructured sparsity of 50%, and structured sparsities of 25% and 33%, which correspond to pruning 1/4 and 1/3 of the heads in each attention layer. For BERTBASE and DeBERTa-large, we set r (the low-rank dimension) to 16 and N (the number of non-zero elements in S2) to 64. We also prune each of the intermediate layers with a structured sparsity of 40%, as in (Chen et al., 2021b). For GPT-2, we set r to 2 in DSEE and 4 in LoRA. The choice of N remains 64.
Table 3 summarizes the performance on BERTBASE. On BERTBASE, our DSEE method can: (1) achieve a high level of parameter-efficiency while retaining the performance on various downstream tasks. With only about 600K trainable parameters (110M/0.5929M, i.e., roughly 200× smaller than the full model), DSEE achieves comparable performance on all GLUE tasks. (2) Achieve resource-efficiency in the final models: using either unstructured or structured sparse masks reaches our goal of reducing the number of parameters in the pretrained weights without sacrificing much performance. At a sparsity level of 50%, unstructured DSEE achieves an accuracy of 90.46% on QNLI, surpassing the fine-tuning baseline of 90.15%. At a sparsity level of 25%, structured DSEE yields performance gains ranging from 0.46% to 2.22%, except on QQP and QNLI. At a sparsity level of 33%, the performance gains range from 0.11% to 2.04%, except on QQP and QNLI. (3) Compared to other baselines, e.g., EarlyBERT and BERT Tickets, our method benefits from parameter efficiency, updating the weights more efficiently. These observations validate the effectiveness of our DSEE method.
We also calculate the inference FLOPs of BERTBASE on the STS-B dataset. The original BERTBASE on STS-B takes 3.7835 × 10^14 FLOPs, while LoRA takes 3.8096 × 10^14 FLOPs, which is 0.69% higher. In contrast, the structured version of DSEE takes 2.4921 × 10^14 FLOPs at 25% sparsity and 2.3867 × 10^14 FLOPs at 33% sparsity, which are 34.61% and 37.38% lower than LoRA, respectively. This indicates that a large proportion of the computational cost can be saved at the inference phase.
Table 4 summarizes the performance on GPT-2, which shares a similar trend. Unstructured DSEE achieves a 2000× reduction in trainable parameters and a 2× reduction in the final fine-tuned model size, with almost no loss in downstream task performance compared to conventional fine-tuning. Compared to LoRA, unstructured DSEE retains the same level of trainable parameters and downstream task performance, while showing a 2× reduction in the final fine-tuned model size. Structured DSEE on GPT-2 seems less competitive than on BERT, but it still holds performance on E2E and WebNLG after pruning 25% of the heads in the attention modules.
Finally, we validate our method on DeBERTa. The results are displayed in Table 5. We use four datasets, CoLA, MNLI, MRPC, and RTE, which vary greatly in size and are representative of the GLUE benchmark. DeBERTa is a larger model, so applying low-rank decomposition alone with our hyperparameters cannot match the performance of full fine-tuning. Compared to LoRA, our method reaches higher performance on downstream tasks, although it needs slightly more training epochs. However, this slight extra cost for searching the sparse mask S1 is amortized by the efficiency gained at inference, since the burden on resources such as storing the pretrained weights is relieved.
4.2 UNDERSTANDING DSEE
The position of the embedded sparsity masks plays a crucial role in our proposed DSEE: applying sparse masks to the pre-trained weights or to the weight updates produces resource- and parameter-efficiency, respectively. Precisely, we compare four different methods on BERTBASE: one-shot magnitude weight pruning (W ⊙ S1), two DSEE variants (W ⊙ S1 + UV and W + UV + S2), and the full DSEE (W ⊙ S1 + UV + S2). The results are collected in Table 6. We can see that: (1) No embedded sparsity in the pretrained weights yields the overall best performance. This is intuitive, since the valuable knowledge learned from the massive pretraining data is intact and not removed. (2) Embedding sparsity into the pretrained weights causes little to no performance loss. A similar conclusion was drawn by Chen et al. (2020), who validated that a pruned model can perform similarly to the dense model. (3) Using sparsity-embedded efficient fine-tuning with the sparse pre-trained weights can also preserve performance, while achieving parameter efficiency.
4.3 ABLATION AND VISUALIZATION
Different methods to generate the sparse matrix S2. In this section, we compare the performance of BERTBASE using different methods to generate the sparse matrix S2 and the corresponding space Ω. In addition to the aforementioned matrix decomposition method, we try to (1) randomly sample indices into Ω; and (2) directly select the indices of the elements of W with the highest magnitudes. In Figure 2 we can see that the matrix decomposition method yields the best metrics overall. It is also noteworthy that sparse matrices generated by the other methods sometimes harm the performance.
Another factor is the number of non-zero elements in S2. More non-zero elements in S2 reduce the parameter efficiency, while fewer non-zero elements increase the efficiency but may not benefit the accuracy or other metrics much. Figure 2 also shows the relationship between the number of non-zero elements in S2 and the performance of the fine-tuned model on SST-2. From the graph, we can see that N = 64 gives the most stable results compared to other choices. Another important observation from the figure is that a higher number of non-zero elements in S2 does not guarantee better performance.
Ablation of # ranks. The rank r of the low-rank decomposition is crucial to the transfer learning performance. A small r results in lower representation ability, while a large r brings more parameters and reduces the parameter efficiency. To find the best value, we conduct experiments with different ranks r on four datasets: SST-2, MNLI, CoLA, and STS-B. The results are displayed in Figure 3, where we add quadratic trend lines along with the discrete points to smooth the results. We can draw two conclusions from the graphs: (1) Overall, the final performance is positively correlated with the number of trainable parameters; however, on some tasks (MNLI, CoLA) a higher number of parameters (namely, the r for U and V) leads to lower performance. (2) With a sparse matrix embedded, the performance of the trained models can be improved within a range of trainable parameters. On SST-2 and CoLA, the applicable range seems to be 10^{4.5} ∼ 10^{6}. On STS-B, our method consistently outperforms using the low-rank decomposition only. On MNLI, the two methods behave similarly, with ours slightly better.
Ablation of sparsity. Although creating sparsity in the pretrained weights W does not change the number of trainable parameters, different levels of sparsity control the number of non-zero elements and thereby influence the representation ability. Therefore, we conduct experiments on BERT with different sparsities, ranging from 10% to 60%. The results are shown in Figure A5, from which we draw the following conclusions: (1) DSEE outperforms magnitude pruning at low sparsity (< 50%) with respect to the performance on downstream tasks. (2) DSEE surpasses vanilla magnitude pruning with respect to the number of trainable parameters: for each self-attention layer, vanilla magnitude pruning needs to train all the weights, while DSEE only needs to update the matrices U, V, and S2.
Visualizations. Figure 4 shows the distribution of weight change. From the graph, we can see that most weights are located around 0. Such a phenomenon indicates that a natural sparsity exists within the update matrices, motivating us to explore sparse structures along with matrix decomposition.
5 CONCLUSION
This paper draws on the prior of sparsity and establishes the DSEE framework. It is the first attempt towards jointly optimizing both the parameter efficiency of the fine-tuning process and the resource efficiency of the fine-tuned model. On state-of-the-art large-scale language models (e.g., BERT, GPT-2, and DeBERTa) and across several datasets, DSEE consistently demonstrates highly impressive parameter, training, and inference efficiency, in addition to preserving competitive downstream transfer performance. Our future work targets extending DSEE to the fine-tuning of large-scale computer vision and/or multi-modal pre-trained models.
A1 MORE IMPLEMENTATION DETAILS
We report the settings of the other hyperparameters of our experiments, such as the learning rates, in Table A7. The devices we used for the experiments vary, including NVIDIA GeForce GTX 1080 Ti, GeForce RTX 2080 Ti, Titan RTX, and A6000 GPUs.
Architecture     Method                          MNLI   QNLI   QQP    SST-2  CoLA   MRPC   RTE    STS-B
BERTBASE         Fine-tune                       5e-5 for all datasets
BERTBASE         DSEE (before pruning)           5e-5   5e-5   5e-5   2e-4   1e-3   8e-4   1e-3   8e-4
BERTBASE         DSEE (after pruning)            5e-5   5e-5   5e-5   5e-5   1e-3   5e-4   5e-4   5e-4
DeBERTa-large    LoRA & DSEE (before pruning)    1e-5   -      -      -      1e-3   8e-4   8e-5   -
DeBERTa-large    DSEE (after pruning)            1e-5   -      -      -      5e-5   8e-4   6e-5   -

Table A7: Hyper-parameters (initial learning rates) we used on different datasets.
GreBsmo Algorithm. GreBsmo (Zhou & Tao, 2013) is an algorithm for solving robust-PCA-like problems. The optimization of U, V, and S follows the iterative rules:
$$U_k = Q, \quad \text{where } (X - S_{k-1})V_{k-1}^{T} = QR \text{ (QR decomposition)};$$
$$V_k = Q^{T}(X - S_{k-1});$$
$$S_k = \mathcal{S}_\lambda(X - U_k V_k), \tag{2}$$
where X is the original dense matrix, Q and R come from the QR decomposition, S_λ(·) indicates the soft-thresholding operation, and the subscript k indicates the optimization step.
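For reference, one GreBsmo iteration of Eqn. (2) can be sketched in a few lines of NumPy; this is a simplified single-step illustration under our own naming, not the authors' implementation.

```python
import numpy as np

def soft_threshold(X: np.ndarray, lam: float) -> np.ndarray:
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def grebsmo_step(X, V, S, lam):
    """One iteration of Eqn. (2). X: m x n dense matrix, V: r x n, S: m x n."""
    Q, _ = np.linalg.qr((X - S) @ V.T)   # U_k = Q from the QR decomposition
    U = Q
    V = Q.T @ (X - S)                    # V_k = Q^T (X - S_{k-1})
    S = soft_threshold(X - U @ V, lam)   # S_k = S_lambda(X - U_k V_k)
    return U, V, S
```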
A2 MORE EXPERIMENT RESULTS
Ablation of Sparsity. To study the relationship between the sparsity of unstructured pruning and the behavior of our DSEE, we conduct an ablation study on various GLUE benchmarks. The results are shown in Figure A5.
[Figure A5 consists of six pairs of panels, one pair per dataset (STS-B, SST-2, MRPC, RTE, MNLI, CoLA), plotting Pearson's r (STS-B), Accuracy (SST-2, MRPC, RTE, MNLI), or Correlation (CoLA) against log10(#Trainable Parameters) and against Sparsity, comparing Magnitude Pruning and DSEE.]
Figure A5: DSEE performance compared to vanilla magnitude pruning at different sparsity. Magnitude Pruning: vanilla magnitude pruning which tunes W directly.
Results with more runs. We conduct experiments with three runs. The mean and standard deviation of the accuracy of LoRA and DSEE (without pruning) are shown in Table A8. We have also included the p-values of matched-pair t-tests to show the significance of the performance gaps. On half of the datasets, the p-value is around 0.1, and on three datasets the p-value is between 0.2 and 0.4. The p-values are large on some datasets likely because the number of runs is still small.
Table A8: Performance comparison of different methods on BERTBASE on GLUE benchmarks. Each cell reports the mean accuracy over three runs, with the standard deviation in parentheses.

Methods   #Trainable   Sparsity in          CoLA          STS-B         MNLI          QQP           QNLI          MRPC          RTE           SST-2
          Parameters   Pretrained Weights
LoRA      589.8K       0%                   57.38 (1.43)  88.81 (0.23)  80.88 (0.41)  86.45 (0.14)  88.49 (0.10)  86.03 (1.49)  71.00 (1.10)  92.17 (0.27)
DSEE      592.9K       0%                   58.73 (0.95)  89.16 (0.03)  81.03 (0.62)  86.53 (0.06)  88.58 (0.17)  85.78 (1.07)  71.12 (0.62)  92.55 (0.12)
P-value of matched-pair t-tests             0.086         0.054         0.383         0.121         0.276         0.572         0.277         0.031
DSEE      592.9K       50%                  55.60 (0.81)  88.39 (0.21)  81.16 (0.23)  87.23 (0.03)  88.88 (0.30)  84.88 (0.93)  70.09 (1.37)  91.00 (0.13)
Inference time on BERTBASE. We recorded the inference time of BERTBASE on several GLUE benchmarks: QQP, STS-B, CoLA, and RTE. The number of samples in their evaluation sets ranges from 277 to 40,430, so they can be considered representative. The results are shown in Table A9: the inference time is greatly reduced when using the structured version of DSEE.
Table A9: Inference time of different methods on BERTBASE on GLUE benchmarks. A star sign (∗) indicates structured sparsity.

Dataset   # eval samples   LoRA     DSEE (25%∗)   DSEE (33%∗)
QQP       40,430           50.773   36.391        34.946
STS-B     1,500            2.033    1.506         1.337
CoLA      1,043            1.316    0.931         0.877
RTE       277               0.464    0.420         0.376
Convergence Speed We have compared the convergence speed (i.e., how the test accuracy changes) of LoRA and DSEE. From Figure A6 we can see that the convergence speeds of the two methods are not significantly different. This means introducing a sparse component into the decomposition form does not affect the convergence dynamics.
Figure A6: Convergence speed of two methods (LoRA and DSEE). The x-axis represents evaluation steps and y-axis represents the test accuracy.
A15 | 1. What is the main contribution of the paper, and how does it improve upon previous methods?
2. What are the strengths and weaknesses of the proposed method, particularly regarding its ability to combine weight pruning and parameter-efficient tuning?
3. Do you have any concerns about the necessity and effectiveness of adding a sparse matrix to LoRA?
4. How does the reviewer assess the significance and trustworthiness of the experimental results presented in the paper?
5. What are the limitations of the technical contributions of the paper, especially considering its reliance on existing methods?
6. Can the authors provide further explanation or justification for their design choices, such as incorporating unstructured and structured weight pruning techniques?
7. Are there any minor presentation issues or areas where the paper could benefit from additional information or clarification? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes to enforce weight sparsity constraints into a prior parameter-efficient tuning method (LoRA) to achieve both parameter-efficient tuning and resource-efficient inference. Specifically, this paper (1) adds another sparse, learnable weight matrix to the original low-rank decomposition in LoRA; (2) incorporates unstructured and structured weight pruning techniques to save computation at inference time. Experiments are conducted on GLUE and several data2text generation benchmarks to demonstrate the effectiveness of the proposed method.
Review
The strengths and weaknesses are detailed below:
Strengths:
Combining weight-pruning and parameter-efficient tuning is a valuable attempt to further improve resource efficiency especially at inference time
Experiments demonstrate the effectiveness of the proposed method. The analysis sections regarding DSEE are also appreciated.
Weaknesses:
(This is a minor sanity check question) On GLUE tasks, are the results from one run or several random runs? This detail seems not to be mentioned in the paper – it is common practice to report results from several random runs, since the GLUE results are known to be sensitive to initialization
The necessity of adding a sparse matrix (i.e. S2 in the paper) into LoRA is not well supported. The justification experimental results in Table 1 and Table 2 are not very convincing to me because (1) the improvement from “+S2” is very small in most cases (and sometimes worse), for example, I do not think 0.008/0.003 are meaningful differences to claim in the paper; without significance test, most of the small differences may not be trustable; (2) In Table 1 and Table 2, many results from UV of 2x sizes (i.e. 589.8K in Table 1 and 0.39M from Table 2) are better than UV + S2. I know that this is not a fair comparison because of different models sizes, however, I suggest the authors double the size of UV + S2 for a fair comparison instead of downsizing UV – on the scale of 589.8K or 0.39M parameters, the number of parameters is not really a practical concern anymore. For example, your method can tune 1M less parameters than others, but such a 1M parameter saving may not lead to any meaningful differences in most practical settings.
Therefore, the sparse S2 adds complexity to the method and requires additional algorithm 1 in the pipeline, yet it does not yield very effective improvement as argued above, thus I do not think this is a proper design.
Without considering the questionable contributions of S2 because of point 2, I think the remaining technical contributions of this paper are limited because it is a combination of the existing parameter-efficient tuning method (LoRA) and existing weight pruning methods. Such a combination is empirically valuable but technically trivial.
The improvement of DSEE over LoRA seems to come from sparsity in the pretrained weights; it would be better to explain in the paper why weight pruning leads to a significant performance boost.
There is a sentence above Section 4.2 “DeBERTa is a larger model so applying low-rank decomposition with our hyperparameters cannot reach the performance of fine-tuning.” This statement is misleading and contradicts the claims by other papers [1] [2], which conclude that a larger model typically requires a lower-dimensional parameter update space.
The are some minor presentation flaws:
(1) In the first paragraph of the introduction section, citations to related papers should be included such as when mentioning BERT and GPT2.
(2) Above section 4.2, the authors say that Table 5 uses SST-2 …. which are not present in Table 5
[1] Lester et al. The Power of Scale for Parameter-Efficient Prompt Tuning. EMNLP 2021
[2] Aghajanyan et al. Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning. ACL 2021 |
ICLR | Title
FEDERATED LEARNING FRAMEWORK BASED ON TRIMMED MEAN AGGREGATION RULES
Abstract
This paper studies the problem of information security in the distributed learning framework. In particular, we consider clients that are attacked by Byzantine nodes and poisoning during federated learning. Typically, aggregation rules are utilized to protect the model from such attacks in federated learning. However, classical aggregation methods such as Krum(·) and Mean(·) are not robust enough to deal with Byzantine attacks that involve large deviations or attack multiple clients simultaneously. We propose new aggregation rules, Tmean(·), for the federated learning algorithm, and propose a federated learning framework based on a Byzantine resilient aggregation algorithm. Our Tmean(·) rules are derived from Mean(·) by appropriately trimming some of the values before averaging them. We provide a theoretical analysis and understanding of Tmean(·). Extensive experiments validate the effectiveness of our approaches.
1 INTRODUCTION
As a special case of distributed machine learning, federated learning (FL) has drawn increasing research attention recently. FL has become a promising approach that enables clients to collaboratively learn a shared model with decentralized and private data on each client node. It is thus of central importance to keep the node information secure in FL. Unfortunately, FL is very vulnerable to software/hardware errors and adversarial attacks; in particular, Byzantine attacks from distributed systems have arisen as a key type of node attack in federated learning.
To ensure that the FL model is resistant to Byzantine attacks, research focuses on how to introduce aggregation rules into the gradient information iteration process, and how to ensure that these aggregation rules make the data robust. In particular, classical approaches to avoid Byzantine failures employ the state machine replication strategy (Alistarh et al., 2018), which can be realized in roughly two ways in distributed machine learning: (1) the processes agree on a sample of data based on which the clients update their local parameter vectors; (2) the clients agree on how the parameter vector should be updated (Blanchard et al., 2017). The former demands transmitting data samples to each individual node, resulting in high costs. The latter is not reliable either, as we cannot detect whether clients are trustworthy from the mixed Byzantine vectors. The attacker can know any information about the process of FL, and can use arbitrary vectors to initiate an attack during the node's information transmission (Yin et al., 2018). More specifically, the data exchanged between machines can be replaced by any value. This problem is not fully addressed in previous works.
When facing attacks from Byzantine nodes, FL models have to rely on robust aggregation rules to minimize the influence of the Byzantine attack on the data model (Bottou, 2010). For instance, Krum(·) is a strong aggregation rule designed to identify an honest node, and it can effectively prevent Byzantine attacks in most cases (Blanchard et al., 2017). However, if the Byzantine attack is changed from the original single-miner-node attack to attacks on multiple servers, Krum(·) is not able to keep the learning process robust to noisy data. As another class of aggregation rules, simple mean-based aggregation rules can maintain learning robustness if not attacked by Byzantine nodes. However, under a Byzantine attack, simple mean-based aggregation rules let the update direction of the node gradient information deviate largely from the original function. Some variants of mean-based aggregation rules, such as the geometric median (Blanchard et al., 2017) and the marginal median (Alistarh et al., 2018), are classical methods for the Byzantine node attack problem, but they are not much more robust than simple mean-based aggregation rules under Byzantine attacks with large deviations. The main difficulty that prevents mean-based aggregation rules from being robust is the unstable distribution of the data (Jin et al., 2020). We find that this difficulty can be tackled if part of the data is trimmed before averaging and the result is imported into the aggregation rules. This motivates our work in this paper.
Most FL approaches are built upon the Stochastic Gradient Descent (SGD) algorithm (Castro et al., 1999) or its variants, and the statistical properties of SGD, such as convergence, are well developed. Our approach also employs SGD; the typical iterative process of the SGD algorithm in a distributed system is represented by Figure 1. First, a client, known as a miner node, estimates the gradient of the node, estimates the deviation between the estimated information and the ideal gradient information (El-Mhamdi et al., 2020), then passes this information to the server node in the network, where the gradient is finally updated. So the miner node network transmits the information to the master server, and the master server then passes the gradient update information through a series of aggregation operations. When the aggregation conditions are met, the server transmits the information to the distribution network, and then to the miner nodes. This is a cyclic process, such that the gradient information is continuously updated in the entire network (Li et al., 2014).
In this paper, we mainly propose new aggregation rules, Tmean(·), obtained by trimming part of the data before the averaging operation. We provide theoretical understandings of our aggregation rules, in particular why they are robust to Byzantine attacks. Through attack experiments and mathematical proofs, we conclude that appropriately trimming the original data before averaging can make the model more robust on decentralized data.
Specifically, in Section 3 we introduce the federated learning Byzantine distributed system model, briefly describe the working principle of SGD, and summarize the update iteration rules based on aggregation rules. Then, we present the concept of Byzantine resilience and the conditions that aggregation rules must satisfy to be Byzantine resilient. We provide the concept of the trimmed mean and a rigorous theoretical proof and understanding of Tmean(·)-based aggregation rules in Section 4. Then, we prove the convergence of the proposed federated learning aggregation rules in Section 5. In Section 6, we conduct experiments with Gaussian attacks, omniscient attacks, and multiple-server attacks. Under these attacks, Tmean(·)-based FL aggregation rules can still maintain robustness. These experiments thus validate the effectiveness of our approaches.
Contributions: (1) We present new aggregation rules, Tmean(·), for the Byzantine resilient federated learning algorithm, and propose federated learning frameworks based on this aggregation algorithm. Our proposed approaches are shown to be robust to Byzantine attacks.
(2) We provide rigorous theoretical proofs and understanding of our approaches and aggregation rules. To the best of our knowledge, these theorems and theoretical understandings are contributed to the community for the first time. Critically, we give convergence certificates and prove that Tmean(·) can converge in the general convex optimization setting. (3) Empirically, we demonstrate that our approaches can make the FL model robust to Byzantine attacks.
2 RELATED WORK
Federated Learning. Federated learning has become a prominent distributed learning paradigm. In (Jin et al., 2020), the authors proposed Stochastic-Sign SGD, a parameter estimation method with convergence guarantees, which uses a gradient compressor based on random signs to unify the gradient updates in the framework. The FedAvg algorithm was first proposed by (Konečnỳ et al., 2016), and many scholars have subsequently improved upon it. (Karimireddy et al., 2020) pointed out that FedAvg can be improved by adding control variates to correct the client drift, and proved that FedAvg may be seriously affected by the gradient differences between clients, and may even be slower than SGD.
Byzantine attack in FL. To make the transmission at the master server node secure, FL models have to deal with Byzantine attacks. (Yin et al., 2018) showed that certain aggregation rules at the master server node can ensure the robustness of the data. (Blanchard et al., 2017) pointed out that Krum(·) can achieve robustness by computing, for each candidate, the local sum of squared Euclidean distances to the other candidates, and outputting the one with minimal sum. When the gradient update reaches a saddle point, there is no guarantee that the SGD algorithm converges to the global optimum. Some scholars have proposed ByzantinePGD (Yin et al., 2019), which can escape saddle points and spurious local minima, and can converge to an approximate true local minimum with low iteration complexity. Later, in the strong anti-Byzantine model, poisoning attacks against Byzantine-robust learning were carried out, from the original data collection process to the subsequent information exchange process, together with corresponding defenses (Zhao et al., 2021). The research on the Byzantine structure model of federated learning has thus been expanded once again (Zhao et al., 2021).
Attack and Defence. In the framework of federated learning, miner nodes are usually attacked, e.g., by Byzantine node attacks, poisoning attacks (Zhao et al., 2021), and gradient leakage (Wei et al., 2020). Various defense methods also exist, such as robust aggregation, secure aggregation, and encryption. Our paper mainly studies Byzantine node attacks, which can usually be summarized into two types: (1) during the update of node gradient information, nodes are replaced by Byzantine nodes, and normal worker nodes cannot detect this, so the gradient estimates deviate from the actual gradient update direction; (2) during the gradient process, a local node suffering from the interference of a Byzantine node cannot reach consensus with the master node, preventing the entire process from proceeding normally (Lamport et al., 2019). Krum(·) is a popular aggregation rule to deal with these node attacks. We propose Tmean(·) as an alternative aggregation rule.
3 PRELIMINARY AND PROBLEM SETUP
3.1 FEDERATED LEARNING SETTING
Federated learning was first proposed in (Konečnỳ et al., 2016), where the prevalent asynchronous SGD is used to update a global model in a distributed fashion. A pioneering work in this field proposed the currently most widely used algorithm, FedAvg (McMahan et al., 2017), which is also the first synchronous algorithm dedicated to federated setting. Recent studies attempt to expand federated learning with the aim of providing learning in more diverse and practical environments
Server:
  Initialize x^{(0)} ← rand();
  for t = 0, 1, . . . , T do
    Broadcast x^{(t)} to all the workers;
    Wait until all the gradients ṽ_1^{(t)}, ṽ_2^{(t)}, . . . , ṽ_m^{(t)} arrive;
    Compute G^{(t)} = Aggr(ṽ_1^{(t)}, ṽ_2^{(t)}, . . . , ṽ_m^{(t)});
    Update the parameter x^{(t+1)} ← x^{(t)} − γ^{(t)} G^{(t)};
  end

Worker k = 1, . . . , m:
  for t = 0, 1, . . . , T do
    Receive x^{(t)} from the server;
    Compute and send the local stochastic gradient v_k^{(t)} = ∇f(x^{(t)}, ξ_k) to the server;
  end

Algorithm 1: Distributed synchronous SGD with a robust aggregation server.
such as multi-task learning, generative models, continual learning, and data with noisy labels. However, these algorithms may obtain suboptimal performance when the miners participating in FL have non-independent and identically distributed (non-i.i.d.) data (Zhao et al., 2018).
While the convergence of FedAvg in such settings was initially shown by experiments in (McMahan et al., 2017), it does not guarantee performance as good as that in an i.i.d. setting. Algorithms that pointed out this issue have major limitations, such as violating privacy by partially sharing local data globally (Zhao et al., 2018), or showing no improvement over baseline algorithms such as FedAvg (Hsieh et al., 2020). Our paper focuses on a general federated setting.
In order to better study the problem of aggregation in federated learning, we consider the following optimization problem:
$$\min_{x\in\mathbb{R}^d} F(x), \tag{1}$$

where F(x) = E_{ξ∼D}[f(x; ξ)] is a smooth convex function and ξ is sampled from some unknown distribution D. Ideally, problem (1) can be solved by the gradient descent method as

$$x^{(t+1)} \leftarrow x^{(t)} - \gamma^{(t)} \nabla F(x^{(t)}), \tag{2}$$

where γ^{(t)} is the learning rate at the t-th round. We assume that there exists at least one minimizer of F(x), denoted by x^*, that satisfies ∇F(x^*) = 0. Problem (1) is solved in a distributed manner with m miner nodes, up to q of which may be Byzantine. The detailed algorithm of distributed synchronous SGD with aggregation rule Aggr(·) is shown in Algorithm 1. In each iteration, each miner samples n i.i.d. data points from the distribution D and computes the gradient of the local empirical loss. Using a certain aggregation rule Aggr(·), the server collects and aggregates the gradients sent by the miners, and estimates the gradient through ∇F(x^{(t)}) ≈ Aggr(v_1, v_2, . . . , v_m). Without Byzantine failures, the k-th worker computes v_k^{(t)} ∼ G^{(t)}, where G^{(t)} = ∇f(x^{(t)}, ξ). When there exist Byzantine faults, v_k^{(t)} can be replaced by an arbitrary value ṽ_k^{(t)}, and hence
∇F (x(t)) ≈ Aggr(ṽ1, ṽ2, . . . , ṽm). (3)
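The server side of Algorithm 1 with a pluggable aggregation rule can be sketched as follows; this is a minimal illustration with hypothetical worker callables, not the authors' implementation.

```python
import numpy as np

def robust_sgd(x0: np.ndarray, workers, aggr, lr: float, T: int) -> np.ndarray:
    """Server loop of Algorithm 1.

    workers: list of callables; each returns a (possibly Byzantine) stochastic
             gradient for the broadcast parameter vector.
    aggr:    aggregation rule, e.g. Mean, Krum, or Tmean_b.
    """
    x = x0.copy()
    for _ in range(T):
        grads = [worker(x) for worker in workers]  # broadcast and collect
        x = x - lr * aggr(grads)                   # aggregate and update
    return x
```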
3.2 BYZANTINE RESILIENCE
In the following we introduce the concept of Byzantine resilience. Suppose that in a specific iteration, the correct vectors v1, v2, . . . , vm are i.i.d. samples drawn from the random variable G = ∇f(x; ξk), where E[G] = g is the gradient at the current parameter x, so that each vk is an unbiased estimator of g. Thus, E[vk] = E[G] = g for k ∈ {1, 2, . . . ,m}. We simplify the notations by omitting the iteration index t. The following Definition 1 defines the concept of ∆-Byzantine resilience. For simplicity, we sometimes say Byzantine resilient in short when the concrete value of ∆ is either clear from the context or not important.
Definition 1 (Byzantine Resilience). Suppose that 0 ≤ q ≤ m, and ∆ is a given positive number. Let v1, v2, . . . , vm be i.i.d. random vectors in Rd, where vk ∼ G with E[G] = g. Let ṽ1, ṽ2, . . . , ṽm be attacked copies of vk’s, such that at least m− q of which are equal to the corresponding vk’s while the rest are arbitrary vectors in Rd. An aggregation rule Aggr(·) is said to be ∆-Byzantine resilient if
E [ ‖Aggr(ṽ1, ṽ2, . . . , ṽm)− g‖2 ] ≤ ∆.
4 FL FRAMEWORK WITH ROBUST AGGREGATION
Nowadays, the aggregation rules used in the FL framework are mainly based on Krum(·) and Mean(·). They are usually used in the defense against Byzantine node attacks. However, Mean(·)-based aggregation rules often fail to exhibit strong robustness when affected by Byzantine nodes: the attack makes the gradient estimation direction deviate too far from the actual gradient update direction, and the robustness of the data cannot be guaranteed. We therefore propose trimmed-mean-based aggregation rules to resolve this issue.
4.1 KRUM
The Krum rule is a strong aggregation rule that attempts to identify an honest computing node, and discards the data of other computing nodes. The data from the identified honest computing node is used in the next round of algorithm. The selection strategy is to find one whose data is closest to that on other computing nodes. In other words, it computes the local sum of squared Euclidean distances to the other candidates, and outputs the one with minimal sum. A notable property of Krum(·) is shown in Lemma 1. Lemma 1 ((Blanchard et al., 2017)). Let v1, . . . , vm ∈ Rd be any i.i.d. random vectors such that vk ∼ G with E[G] = g and E [ ‖G− g‖2] ≤ V . Let ṽ1, ṽ2, . . . , ṽm be attacked copies of vk’s, where up to q of them are Byzantine. If 2q + 2 < m, then Krum(·) is ∆0-Byzantine resilient, where
$$\Delta_0 = \left( 6m - 6q + 4q(m-q-2) + \frac{4q^2(m-q-1)}{m-2q-2} \right) V.$$
4.2 MEAN-BASED AGGREGATION RULES
An important class of aggregation rules is mean-based aggregation rules. Mathematically, an aggregation rule Aggr(·) is mean-based if Aggr(v_1, v_2, . . . , v_m) is guaranteed to be a convex combination of v_1, v_2, . . . , v_m. Roughly speaking, a mean-based rule produces an averaged value of the input data. For instance, the arithmetic mean $\mathrm{Mean}(v_1, v_2, \ldots, v_m) = \frac{1}{m}\sum_{k=1}^{m} v_k$ is the simplest mean-based aggregation rule. However, simple mean-based aggregation rules are in general not robust under Byzantine attacks, in the sense that they do not satisfy Byzantine resilience conditions (Yin et al., 2018).
In the following, we show that by carefully handling the data, certain mean-based rules can satisfy Byzantine resilience conditions. We first introduce the concept of b-trimmed means based on order statistics as in Definition 2. This concept is closely related to the coordinate-wise trimmed mean defined in (Yin et al., 2018). Definition 2. Let u1, u2, . . . , um be scalars whose order statistics is
u(1) ≤ u(2) ≤ · · · ≤ u(m). For an integer b with 0 ≤ b < m/2, the b-trimmed mean of u1, u2, . . . , um is defined as
$$\mathrm{Tmean}_b(u_1, u_2, \ldots, u_m) = \frac{1}{m-2b} \sum_{k=b+1}^{m-b} u_{(k)}.$$
For d-vectors v1, v2, . . . , vm ∈ Rd, the b-trimmed mean Tmeanb(v1, v2, . . . , vm) ∈ Rd is defined in the component-wise sense, i.e.,
$$\mathrm{Tmean}_b(v_1, v_2, \ldots, v_m) = \sum_{i=1}^{d} e_i \cdot \mathrm{Tmean}_b\big(\langle v_1, e_i\rangle, \langle v_2, e_i\rangle, \ldots, \langle v_m, e_i\rangle\big),$$
where ei is the i-th column of the d× d identity matrix.
Intuitively, aggregation rules using b-trimmed means have opportunities to discard abnormal values from the input data. Lemma 2 shows that b-trimmed means are usually within a meaningful range under mild assumptions. Lemma 2. Let u1, u2, . . . , um be scalars whose order statistics is
u(1) ≤ u(2) ≤ · · · ≤ u(m), and ũ1, ũ2, . . . , ũm be attacked copies of u1, u2, . . . , um with up to q Byzantine elements. If the order statistics of ũk’s is ũ(1) ≤ ũ(2) ≤ · · · ≤ ũ(m), then for q ≤ b < m/2, we have
u(k−b) ≤ u(k−q) ≤ ũ(k) ≤ u(k+q) ≤ u(k+b), (b+ 1 ≤ k ≤ m− b).
Proof. Let ũ(k0) be the largest non-Byzantine element in {ũ(1), ũ(2) . . . , ũ(k)}. Since the number of Byzantine elements is less than or equal to q, we have ũ(k) ≥ ũ(k0) ≥ u(k−q) ≥ u(k−b) if k ≥ b+1. The proof of ũ(k) ≤ u(k+q) ≤ u(k+b) for k ≤ m− b is similar.
With the help of Lemma 2, we can prove that Tmean_b(·) is Byzantine resilient when b is greater than or equal to the number of Byzantine nodes. The result is formulated as Theorem 1. Theorem 1. Let v_1, . . . , v_m ∈ R^d be i.i.d. random vectors such that v_k ∼ G with E[G] = g and E[‖G − g‖^2] ≤ V. Let ṽ_1, ṽ_2, . . . , ṽ_m be attacked copies of the v_k's, where up to q of them are Byzantine. If q ≤ b < m/2, then Tmean_b(·) is ∆_1-Byzantine resilient, where

$$\Delta_1 = \frac{mV}{(m-b)^2}.$$
Proof. See the Appendix.
The exact time complexity of evaluating Tmean_b(ṽ_1, ṽ_2, . . . , ṽ_m) has no closed-form expression in terms of (m, b, d). However, by sorting in each dimension, we can evaluate Tmean_b(ṽ_1, ṽ_2, . . . , ṽ_m) using O(dm log m) operations. The cost is almost linear in practice, and is not much more expensive than evaluating Mean(·). Hence, Tmean(·) improves the robustness with only negligible overhead compared to Mean(·).
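Definition 2 translates directly into a few lines of NumPy; the sort below gives the O(dm log m) cost mentioned above (a minimal sketch with our own naming).

```python
import numpy as np

def tmean(vectors, b: int) -> np.ndarray:
    """Coordinate-wise b-trimmed mean of m d-vectors (Definition 2)."""
    V = np.sort(np.stack(vectors), axis=0)   # per-coordinate order statistics
    m = V.shape[0]
    assert 0 <= b < m / 2
    return V[b:m - b].mean(axis=0)           # average the middle m - 2b values
```

For example, it can be plugged into a distributed SGD server loop as the aggregation rule via `aggr = lambda grads: tmean(grads, b)`.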
5 CONVERGENCE ANALYSIS
In this section, we conduct a convergence analysis for SGD using a Byzantine resilient aggregation rule, such as Krum(·) or Tmean(·). Since a general convex optimization setting is used, we quote some classic convex analysis results (see, e.g., (Bubeck, 2014)), as shown in Lemma 3. Lemma 3. Let F : R^d → R be a µ-strongly convex and L-smooth function. Then for x, y ∈ R^d and α ∈ [0, 1], one has
$$\langle \nabla F(x) - \nabla F(y), x - y \rangle \ge \alpha\mu \|x-y\|^2 + \frac{1-\alpha}{L}\|\nabla F(x) - \nabla F(y)\|^2.$$
Proof. The µ-strong convexity and L-smoothness of F (x) implies
$$\langle \nabla F(x) - \nabla F(y), x - y \rangle \ge \mu\|x-y\|^2, \qquad \langle \nabla F(x) - \nabla F(y), x - y \rangle \ge \frac{1}{L}\|\nabla F(x) - \nabla F(y)\|^2.$$
The conclusion is a simple convex combination of these inequalities.
Using Lemma 3 we establish the convergence of SGD as shown in Theorem 2. The expected error bound consists of a linear convergence term, similar to that of usual gradient descent methods, and a term caused by randomness. The theorem also suggests the theoretically largest step size: γ = L^{-1}.
Theorem 2. Suppose that the SGD method with (3) is adopted to solve problem (1), where F : R^d → R is a µ-strongly convex and L-smooth function. Let v_1^{(t)}, . . . , v_m^{(t)} be the local gradients at the t-th iteration, and ṽ_1^{(t)}, . . . , ṽ_m^{(t)} be the corresponding attacked copies. If the aggregation rule Aggr(·) is ∆-Byzantine resilient, and the step size γ satisfies γ ≤ L^{-1}, then

$$\mathbb{E}\left[\|x^{(t)} - x^*\|\right] \le \eta^t \|x^{(0)} - x^*\| + \frac{1 + \sqrt{1 - \gamma\mu}}{\mu}\sqrt{\Delta},$$

where x^* is the exact solution, and

$$\eta = \sqrt{1 - \gamma\mu} \ge \sqrt{1 - \frac{\mu}{L}}.$$
Proof. See the Appendix.
We remark that if we allow a general α ∈ [0, 1] in the proof of Theorem 2 and adjust the corresponding step size as γ ≤ 2(1 − α)L^{-1}, the decay ratio η is then bounded through

$$\eta = \sqrt{1 - 2\alpha\gamma\mu} \ge \sqrt{1 - \frac{4\alpha(1-\alpha)\mu}{L}} \ge \sqrt{1 - \frac{\mu}{L}}.$$
The optimal lower bound is achieved when α = 1/2 and γ = L−1.
6 EXPERIMENTS
In this section we verify the robustness and convergence of the Tmean aggregation rules by experiments. We use a multi-layer perceptron (MLP) with two hidden layers for handwritten digit classification on the MNIST data set with m = 20 worker processes; we repeat each experiment ten times and report the average value. Table 1 shows the detailed network structure of the MLP used in our experiments. Then, we conduct recognition with a convolutional neural network on the Cifar10 dataset, again repeating each experiment ten times and reporting the averaged value. In our experiments, we use Mean(·) without Byzantine attack as a reference. We use top-1 or top-3 accuracy on the testing sets as the evaluation metrics. Table 2 lists the details of the data sets and the default hyper-parameters for the corresponding models. Our experiments are implemented in Python 3.7.4 with the packages TensorFlow 2.4.1 and NumPy 1.19.2, on a physical node with an Intel i7-9700 CPU, 32 GB of memory, and two NVIDIA GeForce GTX 1080 Ti GPUs.
6.1 EXPERIMENT SETTING
In our experiment setting, 6 out of the 20 vectors are Byzantine vectors (i.e., m = 20, q = 6), and different values of b lead to different convergence performance. For the MNIST dataset, we train with b = 6 and b = 8; on the Cifar10 dataset, we take b = 8 for the recognition task.
6.2 GAUSSIAN ATTACK
In the Gaussian attack experiment, we replace some of the correct gradient vectors with Gaussian random vectors with zero mean and an isotropic covariance matrix with standard deviation 200.
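A minimal sketch of this attack on the stacked gradients is given below; which q of the m rows are Byzantine is arbitrary, so for simplicity we overwrite the first q:

```python
import numpy as np

def gaussian_attack(grads, q, std=200.0, rng=np.random.default_rng()):
    """Replace q of the m local gradients (rows of an (m, d) array)
    with zero-mean isotropic Gaussian vectors of standard deviation std."""
    attacked = np.array(grads, dtype=float)
    attacked[:q] = rng.normal(scale=std, size=(q, attacked.shape[1]))
    return attacked
```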
From Table 3 for the MNIST dataset, we find that, compared with Mean(·) without Byzantine attack, Tmeanb(·) works well and is more robust when b = 8. However, without trimming the data, Mean(·) is not robust under the Byzantine attack. Krum(·) is also robust, although relatively weaker than Tmean(·). For the Cifar10 dataset, we set b = 8. From Table 4, the aggregation rule based on Mean(·) is not as robust as the other aggregation rules, but after trimming, the output of Mean(·) is more robust than before.
6.3 OMNISCIENT ATTACK
In this attack we assume that the attackers (Byzantine nodes) know all the correct gradient information. Each Byzantine gradient vector is replaced by the negative sum of all the correct gradients. In other words, this attack attempts to make the parameter server move in the opposite direction.
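A minimal sketch of the omniscient attack, again overwriting the first q rows for simplicity:

```python
import numpy as np

def omniscient_attack(grads, q):
    """Replace each of the first q local gradients (rows of an (m, d) array)
    by the negative sum of the m - q correct gradients."""
    attacked = np.array(grads, dtype=float)
    attacked[:q] = -attacked[q:].sum(axis=0)
    return attacked
```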
For the MNIST dataset, we see from Table 5 that when the Byzantine nodes make the gradient update direction deviate maximally from the actual update direction, Mean(·) cannot remain robust. On the contrary, Krum(·) is more robust, while Tmean(·) shows only weak robustness. Because the trimmed mean aggregates only a central part of the sorted data, removing the head and tail parts of the entire set, its accuracy does not decrease much compared to the plain mean when under Byzantine attack.
For the Cifar10 dataset, Table 6 shows that Krum(·) is more robust than the others. Mean-based aggregation behaves badly under the omniscient attack. If we trim Mean(·), it can be clearly seen that the output becomes more robust; however, under this attack, Tmean(·) is still not as robust as Krum(·).
6.4 GENERAL ATTACK WITH MULTIPLE SERVERS
We evaluate the robust aggregation rules under a more general and realistic type of attack. It is popular to partition the parameters into disjoint subsets and to use multiple server nodes to store and aggregate them. We assume that the parameters are evenly partitioned and assigned to the server nodes. The attacker picks one server and manipulates any floating-point number by multiplying it by −10^20 with probability 0.05%. The attacker manipulates the values randomly, with the goal that in some iterations the assumptions/prerequisites of the robust aggregation rules are broken, which crashes the training. Such an attack requires less global information and can be concentrated on one single server, which makes it more realistic and easier to implement.
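The sketch below illustrates this attack on a single server's parameter shard; the flat-array shard layout is our assumption, since the paper does not specify how the parameters are partitioned:

```python
import numpy as np

def multi_server_attack(shard, p=0.0005, factor=-1e20, rng=np.random.default_rng()):
    """Multiply each entry of one server's parameter shard by factor
    with probability p (p = 0.05% and factor = -1e20, as described above)."""
    shard = np.array(shard, dtype=float)
    mask = rng.random(shard.shape) < p
    shard[mask] *= factor
    return shard
```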
For the MNIST dataset, in Table 7 we evaluate the performance of all robust aggregation rules under the multiple-servers attack. The number of servers is 20. For Krum(·) and Mean(·), we set the estimated Byzantine number to q = 6. We find that Krum(·) does not pass this test, and the mean-based aggregation rules cannot always remain robust; in these cases, they cannot converge to the global optimum. On the other hand, for Tmean(·), different values of b lead to different convergence performance, and if we take b = 6 it also behaves robustly.
For the Cifar10 dataset, Table 8 shows the advantage of Tmean(·): neither Krum(·) nor Mean(·) is robust, while Tmean(·) keeps the learning robust.
6.5 EXPERIMENT CONCLUSION
From the above experiments, we find that it is difficult for Mean(·) to ensure robustness when it is attacked by any Byzantine node. When multiple server nodes are used at the same time, Krum(·) also becomes vulnerable. However, after the data is properly trimmed, Tmean(·) can maintain robustness. Under omniscient attacks, Tmean(·) is not as robust as Krum(·), but it still improves a lot compared to Mean(·) in this case. The robustness of Tmean(·) is also related to the value of b: we find that the closer b is to m/2, the stronger the robustness it achieves.
7 CONCLUSIONS
We analyzed the aggregation rules of Tmean(·). We demonstrated that our approaches can make the FL model robust to Byzantine attacks. We used three different Byzantine node attacks to show that the original data can be handled more robustly after partial trimming and averaging operations.
This work focuses on Tmean(·). In future work we plan to refine the trimming range of Tmean(·) and to prove its convergence in a non-convex setting. At the same time, we will add momentum on top of Tmean(·), or add some constraints to further strengthen its robustness. | 1. What is the main contribution of the paper regarding federated learning?
2. What are the concerns regarding the proposed aggregation rule Tmean and its comparison with other works?
3. How does the reviewer assess the empirical results and their solidity?
4. What are the limitations of the analysis under the i.i.d. assumption in the context of federated learning?
5. Do you have any questions or suggestions for improving the paper?
Summary Of The Paper
The authors propose the aggregation rule Tmean for federated learning. They show the effectiveness and Byzantine robustness of Tmean by both theoretical analysis and empirical evaluation.
Review
This paper is very readable, and the main idea of this work is not difficult to understand. However, there are some concerns, as listed below:
The proposed aggregation rule Tmean seems identical to 'coordinate-wise trimmed mean' [1]. Also, the theoretical estimation and analysis in the paper seem rougher than that in [1]. Could the authors comment on this and compare their work with [1]?
The empirical results in this work are not solid enough. The authors only compare their methods with Krum and mean. There are many other aggregation rules, such as geometric median, centered-clipping [2], and so on. Also, the authors do not evaluate their methods under some advanced attacks [3, 4].
The authors propose Tmean for federated learning (FL), where the data of different clients can be heterogeneous. However, their analysis is based on the i.i.d. assumption, which usually does not hold in FL.
[1] Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In International Conference on Machine Learning, pp. 5650–5659. PMLR, 2018.
[2] Sai Praneeth Karimireddy, Li He, and Martin Jaggi. Learning from history for Byzantine robust optimization. In ICML 2021 - 37th International Conference on Machine Learning, 2021.
[3] Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Fall of Empires: Breaking Byzantine-tolerant SGD by Inner Product Manipulation. In UAI - Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, 2020.
[4] Moran Baruch, Gilad Baruch, and Yoav Goldberg. A Little Is Enough: Circumventing Defenses For Distributed Learning. NeurIPS, 2019. |
1. What is the focus of the paper regarding federated learning?
2. What are the strengths of the proposed approach, particularly in terms of robust aggregation?
3. What are the weaknesses of the paper, especially regarding its originality and contributions compared to prior works?
4. Do you have any concerns about the assumptions made in the paper regarding data distributions and optimization rounds?
5. How does the reviewer assess the relevance and significance of the paper's content in the context of distributed learning and Byzantine attacks?
Summary Of The Paper
This work analyses the use of the trimmed mean as a robust aggregation rule in federated learning. The study proposes a convergence analysis of the proposed framework, while the robust aggregation scheme is assessed on different benchmarks with respect to standard averaging and to the previous work of Blanchard et al.
Review
The problem of robust aggregation in FL is very relevant, and the solution investigated in this study is meaningful. My main concern with this paper is the lack of originality, as there are already a number of works and frameworks focusing on robust FL aggregation. In particular, the trimmed mean has already been proposed in the previously published work of Yin et al. [1]. In that work, FL aggregation based on the trimmed mean was already proposed and theoretically investigated, along with other additional aggregation methods. This work is not cited by the authors. In particular, Theorem 2 of this paper seems to be already present in Yin et al.
Moreover, this work is not necessarily focused on FL. The authors assume that clients have the same data distributions and perform K = 1 local SGD step at every optimization round. The theoretical setting considered is the one of Distributed Learning, whose resilience to Byzantine attacks has already been heavily investigated too.
[1] Yin, Dong, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. "Byzantine-robust distributed learning: Towards optimal statistical rates." In International Conference on Machine Learning, pp. 5650-5659. PMLR, 2018. |
ICLR | Title
FEDERATED LEARNING FRAMEWORK BASED ON TRIMMED MEAN AGGREGATION RULES
Abstract
This paper studies the problem of information security in the distributed learning framework. In particular, we consider the clients will always be attacked by Byzantine nodes and poisoning in the federated learning. Typically, aggregation rules are utilized to protect the model from the attacks in federated learning. However, classical aggregation methods such as Krum(·) and Mean(·) are not capable enough to deal with Byzantine attacks in which general deviations and multiple clients are attacked at the same time. We propose new aggregation rules, Tmean(·), to the federated learning algorithm, and propose a federated learning framework based on Byzantine resilient aggregation algorithm. Our Tmean(·) rules are derived from Mean(·) by appropriately trimming some of the values before averaging them. Theoretically, we provide theoretical analysis and understanding of Tmean(·). Extensive experiments validate the effectiveness of our approaches.
1 INTRODUCTION
As one special case of distributed machine learning, federated learning (FL) draws increasing research attention recently. FL has become one promising approach to enable clients collaboratively to learn a shared model with the decentralized and private data on each client node. Thus it is of central importance to keep the node information secured in FL. Unfortunately, FL is very vulnerable to software/hardware errors and adversarial attacks; especially Byzantine attack from distributed systems has arose to be a key node attack sample for federated learning.
To ensure the FL model resistance to Byzantine attack, research focuses are made on how to introduce aggregation rules into the gradient information iteration process, and how to ensure that this aggregation rule can make the data robust. In particular, classical approaches to avoid the Byzantine failures would employ the state machine replication strategy (Alistarh et al., 2018), which can be roughly categorized into two ways in distributed machine learning: (1) the processes agree on a sample of data based on which the clients update their local parameter vectors; (2) the clients agree on how the parameter vector should be updated (Blanchard et al., 2017). The former ones demand transmitting data samples to each individual node, resulting in high costs. The latter ways are not reliable neither, as we cannot detect whether clients are trustworthy or not, from the mixed Byzantine vectors. The attacker can know any information about the process of FL, and can use any vectors to initiate the attack during the node’s information transmission (Yin et al., 2018). More specifically, the data between machines can be replaced by any value. This problem is not fully addressed in previous works.
When facing the attacks from Byzantine nodes, the FL models have to rely on robust aggregation rules to minimize the influence of the Byzantine attack on the data model (Bottou, 2010). For instance, Krum(·) is a strong aggregation rule designed to identify an honest node such that it can effectively prevent Byzantine attacks in most cases (Blanchard et al., 2017). However, if the Byzantine node attack is changed from the original single miner node attack to multiple server attacks, Krum(·) is not able to ensure the learning process robustness to noisy data. As another class of aggregation rules, simple mean-based aggregation rules can maintain learning robustness if it is not attacked by Byzantine nodes. However, the Byzantine attack will make the update direction of node gradient information largely deviate from the original function by using simple mean-based aggregation rules. Some variants of mean-based aggregation rules, such as geometric median (Blanchard
et al., 2017), marginal median (Alistarh et al., 2018), are the classical methods to solve the Byzantine node attack problem. But they are not much more robust than simple mean-based aggregation rules in the case of some large deviation Byzantine attacks. The main difficulty that prevents mean-based aggregation rules from robust is the unstable distribution of data (Jin et al., 2020). We find that the difficulty can be tackled, if the data is averaged by trimming a part of the data and then imported into the aggregation rules. This motivates our work in this paper.
Most of FL approaches are built upon the Stochastic Gradient Descent (SGD) algorithm (Castro et al., 1999) or its variants, and the statistical properties of SGD such as convergence are well developed. Our approach also employs the SGD; and the typical iterative process of SGD algorithm in distributed system is represented by Figure 1. First, a client, known as a miner node, estimates the gradient of the node, makes an estimate of the deviation between the estimated information and the ideal gradient information (El-Mhamdi et al., 2020), then passes this information to the server node in the network, and finally update the gradient there. So the miner node network transmits the information to the master server, and the master server then passes the gradient update information through a series of aggregation operations. When the aggregation conditions are met, the server transmits the information to the distribution network, and then transmitted to the miner nodes. This is a cyclic process, such that the gradient information is continuously updated in the entire network (Li et al., 2014).
In this paper, we mainly propose new aggregation rules, Tmean(·), by trimming part of the data before the average operation. We provide theoretical understandings of our aggregation rules, especially, why they are robust to the Byzantine attack. Through attack experiments and mathematical proofs, we have concluded that appropriately trimming the original data and averaging can make the model more robust from the decentralized data.
Specifically, in section 3 we introduce the federated learning Byzantine distributed system model, briefly describes the working principle of SGD, and summarizes the update iteration rules based on aggregation rules. Then, we present the concept of Byzantine resilience, and the conditions to satisfy the aggregation rules of Byzantine resilience. We provide concept of trimmed mean and the rigorous theoretical proof and understanding of Tmean(·) based aggregation rules in section 4. Then, we prove the convergence of the proposed federated learning aggregation rules in section 5. In section 6, we conduct experiments by Gaussian attack, omniscient attack and multiple servers attack. Under these attacks, Tmean(·)-based FL aggregation rules can still maintain robustness. These experiments thus validate the effectiveness of our approaches.
Contributions: (1) We present new aggregation rules, Tmean(·), to the Byzantine resilient federated learning algorithm, and propose federated learning frameworks based on aggregation algorithm. Our proposed approaches are shown to be robust to Byzantine attacks.
(2) We provide rigorous theoretical proof and understanding of our approaches and aggregation rules. To the best of our knowledge, these theorems and theoretical understandings are for the first time contributed to the community. Critically, we make convergence certificates and prove that Tmean(·) can converge in the general convex optimization setting. (3) Empirically, we demonstrate that the effectiveness of our approaches can make the FL model robust to Byzantine attack.
2 RELATED WORK
Federated Learning. Federated learning has become a prominent distributed learning paradigm. In (Jin et al., 2020), the authors proposed Stochastic-Sign SGD, a parameter estimation method with convergence guarantees. It uses a gradient compressor based on random symbols to unify that the gradient is updated in the framework. The FedAvg algorithm was first proposed by (Konečnỳ et al., 2016), and many scholars subsequently improved it based on this algorithm. (Karimireddy et al., 2020) pointed out that FedAvg can be improved by adding one additional parameters control variables to correct the client drift, and proved that FedAvg may be seriously affected by the gradient difference of different clients, and may even be slower than SGD.
Byzantine attack in FL. To make the secure transmission of the master server node, the FL models have to deal with the Byzantine attack. (Yin et al., 2018) showed that certain aggregation rules in the master server node can ensure the robustness of the data. (Blanchard et al., 2017) pointed out that Krum(·) can be used to make the robustness of data by computing the local sum of squared Euclidean distance to the other candidates, and outputting the one with minimal sum. When the gradient is updated to the saddle point, there is no guarantee that the SGD algorithm converges to the global optimum. Some scholars have proposed ByzantinePGD (Yin et al., 2019), which can escape saddle points and false local minimums, and can converge to an approximate true local minimum with low iteration complexity. Later, in the strong anti-Byzantine model, some scholars carried out poisoning attacks on Byzantine robust data, from the original data collection process to the subsequent information exchange process, and at the same time, these attacks were defended accordingly, in (Zhao et al., 2021). The research on the Byzantine structure model of federated learning has been expanded once again (Zhao et al., 2021).
Attack and Defence. In the framework of federated learning, miner nodes are usually attacked, such as Byzantine node attacks, poisoning attacks (Zhao et al., 2021), gradient leakage (Wei et al., 2020), etc. There exist various defense methods also, such as robust aggregation, secure aggregation, encryption, etc. Our paper mainly studies Byzantine node attacks. Byzantine node attacks can usually be summarized into two types: (1) To update gradient information of nodes, nodes are replaced by Byzantine nodes, and normal worker nodes cannot make judgments so that the gradient estimates deviate from the actual gradient update direction. (2) During the gradient process, the local node suffering from the interference of the Byzantine node, cannot reach a consensus with the master node, making the entire process unable to proceed normally (Lamport et al., 2019). Krum(·) is a popular aggregation rule to deal with these node attacks. We shall propose Tmean(·) as alternative aggregation rules.
3 PRELIMINARY AND PROBLEM SETUP
3.1 FEDERATED LEARNING SETTING
Federated learning was first proposed in (Konečnỳ et al., 2016), where the prevalent asynchronous SGD is used to update a global model in a distributed fashion. A pioneering work in this field proposed the currently most widely used algorithm, FedAvg (McMahan et al., 2017), which is also the first synchronous algorithm dedicated to federated setting. Recent studies attempt to expand federated learning with the aim of providing learning in more diverse and practical environments
Server: Initialize x0 ← rand(); for t = 0, 1, . . . , T do
Broadcast x(t) to all the workers; Wait until all the gradients ṽ(t)1 , ṽ (t) 2 , . . . , ṽ (t) m arrive; Compute G(t) = Aggr(ṽ(t)1 , ṽ (t) 2 , . . . , ṽ (t) m );
Update the parameter x(t+1) ← x(t) − γ(t)G(t); end
Worker: for t = 0, 1, . . . , T do
Receive x(t) from the server; Compute and send the local randomized gradient v(t) = ∇F (x(t), ξk) to the server;
end Algorithm 1: Distributed synchronous SGD with robust aggregation server.
such as multi-task learning, generative models, continual learning, and data with noisy labels. However, these algorithms may obtain suboptimal performance when miners participating in FL have non-independently and identical distributions (non-i.i.d.) (Zhao et al., 2018).
While the convergence of FedAvg on such settings was initially shown by experiments in (McMahan et al., 2017), it does not guarantee performance as good as that in an i.i.d. setting. These algorithms pointed out the issue have major limitations, such as privacy violation by partial global sharing of local data (Zhao et al., 2018) or no indication of improvement over baseline algorithms such as FedAvg (Hsieh et al., 2020). Our paper focuses on a general federated setting.
In order to better study the problem of aggregation in federated learning, we consider the following optimization problem:
min x∈Rd F (x), (1)
where F (x) = Eξ∼D[f(x; ξ)] is a smooth convex function, ξ is sampled from some unknown distribution D. Ideally, the problem (1) can be solved by the gradient descent method as
x(t+1) ← x(t) − γ(t)∇F (x(t)), (2) where γ(t) is the learning rate at t-th round. We assume that there exists at least one minimizer of F (x), denoted by x∗, that satisfies∇F (x∗) = 0. The problem (1) is solved in a distributed manner with m miner nodes, and up to q of them may be Byzantine nodes. The detailed algorithm of distributed synchronous SGD with aggregation rule Aggr(·) is shown in Algorithm 1. In each iteration, each miner samples n i.i.d. data points from the distribution D, and computes the gradient of the local empirical loss. Using a certain aggregation rule Aggr(·), the server collects and aggregates the gradients sent by the miners, and estimate the gradient through ∇F (x(t)) ≈ Aggr(v1, v2, . . . , vm). Without Byzantine failures, the k-th worker calculates v(t)k ∼ G(t), where G(t) = ∇f(x(t), ξ). When there exist Byzantine faults, v (t) k can be replaced by any arbitrary value ṽ(t)k , and hence
∇F (x(t)) ≈ Aggr(ṽ1, ṽ2, . . . , ṽm). (3)
3.2 BYZANTINE RESILIENCE
In the following we introduce the concept of Byzantine resilience. Suppose that in a specific iteration, the correct vectors v1, v2, . . . , vm are i.i.d. samples drawn from the random variable G = ∇f(x; ξk), where E[G] = g is an unbiased estimator of the gradient based on the current parameter x. Thus, E[vk] = E[G] = g for k ∈ {1, 2, . . . ,m}. We simplify the notations by omitting the index of iteration t. The following Definition 1 defines the concept of ∆-Byzantine resilience. For simplicity, we sometimes use Byzantine resilient in short when the concrete value of ∆ is either clear from the context or not important.
Definition 1 (Byzantine Resilience). Suppose that 0 ≤ q ≤ m, and ∆ is a given positive number. Let v1, v2, . . . , vm be i.i.d. random vectors in Rd, where vk ∼ G with E[G] = g. Let ṽ1, ṽ2, . . . , ṽm be attacked copies of vk’s, such that at least m− q of which are equal to the corresponding vk’s while the rest are arbitrary vectors in Rd. An aggregation rule Aggr(·) is said to be ∆-Byzantine resilient if
E [ ‖Aggr(ṽ1, ṽ2, . . . , ṽm)− g‖2 ] ≤ ∆.
4 FL FRAMEWORK WITH ROBUST AGGREGATION
Nowadays, the aggregation rules based on the FL framework are mainly based on Krum(·) and Mean(·). They are usually used in the defense of Byzantine node attacks. However, the aggregation rules based on Mean(·) often cannot exhibit strong robustness, if it is affected by Byzantine nodes. The attack makes the gradient estimation direction deviate too far from the actual gradient update direction; and it cannot guarantee the robustness of the data. Thus we shall propose trimmed meanbased aggregation rules to resolve this issue.
4.1 KRUM
The Krum rule is a strong aggregation rule that attempts to identify an honest computing node, and discards the data of other computing nodes. The data from the identified honest computing node is used in the next round of algorithm. The selection strategy is to find one whose data is closest to that on other computing nodes. In other words, it computes the local sum of squared Euclidean distances to the other candidates, and outputs the one with minimal sum. A notable property of Krum(·) is shown in Lemma 1. Lemma 1 ((Blanchard et al., 2017)). Let v1, . . . , vm ∈ Rd be any i.i.d. random vectors such that vk ∼ G with E[G] = g and E [ ‖G− g‖2] ≤ V . Let ṽ1, ṽ2, . . . , ṽm be attacked copies of vk’s, where up to q of them are Byzantine. If 2q + 2 < m, then Krum(·) is ∆0-Byzantine resilient, where
∆0 =
( 6m− 6q + 4q(m− q − 2) + 4q
2(m− q − 1) m− 2q − 2
) V.
4.2 MEAN-BASED AGGREGATION RULES
An important class of aggregation rules is mean-based aggregation rules. Mathematically, an aggregation rule Aggr(·) is a mean-based one if Aggr(v1, v2, . . . , vm) is guaranteed to be a convex combination of v1, v2, . . ., vm. Roughly speaking, a mean-based rule produces an averaged value of the input data. For instance, the arithmetic mean Mean(v1, v2, . . . , vm) = 1m ∑m k=1 vk is the simplest mean-based aggregation rule. However, simple mean-based aggregation rules are in general not robust under Byzantine attacks, in the sense that they do not satisfy Byzantine resilience conditions (Yin et al., 2018).
In the following, we show that by carefully handling the data, certain mean-based rules can satisfy Byzantine resilience conditions. We first introduce the concept of b-trimmed means based on order statistics as in Definition 2. This concept is closely related to the coordinate-wise trimmed mean defined in (Yin et al., 2018). Definition 2. Let u1, u2, . . . , um be scalars whose order statistics is
u(1) ≤ u(2) ≤ · · · ≤ u(m). For an integer b with 0 ≤ b < m/2, the b-trimmed mean of u1, u2, . . . , um is defined as
Tmeanb(u1, u2, . . . , um) = 1
m− 2b m−b∑ k=b+1 u(k).
For d-vectors v1, v2, . . . , vm ∈ Rd, the b-trimmed mean Tmeanb(v1, v2, . . . , vm) ∈ Rd is defined in the component-wise sense, i.e.,
Tmeanb(v1, v2, . . . , vm) = d∑ i=1 ei · Tmeanb ( 〈v1, ei〉, 〈v2, ei〉, . . . , 〈vm, ei〉 ) ,
where ei is the i-th column of the d× d identity matrix.
Intuitively, aggregation rules using b-trimmed means have opportunities to discard abnormal values from the input data. Lemma 2 shows that b-trimmed means are usually within a meaningful range under mild assumptions. Lemma 2. Let u1, u2, . . . , um be scalars whose order statistics is
u(1) ≤ u(2) ≤ · · · ≤ u(m), and ũ1, ũ2, . . . , ũm be attacked copies of u1, u2, . . . , um with up to q Byzantine elements. If the order statistics of ũk’s is ũ(1) ≤ ũ(2) ≤ · · · ≤ ũ(m), then for q ≤ b < m/2, we have
u(k−b) ≤ u(k−q) ≤ ũ(k) ≤ u(k+q) ≤ u(k+b), (b+ 1 ≤ k ≤ m− b).
Proof. Let ũ(k0) be the largest non-Byzantine element in {ũ(1), ũ(2) . . . , ũ(k)}. Since the number of Byzantine elements is less than or equal to q, we have ũ(k) ≥ ũ(k0) ≥ u(k−q) ≥ u(k−b) if k ≥ b+1. The proof of ũ(k) ≤ u(k+q) ≤ u(k+b) for k ≤ m− b is similar.
With the help of Lemma 2, we can prove that Tmeanb(·) is Byzantine resilient when b is greater or equal to the number of Byzantine nodes. The result is formulated as Theorem 1. Theorem 1. Let v1, . . . , vm ∈ Rd be i.i.d. random vectors such that vk ∼ G with E[G] = g and E[G− g]2 ≤ V . Let ṽ1, ṽ2, . . . , ṽm be attacked copies of vk’s, where up to q of them are Byzantine. If q ≤ b < m/2, then Tmeanb(·) is ∆1-Byzantine resilient, where
∆1 = mV
(m− b)2 .
Proof. See the Appendix.
The exact time complexity for evaluating Tmeanb(ṽ1, ṽ2, . . . , ṽm) has no closed-form expression in terms of (m, b, d). However, by sorting in each dimension, we can evaluate Tmeanb(ṽ1, ṽ2, . . . , ṽm) usingO(dm logm) operations. The cost is almost linear in practice, and is not much more expensive than evaluating Mean(·). Hence, Tmean(·) improves the robustness with only negligible overhead compared to Mean(·).
5 CONVERGENCE ANALYSIS
In this section, we conduct a convergence analysis for SGD using a Byzantine resilient aggregation rule, such as Krum(·) and Tmean(·). Since a general convex optimization setting is used, we quote some classic convex analysis theories (see, e.g., (Bubeck, 2014)), as shown in Lemma 3. Lemma 3. Let F : Rd → R be a µ-strongly convex and L-smooth function. Then for x, y ∈ Rd and α ∈ [0, 1], one has
〈∇F (x)−∇F (y), x− y〉 ≥ αµ‖x− y‖2 + 1− α L ‖∇F (x)−∇F (y)‖2.
Proof. The µ-strong convexity and L-smoothness of F (x) implies
〈∇F (x)−∇F (y), x− y〉 ≥ µ‖x− y‖2,
〈∇F (x)−∇F (y), x− y〉 ≥ 1 L ‖∇F (x)−∇F (y)‖2.
The conclusion is a simple convex combination of these inequalities.
Using Lemma 3 we establish the convergence of SGD as shown in Theorem 2. The expected error bound consists of a linear convergence term similar to that for usual gradient descent methods, and a term caused by randomness. The theorem also suggests theoretically the largest step size: γ = L−1.
Theorem 2. Suppose that the SGD method with (3) is adopted to solve problem (1), where $F : \mathbb{R}^d \to \mathbb{R}$ is a $\mu$-strongly convex and $L$-smooth function. Let $v_1^{(t)}, \ldots, v_m^{(t)}$ be the local gradients at the $t$-th iteration, and $\tilde{v}_1^{(t)}, \ldots, \tilde{v}_m^{(t)}$ be the corresponding attacked copies. If the aggregation rule $\mathrm{Aggr}(\cdot)$ is $\Delta$-Byzantine resilient, and the step size $\gamma$ satisfies $\gamma \le L^{-1}$, then
$$\mathbb{E}\big[\|x^{(t)} - x^*\|\big] \le \eta^t \|x^{(0)} - x^*\| + \frac{1 + \sqrt{1 - \gamma\mu}}{\mu} \sqrt{\Delta},$$
where $x^*$ is the exact solution, and
$$\eta = \sqrt{1 - \gamma\mu} \ge \sqrt{1 - \frac{\mu}{L}}.$$
Proof. See the Appendix.
We remark that if we allow a general $\alpha \in [0, 1]$ in the proof of Theorem 2 and adjust the corresponding step size to $\gamma \le 2(1 - \alpha)L^{-1}$, the decay ratio $\eta$ is then bounded through
$$\eta = \sqrt{1 - 2\alpha\gamma\mu} \ge \sqrt{1 - \frac{4\alpha(1 - \alpha)\mu}{L}} \ge \sqrt{1 - \frac{\mu}{L}}.$$
The optimal lower bound is achieved when $\alpha = 1/2$ and $\gamma = L^{-1}$.
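To make the analyzed update concrete, here is a minimal sketch of one synchronous SGD iteration with a Byzantine resilient aggregation rule; `worker_grad_fns` is a hypothetical stand-in for the communication with the $m$ workers, and any rule such as the trimmed mean above can be plugged in as `aggregate`.

```python
import numpy as np

def robust_sgd_step(x, worker_grad_fns, aggregate, gamma):
    """One synchronous SGD step with a robust aggregation rule.

    x               : current parameter vector, shape (d,).
    worker_grad_fns : list of m callables; each returns that worker's
                      (possibly Byzantine) stochastic gradient at x.
    aggregate       : robust rule, e.g. lambda V: trimmed_mean(V, b).
    gamma           : step size, with gamma <= 1/L as in Theorem 2.
    """
    V = np.stack([grad(x) for grad in worker_grad_fns])  # (m, d) gradients
    return x - gamma * aggregate(V)  # descend along the aggregated direction
```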
6 EXPERIMENTS
In this section we verify the robustness and convergence of the Tmean aggregation rule through experiments. We use a multi-layer perceptron (MLP) with two hidden layers for handwritten digit classification on the MNIST data set with m = 20 worker processes; we repeat each experiment ten times and take the average value. Table 1 shows the detailed network structure of the MLP used in our experiments. We then run a recognition task with a convolutional neural network on the Cifar10 dataset, again repeating each experiment ten times and reporting the averaged value. In our experiments, we use Mean(·) without Byzantine attack as a reference. We use top-1 or top-3 accuracy on the testing sets as the evaluation metric. Table 2 lists the details of the data sets and the default hyper-parameters for the corresponding models. Our experiments are implemented in Python 3.7.4 with the packages TensorFlow 2.4.1 and NumPy 1.19.2, on a physical node with an Intel i7-9700 CPU, 32 GB of memory and two NVIDIA GeForce GTX 1080 Ti GPUs.
6.1 EXPERIMENT SETTING
In our experiment setting, 6 out of the 20 gradient vectors are Byzantine (i.e., m = 20, q = 6), and different values of b lead to different convergence behavior. For the MNIST dataset, we train with both b = 6 and b = 8; for the Cifar10 dataset, we take b = 8 for the recognition task.
6.2 GAUSSIAN ATTACK
In the Gaussian attack experiment, each Byzantine gradient vector is replaced by a Gaussian random vector with zero mean and an isotropic covariance matrix with standard deviation 200.
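A minimal sketch of this attack on a stacked gradient matrix follows; the (m, d) layout and the index set of Byzantine workers are conventions carried over from the earlier sketches.

```python
import numpy as np

def gaussian_attack(V, byzantine_idx, sigma=200.0, rng=None):
    """Replace selected workers' gradients with isotropic Gaussian noise.

    V             : (m, d) matrix of honest gradients.
    byzantine_idx : indices of the q Byzantine workers.
    sigma         : standard deviation of the attack noise (200 here).
    """
    rng = np.random.default_rng() if rng is None else rng
    V = V.copy()
    V[byzantine_idx] = rng.normal(0.0, sigma,
                                  size=(len(byzantine_idx), V.shape[1]))
    return V
```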
From Table 3, for the MNIST dataset we find that, compared with Mean(·) without Byzantine attack, Tmean_b(·) works well and is more robust when b = 8. Without trimming, however, Mean(·) is not robust under Byzantine attack. Krum(·) is also robust, although somewhat weaker than Tmean(·). For the Cifar10 dataset, we set b = 8. From Table 4, the aggregation rule based on the plain Mean(·) is not as robust as the other aggregation rules, but after trimming the mean, the output is more robust than before.
6.3 OMNISCIENT ATTACK
In this attack we assume that the attackers (Byzantine nodes) know the correct gradients of all workers. Each Byzantine gradient vector is replaced by the negative sum of all the correct gradients. In other words, this attack attempts to push the parameter server in the opposite direction.
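A sketch of this attack, under the same conventions as before:

```python
import numpy as np

def omniscient_attack(V, byzantine_idx):
    """Set each Byzantine gradient to the negative sum of the correct
    gradients, pushing the aggregate toward the opposite direction.

    V             : (m, d) matrix of honest gradients.
    byzantine_idx : indices of the q Byzantine workers.
    """
    honest_idx = np.setdiff1d(np.arange(V.shape[0]), byzantine_idx)
    V = V.copy()
    V[byzantine_idx] = -V[honest_idx].sum(axis=0)  # broadcast to all q rows
    return V
```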
For the MNIST dataset, we see from Table 5 that when the Byzantine nodes deviate the gradient update as far as possible from the true update direction, Mean(·) cannot remain robust. In contrast, Krum(·) is more robust, while Tmean(·) shows only weak robustness. Still, because the trimmed mean aggregates only the middle portion of the sorted values, removing the head and tail of the data, its accuracy does not drop as much as that of the plain mean when it is attacked.
For the Cifar10 dataset, Table 6 shows that Krum(·) is more robust than the others. Mean-based aggregation behaves badly under the omniscient attack. If we trim the Mean(·), the output clearly becomes more robust; however, under this attack, Tmean(·) is still not as robust as Krum(·).
6.4 GENERAL ATTACK WITH MULTIPLE SERVERS
We evaluate the robust aggregation rules under a more general and realistic type of attack. It is common to partition the parameters into disjoint subsets and to use multiple server nodes to store and aggregate them. We assume that the parameters are evenly partitioned and assigned to the server nodes. The attacker picks one server and manipulates any floating-point number held by it by multiplying it by −10^20 with probability 0.05%. Because the attacker manipulates values at random, in some iterations the assumptions/prerequisites of the robust aggregation rules are broken, which crashes the training. Such an attack requires less global information and can be concentrated on one single server, which makes it more realistic and easier to implement.
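A sketch of this attack on one server's parameter shard; the shard layout and the probability/scale arguments simply mirror the description above.

```python
import numpy as np

def random_manipulation_attack(shard, p=0.0005, scale=-1e20, rng=None):
    """Randomly corrupt the parameter shard held by the attacked server.

    Each floating-point entry is multiplied by `scale` (-1e20)
    independently with probability p (0.05%).
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(shard.shape) < p   # entries selected for corruption
    return np.where(mask, shard * scale, shard)
```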
For the MNIST dataset, Table 7 evaluates the performance of all robust aggregation rules under the multiple-server attack. The number of servers is 20. For Krum(·) and Mean(·), we set the estimated Byzantine number q = 6. We find that Krum(·) does not pass this test, and the untrimmed Mean(·) aggregation rule cannot be robust either; in this case they cannot converge to the global optimum. On the other hand, for Tmean(·), different values of b lead to different convergence behavior, and if we take b = 6, it also behaves robustly.
For the Cifar10 dataset, Table 8 shows the advantage of Tmean(·): neither Krum(·) nor Mean(·) is robust, while Tmean(·) keeps the learning robust.
6.5 EXPERIMENT CONCLUSION
From the above experiments, we find that it is difficult for Mean(·) to remain robust when attacked by any Byzantine node. When multiple server nodes are used at the same time, Krum(·) also becomes vulnerable. However, after the data is properly trimmed, Tmean(·) maintains robustness. Under omniscient attacks, Tmean(·) is not as robust as Krum(·), but it still improves a lot over Mean(·) in this case. The robustness of Tmean(·) is also related to the value of b: we find that when b is closer to m/2, it achieves stronger robustness.
7 CONCLUSIONS
We analyzed the Tmean(·) aggregation rule and demonstrated that it can make the FL model robust to Byzantine attacks. Using three different Byzantine attacks, we showed that learning can be handled much more robustly after the trimming-and-averaging operation on the collected gradients.
This work focuses on Tmean(·). In future work we plan to refine the trimming range of Tmean(·) and prove its convergence in a non-convex setting. We also plan to add momentum on top of Tmean(·), or add constraints to further strengthen its robustness. | 1. What is the focus of the paper regarding Byzantine resilient distributed learning?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. Do you have any concerns or questions about the theoretical convergence analysis?
4. How does the reviewer assess the novelty and significance of the paper's contribution?
5. Are there any minor issues or suggestions for improvement in the paper? | Summary Of The Paper
Review | Summary Of The Paper
This paper considers the trimmed mean function as the aggregation rule for Byzantine resilient distributed learning. The authors provide a theoretical convergence analysis for strongly convex objectives. Besides, the authors empirically compare the trimmed mean with Krum and the plain average.
Review
Major Issues
The trimmed mean studied in this paper has already been thoroughly analyzed in (Yin et al., 2018). Their analysis is much more general, covering the strongly convex, non-strongly convex, and non-convex cases, whereas this paper only proves the strongly convex case. Besides, Yin et al. (2018) also presented near-optimal statistical performance while this paper did not; e.g., Theorem 2 does not reflect how the actual number of Byzantine workers q influences the rate.
The studied attacks are very weak. The proposed method is not robust against SOTA attacks (Xie et al., 2020; Baruch et al., 2019).
Lack of baselines. There are abundant aggregation rules for Byzantine resilient learning. Even the simple median based algorithms are not thoroughly compared.
Obvious mistakes
There are many obvious mistakes in the paper
Last line of page 1, why is the geometric median a variant of mean-based aggregation rule?
Page 3, "Based on these attack methods, there exist defense methods, such as robust aggregation, secure aggregation, encryption, etc". The secure aggregation and encryption are not defense to these attacks.
Theorem 2: 0 < L ≤ µ?
Minor Issues
The reference style should be improved
e.g. paragraph 2 of page 1: "...state machine replication strategy (Alistarh et al. (2018))" should be "...state machine replication strategy (Alistarh et al., 2018)".
"... on the data model Bottou (2010)" should be "... on the data model (Bottou 2010)"
"This is the problem not fully addressed in previous works" should be "This problem is not fully addressed in previous works"
Page 1 paragraph 3, the "single miner attack" and "multiple server attacks" seem not very common; could you provide a definition or some explanation?
Figure 1 (a) "Master serves" => "Master servers", caption "miner serves" => "miner servers".
Page 3, "Zhao et al. expanded" no year information.
Page 3, "gradient leakage (Bagdasaryan et al. (2020)", however, the reference is backdoor attack, not the graident leakage attack.
Page 4, Definition 1 needs to be reformulated.
In many occurrences, the authors say they trimmed the data while they only trimmed the gradients.
====
Reference
Dong Yin, Yudong Chen, Ramchandran Kannan, and Peter Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In International Conference on Machine Learning, pp. 5650–5659. PMLR, 2018.
Cong Xie, Oluwasanmi Koyejo, and Indranil Gupta. Fall of empires: Breaking Byzantine-tolerant SGD by inner product manipulation. In Uncertainty in Artificial Intelligence, pp. 261–270. PMLR, 2020.
Moran Baruch, Gilad Baruch, and Yoav Goldberg. A little is enough: Circumventing defenses for distributed learning. arXiv preprint arXiv:1902.06156, 2019.
ICLR | Title
Active Neural Localization
Abstract
Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose “Active Neural Localizer”, a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to localize accurately while minimizing the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model’s capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine.
1 INTRODUCTION
Localization is the problem of estimating the position of an autonomous agent given a map of the environment and agent observations. The ability to localize under uncertainty is required by autonomous agents to perform various downstream tasks such as planning, exploration and target navigation. Localization is considered one of the most fundamental problems in mobile robotics (Cox & Wilfong, 1990; Borenstein et al., 1996). Localization is useful in many real-world applications such as autonomous vehicles, factory robots and delivery drones.
In this paper we tackle the global localization problem, where the initial position of the agent is unknown. Despite the long history of research, global localization is still an open problem, and there are not many methods that can be learnt from data in an end-to-end manner; most instead require significant hand-tuning and feature selection by domain experts. Another limitation of the majority of localization approaches to date is that they are passive, meaning that they passively estimate the position of the agent from the stream of incoming observations and do not have the ability to decide the actions taken by the agent. The ability to decide the actions can result in faster as well as more accurate localization, as the agent can learn to navigate quickly to unambiguous locations in the environment.
We propose “Active Neural Localizer”, a neural network model capable of active localization using raw pixel-based observations and a map of the environment (see footnotes 1 and 2). Based on the Bayesian filtering algorithm for localization (Fox et al., 2003), the proposed model contains a perceptual model to estimate the likelihood of the agent’s observations, a structured component for representing the belief, multiplicative interactions to propagate the belief based on observations, and a policy model over the current belief to localize accurately while minimizing the number of steps required for localization. The entire model is fully differentiable and trained using reinforcement learning, allowing the perceptual model and the policy model to be learnt simultaneously in an end-to-end fashion. A variety
1 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization 2 The code is available at https://github.com/devendrachaplot/Neural-Localization
of 2D and 3D simulation environments are used for testing the proposed model. We show that the Active Neural Localizer is capable of generalizing to not only unseen maps in the same domain but also across domains.
2 RELATED WORK
Localization has been an active field of research for more than two decades. In the context of mobile autonomous agents, localization can refer to two broad classes of problems: local localization and global localization. Local localization methods assume that the initial position of the agent is known and aim to track the position as it moves. A large number of localization methods tackle only the problem of local localization. These include classical methods based on Kalman filters (Kalman et al., 1960; Smith et al., 1990), geometry-based visual odometry methods (Nistér et al., 2006) and, most recently, learning-based visual odometry methods which learn to predict the motion between consecutive frames using recurrent convolutional neural networks (Clark et al., 2017b; Wang et al., 2017). Local localization techniques often make restrictive assumptions about the agent’s location. Kalman filters assume Gaussian-distributed initial uncertainty, while the visual odometry-based methods only predict the relative motion between consecutive frames or with respect to the initial frame using camera images. Consequently, they are unable to tackle the global localization problem where the initial position of the agent is unknown. This also results in their inability to handle localization failures, which consequently leads to the requirement of constant human monitoring and intervention (Burgard et al., 1998).
Global localization is more challenging than the local localization problem and is also considered the basic precondition for truly autonomous agents by Burgard et al. (1998). Among the methods for global localization, the proposed method is closest to Markov Localization (Fox, 1998). In contrast to local localization approaches, Markov Localization computes a probability distribution over all the possible locations in the environment. The probability distribution, also known as the belief, is represented using piecewise constant functions (or histograms) over the state space and propagated using the Markov assumption. Other methods for global localization include Multi-hypothesis Kalman filters (Cox & Leonard, 1994; Roumeliotis & Bekey, 2000), which use a mixture of Gaussians to represent the belief, and Monte Carlo Localization (Thrun et al., 2001), which uses a set of samples (or particles) to represent the belief.
All the above localization methods are passive, meaning that they are not capable of deciding the actions needed to localize more accurately and efficiently. There has been very little research on active localization approaches. Active Markov Localization (Fox et al., 1998) is the active variant of Markov Localization where the agent chooses actions greedily to maximize the reduction in the entropy of the belief. Jensfelt & Kristensen (2001) presented the active variant of Multi-hypothesis Kalman filters where actions are chosen to optimize the information gathering for localization. Both of these methods do not learn from data and have very high computational complexity. In contrast, we demonstrate that the proposed method is several orders of magnitude faster while being more accurate, and is capable of learning from data and generalizing well to unseen environments.
Recent work has also made progress towards end-to-end localization using deep learning models. Mirowski et al. (2016) showed that a stacked LSTM can do reasonably well at self-localization. The model consisted of a deep convolutional network which, at each time step, took in state observations, the reward, the agent velocity and the previous actions. To improve performance, the model also used several auxiliary objectives such as depth prediction and loop closure detection. The agent was successful at navigation tasks within complex 3D mazes. Additionally, the hidden states learned by the model were shown to be quite accurate at predicting the agent position, even though the LSTM was not explicitly trained to do so. Other works have looked at doing end-to-end relocalization more explicitly. One such method, called PoseNet (Kendall et al., 2015), used a deep convolutional network to implicitly represent the scene, mapping a single monocular image to a 3D pose (position and orientation). This method is limited by the fact that it requires a new PoseNet trained on each scene, since the map is represented implicitly by the convnet weights, and it is unable to transfer to scenes not observed during training. An extension of PoseNet, called VidLoc (Clark et al., 2017a), utilized temporal information to make more accurate pose estimates by passing a bidirectional LSTM over each monocular image in a sequence, enabling a trainable smoothing filter over the pose estimates. Both these methods lack a straightforward way to utilize past map data to do localization in a new environment. In contrast, we demonstrate that our method is capable of generalizing to new maps that were not previously seen during training time.
3 BACKGROUND: BAYESIAN FILTERING
Bayesian filters (Fox et al., 2003) are used to probabilistically estimate a dynamic system's state using observations from the environment and actions taken by the agent. Let $y_t$ be the random variable representing the state at time $t$. Let $s_t$ be the observation received by the agent and $a_t$ be the action taken by the agent at time step $t$. At any point in time, the probability distribution over $y_t$ conditioned on past observations $s_{1:t-1}$ and actions $a_{1:t-1}$ is called the belief, $\mathrm{Bel}(y_t) = p(y_t \mid s_{1:t-1}, a_{1:t-1})$. The goal of Bayesian filtering is to estimate the belief sequentially. For the task of localization, $y_t$ represents the location of the agent, although in general it can represent the state of any object(s) in the environment. Under the Markov assumption, the belief can be recursively computed using the following equations:
$$\overline{\mathrm{Bel}}(y_t) = \sum_{y_{t-1}} p(y_t \mid y_{t-1}, a_{t-1})\, \mathrm{Bel}(y_{t-1}), \qquad \mathrm{Bel}(y_t) = \frac{1}{Z}\, \mathrm{Lik}(s_t)\, \overline{\mathrm{Bel}}(y_t),$$
where $\overline{\mathrm{Bel}}(y_t)$ denotes the belief before incorporating the observation, $\mathrm{Lik}(s_t) = p(s_t \mid y_t)$ is the likelihood of observing $s_t$ given that the location of the agent is $y_t$, and $Z = \sum_{y_t} \mathrm{Lik}(s_t)\, \overline{\mathrm{Bel}}(y_t)$ is the normalization constant. The likelihood of the observation, $\mathrm{Lik}(s_t)$, is given by the perceptual model, and $p(y_t \mid y_{t-1}, a_{t-1})$, i.e. the probability of landing in a state $y_t$ from $y_{t-1}$ under the action $a_{t-1}$ taken by the agent, is specified by a state transition function $f_t$. The belief at time $t = 0$, $\mathrm{Bel}(y_0)$, also known as the prior, can be specified based on prior knowledge about the location of the agent. For global localization, the prior belief is typically uniform over all possible locations of the agent, as the agent position is completely unknown.
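A minimal sketch of this two-step recursion on a discretized grid belief (the tensor shapes follow the Belief Map representation introduced in Section 4.2; `transition` is a placeholder for the state transition function $f_t$):

```python
import numpy as np

def bayes_filter_step(belief, likelihood, transition, action):
    """One predict/update cycle of a discrete Bayes filter for localization.

    belief     : (O, M, N) tensor holding Bel(y_{t-1}).
    likelihood : (O, M, N) tensor holding Lik(s_t) from the perceptual model.
    transition : callable mapping (belief, action) to the predicted belief,
                 i.e. applying p(y_t | y_{t-1}, a_{t-1}).
    action     : the action a_{t-1} taken by the agent.
    """
    predicted = transition(belief, action)  # prediction step
    posterior = predicted * likelihood      # multiplicative correction
    return posterior / posterior.sum()      # normalize by Z
```

For global localization, the recursion starts from a uniform prior, e.g. `np.full((O, M, N), 1.0 / (O * M * N))`.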
4 METHODS
4.1 PROBLEM FORMULATION
Let $s_t$ be the observation received by the agent and $a_t$ the action taken by the agent at time step $t$. Let $y_t$ be a random variable denoting the state of the agent, which includes its x-coordinate, y-coordinate and orientation. In addition to the agent’s past observations and actions, a localization algorithm requires some information about the map, such as the map design. Let the information about the map be denoted by $M$. In the problem of active localization, we have two goals: (1) similar to the standard state estimation problem in the Bayesian filter framework, to estimate the belief $\mathrm{Bel}(y_t)$, i.e. the probability distribution of $y_t$ conditioned on past observations and actions and the information about the map, $\mathrm{Bel}(y_t) = p(y_t \mid s_{1:t}, a_{1:t-1}, M)$; and (2) to learn a policy $\pi(a_t \mid \mathrm{Bel}(y_t))$ for localizing accurately and efficiently.
4.2 PROPOSED MODEL
Representation of Belief and Likelihood Let $y_t$ be a tuple $\langle A_o, A_x, A_y \rangle$, where $A_x$, $A_y$ and $A_o$ denote the agent's x-coordinate, y-coordinate and orientation respectively. Let $M \times N$ be the map size, and $O$ be the number of possible orientations of the agent. Then, $A_x \in [1, M]$, $A_y \in [1, N]$ and $A_o \in [1, O]$. The belief is represented as an $O \times M \times N$ tensor, where the $(i, j, k)$-th element denotes the belief of the agent being in the corresponding state, $\mathrm{Bel}(y_t = \langle i, j, k \rangle)$. This kind of grid-based representation of belief is popular among localization methods as it offers several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). Let $\mathrm{Lik}(s_t) = p(s_t \mid y_t)$ be the likelihood of observing $s_t$ given that the location of the agent is $y_t$. The likelihood of an observation in a certain state is also represented by an $O \times M \times N$ tensor, where the $(i, j, k)$-th element denotes the likelihood of the current observation $s_t$ given that the agent's state is $y_t = \langle i, j, k \rangle$. We refer to these tensors as the Belief Map and the Likelihood Map in the rest of the paper.
Model Architecture The overall architecture of the proposed model, Active Neural Localizer (ANL), is shown in Figure 1. It has two main components: the perceptual model and the policy model. At each timestep $t$, the perceptual model takes in the agent's observation $s_t$ and outputs the Likelihood Map $\mathrm{Lik}(s_t)$. The belief is propagated through time by taking an element-wise dot product with the Likelihood Map at each timestep. Let $\overline{\mathrm{Bel}}(y_t)$ be the Belief Map at time $t$ before observing $s_t$. Then the belief after observing $s_t$, denoted by $\mathrm{Bel}(y_t)$, is calculated as follows:
$$\mathrm{Bel}(y_t) = \frac{1}{Z}\, \overline{\mathrm{Bel}}(y_t) \odot \mathrm{Lik}(s_t),$$
where $\odot$ denotes the Hadamard product and $Z = \sum_{y_t} \mathrm{Lik}(s_t)\, \overline{\mathrm{Bel}}(y_t)$ is the normalization constant.
The Belief Map, after observing $s_t$, is passed through the policy model to obtain the probability of taking any action, $\pi(a_t \mid \mathrm{Bel}(y_t))$. The agent takes an action $a_t$ sampled from this policy. The Belief Map at time $t+1$ is calculated using the transition function ($f_T$), which updates the belief at each location according to the action taken by the agent, i.e. $p(y_{t+1} \mid y_t, a_t)$. The transition function is similar to the egomotion model used by Gupta et al. (2017) for mapping and planning. For 'turn left' and 'turn right' actions, the transition function simply swaps the belief in each orientation. For the 'move forward' action, the belief values are shifted one unit according to the orientation. If the next unit is an obstacle, the value does not shift, indicating a collision (see Appendix B for more details, and the sketch below).
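A sketch of this deterministic transition; the orientation ordering (North, East, South, West) and the boolean `obstacle` map are our own conventions, and the map border is assumed to consist of obstacle cells so that the wrap-around of `np.roll` never carries probability mass across edges.

```python
import numpy as np

def transition(belief, action, obstacle):
    """Deterministic transition f_T over an (O, M, N) belief tensor.

    belief   : (O, M, N) belief, orientations ordered N, E, S, W (O = 4).
    obstacle : (M, N) boolean map of blocked cells (border assumed blocked).
    """
    if action == "turn_left":
        return np.roll(belief, -1, axis=0)  # relabel belief across orientations
    if action == "turn_right":
        return np.roll(belief, 1, axis=0)
    # 'move forward': per orientation, mass moves one cell ahead unless the
    # cell ahead is an obstacle, in which case it stays put (collision).
    steps = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}  # N, E, S, W
    out = np.zeros_like(belief)
    for o, (dr, dc) in steps.items():
        ahead_blocked = np.roll(obstacle, (-dr, -dc), axis=(0, 1))  # wall ahead?
        out[o] = belief[o] * ahead_blocked                      # blocked mass stays
        out[o] += np.roll(belief[o] * ~ahead_blocked, (dr, dc), axis=(0, 1))
    return out
```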
4.3 MODEL COMPONENTS
Perceptual Model The perceptual model computes the feature representation from the agent's observation and the states given in the map information. The likelihood of each state in the map information is calculated by taking the cosine similarity of the feature representation of the agent's observation with the feature representation of the state. Cosine similarity is commonly used for computing the similarity of representations (Nair & Hinton, 2010; Huang et al., 2013) and has also been used in the context of localization (Chaplot et al., 2016). The benefits of using cosine similarity over the dot product have been highlighted by Chunjie et al. (2017).
In the 2D environments, the observation is used to compute a one-hot vector of the same dimension representing the depth, which is used directly as the feature representation. The resultant Likelihood Map has uniform non-zero probabilities for all locations having the observed depth and zero probabilities everywhere else. For the 3D environments, the feature representation of each observation is obtained using a trainable deep convolutional network (LeCun et al., 1995) (see Appendix B for architecture details). Figure 2 shows examples of the agent observation and the corresponding Likelihood Map computed in both 2D and 3D environments. The simulation environments are described in detail in Section 5.1.
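A sketch of this similarity computation; the feature dimension F, the stacking of per-state features, and the clipping of negative similarities to zero are our own conventions for the sketch.

```python
import numpy as np

def likelihood_map(obs_feat, state_feats, eps=1e-8):
    """Likelihood Map from cosine similarity between observation and states.

    obs_feat    : (F,) feature vector of the current observation.
    state_feats : (O, M, N, F) feature vectors for every state in the map
                  (from depth in 2D, or a convnet over memory images in 3D).
    """
    obs = obs_feat / (np.linalg.norm(obs_feat) + eps)
    states = state_feats / (np.linalg.norm(state_feats, axis=-1,
                                           keepdims=True) + eps)
    sim = np.einsum("omnf,f->omn", states, obs)  # cosine similarity per state
    return np.clip(sim, 0.0, None)  # keep the scores usable as likelihoods
```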
Policy Model The policy model gives the probability of the next action given the current belief of the agent. It is trained using reinforcement learning, specifically the Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) (see Appendix A for a brief background on reinforcement learning). The Belief Map is stacked with the map design matrix and passed through 2 convolutional layers followed by a fully-connected layer to predict the policy as well as the value function. The policy and value losses are computed using the rewards observed by the agent and backpropagated through the entire model (see Appendix B for architecture details).
5 EXPERIMENTS
As described in Section 4, the agent's state $y_t$ is a tuple $\langle A_o, A_x, A_y \rangle$, where $A_x$, $A_y$ and $A_o$ denote the agent's x-coordinate, y-coordinate and orientation respectively, with $A_x \in [1, M]$, $A_y \in [1, N]$ and $A_o \in [1, O]$, where $M \times N$ is the map size and $O$ is the number of possible orientations of the agent. We use a variety of domains for our experiments. The values of $M$ and $N$ vary across domains but $O = 4$ is fixed. The possible actions in all domains are 'move forward', 'turn left' and 'turn right'. The turn angle is fixed at $360/O = 90$ degrees. This ensures that the agent is always in one of the 4 orientations: North, South, East and West. Note that although we use $O = 4$ in all our experiments, our method is defined for any value of $O$. At each time step, the agent receives an intermediate reward equal to the maximum probability of being in any state, $r_t = \max_{y_t} \mathrm{Bel}(y_t)$. This encourages the agent to reduce the entropy of the Belief Map in order to localize as fast as possible. We observed that the agent converges to similar performance without the intermediate reward, but it helps in speeding up training. At the end of the episode, the prediction is the state with the highest probability in the Belief Map. If the prediction is correct, i.e. $y^* = \arg\max_{y_t} \mathrm{Bel}(y_t)$ where $y^*$ is the true state of the agent, then the agent receives a positive reward of 1. Please refer to Appendix B for more details about training and hyper-parameters. The metric for evaluation is accuracy (Acc), which refers to the ratio of episodes where the agent's prediction was correct, over 1000 episodes. We also report the total runtime of the method, in seconds, taken to evaluate 1000 episodes.
5.1 SIMULATION ENVIRONMENTS
Maze 2D In the Maze2D environment, maps are represented by a binary matrix, where 0 denotes an obstacle and 1 denotes free space. The map designs are generated randomly using Kruskal's algorithm (Kruskal, 1956). The agent's observation in 2D environments is the series of pixels in front of the agent. For a map size of $M \times N$, the agent's observation is an array of size $\max(M, N)$ containing the pixel values in front of the agent. The view of the agent is obscured by obstacles, so all pixel values behind the first obstacle are treated as 0. The information about the map, $M$, received by the agent is the matrix representing the map design. Note that the observation at any state in the map can be computed using the map design. The top row in Figure 2 shows examples of the map design and the agent's observation in this environment.
The 2D environments provide ideal conditions for Bayesian filtering due to the lack of observation or motion noise. The experiments in the 2D environments are designed to evaluate and quantify the effectiveness of the policy learning model in ideal conditions. The size of the 2D environments can also be varied to test the scalability of the policy model. This design is similar to previous experimental settings, such as those of Tamar et al. (2016) and Karkus et al. (2017), for learning a target-driven navigation policy in grid-based 2D environments.
3D Environments In the 3D environments, the observation is an RGB image of the first-person view of the world as seen by the agent. The x-y coordinates of the agent are continuous variables,
unlike the discrete grid-based coordinates in the 2D environments. The matrix denoting the belief of the agent is discretized, meaning each pixel in the Belief Map corresponds to a range of states in the environment. At the start of every episode, the agent is spawned at a random location in this continuous range, as opposed to a discrete location corresponding to a pixel in the belief map for 2D environments. This makes localization much more challenging than in the 2D environments. Apart from the map design, the agent also receives a set of images of the visuals seen by the agent at a few locations uniformly placed around the map in all 4 orientations. These images, called memory images, are required by the agent for global localization. They are not very expensive to obtain in real-world environments. We use two types of 3D environments:
Maze3D: Maze3D consists of virtual 3D maps built using the Doom game engine. We use the ViZDoom API (Kempka et al., 2016) to interact with the game engine. The map designs are generated using Kruskal's algorithm, and Doom maps are created based on these map designs using Action Code Scripts (see footnote 3). The design of the map is identical to the Maze2D map designs, with the difference that the paths are much thicker than the walls, as shown in Figure 2. The texture of each wall, floor and ceiling can be controlled, which allows us to create maps with different numbers of 'landmarks'. Landmarks are defined to be walls with a unique texture.
Unreal3D: Unreal3D is a photo-realistic simulation environment built using the Unreal game engine. We use the AirSim API (Shah et al., 2017) to interact with the game engine. The environment consists of a modern office space, as shown in Figure 2, obtained from the Unreal Engine Marketplace (see footnote 4).
The 3D environments are designed to test the ability of the proposed model to jointly learn the perceptual model along with the policy model as the agent needs to handle raw pixel based input while learning a policy. The Doom environment provides a way to test the model in challenging ambiguous environments by controlling the number of landmarks in the environment, while the Unreal Environment allows us to evaluate the effectiveness of the model in comparatively more realistic settings.
5.2 BASELINES
Markov Localization (Fox, 1998) is a passive probabilistic approach based on Bayesian filtering. We use a geometric variant of Markov Localization where the state space is represented by a fine-grained, regularly spaced grid, called position probability grids (Burgard et al., 1996), similar to the state space in the proposed model. Grid-based state space representations are known to offer several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). In passive localization approaches, the actions taken by the agent are random.
Active Markov Localization (AML) (Fox et al., 1998) is the active variant of Markov Localization where the actions taken by the agent are chosen to maximize the ratio of the 'utility' of the action to the 'cost' of the action. The 'utility' of an action $a$ at time $t$ is defined as the expected reduction in the uncertainty of the agent state after taking the action $a$ at time $t$ and making the next observation at time $t+1$: $U_t(a) = H(y_t) - \mathbb{E}_a[H(y_{t+1})]$, where $H(y)$ denotes the entropy of the belief, $H(y) = -\sum_y \mathrm{Bel}(y) \log \mathrm{Bel}(y)$, and $\mathbb{E}_a[H(y_{t+1})]$ denotes the expected entropy of the agent after taking the action $a$ and observing $y_{t+1}$. The 'cost' of an action refers to the time needed to perform the action. In our environment, each action takes a single time step, thus the cost is constant.

3 https://en.wikipedia.org/wiki/Action_Code_Script
4 https://www.unrealengine.com/marketplace/small-office-prop-pack
We define a generalized version of the AML algorithm. The utility can be maximized over a sequence of actions rather than just a single action. Let $a^* \in A^{n_l}$ be the action sequence of length $n_l$ that maximizes the utility at time $t$, $a^* = \arg\max_a U_t(a)$ (where $A$ denotes the action space). After computing $a^*$, the agent need not take all the actions in $a^*$ before maximizing the utility again, because new observations made while executing $a^*$ might affect the utility of the remaining actions. Let $n_g \in \{1, 2, \ldots, n_l\}$ be the number of actions taken by the agent, denoting the greediness of the algorithm. Due to the high computational complexity of calculating the utility, the agent performs random actions until the belief is concentrated on $n_m$ states (ignoring beliefs under a certain threshold). The complexity of the generalized AML is $O(n_m (n_l - n_g) |A|^{n_l})$. Given sufficient computational power, the optimal sequence of actions can be calculated with $n_l$ equal to the length of the episode, $n_g = 1$, and $n_m$ equal to the size of the state space.
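The sketch below spells out the utility computation for a candidate action sequence; `simulate_step` is a hypothetical helper that enumerates (probability, next belief) pairs over the possible observations, which is exactly what makes the $|A|^{n_l}$ factor in the complexity above expensive.

```python
import numpy as np

def belief_entropy(belief):
    """H(y) = -sum_y Bel(y) log Bel(y), skipping zero-probability states."""
    p = belief[belief > 0]
    return -(p * np.log(p)).sum()

def utility(belief, action_seq, simulate_step):
    """AML utility U_t(a) of an action sequence a, as a sketch.

    simulate_step : hypothetical callable mapping (belief, action) to a
                    list of (probability, next_belief) pairs, one per
                    possible observation, so that the expectation
                    E_a[H(y_{t+1})] can be evaluated explicitly.
    """
    outcomes = [(1.0, belief)]
    for a in action_seq:  # branch over possible observations at every step
        outcomes = [(p * q, b2)
                    for p, b in outcomes
                    for q, b2 in simulate_step(b, a)]
    expected_entropy = sum(p * belief_entropy(b) for p, b in outcomes)
    return belief_entropy(belief) - expected_entropy
```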
In the original AML algorithm, the utility was maximized over single actions, i.e. $n_l = 1$, which also makes $n_g = 1$. The value of $n_m$ used in their experiments is not reported; however, they show an example with $n_m = 6$. We run AML with all possible combinations of the values $n_l \in \{1, 5, 10, 15\}$, $n_g \in \{1, n_l\}$ and $n_m \in \{5, 10\}$, and define two versions: (1) Active Markov Localization (Fast): the generalized AML algorithm using the values of $n_l, n_g, n_m$ that maximize performance while keeping the runtime comparable to ANL, and (2) Active Markov Localization (Slow): the generalized AML algorithm using the values of $n_l, n_g, n_m$ that maximize performance while keeping the runtime for 1000 episodes below 24 hrs (which is the training time of the proposed model) in each environment (see Appendix B for more details on the implementation of AML).
The perceptual model for both Markov Localization and Active Markov Localization needs to be specified separately. For the 2D environments, the perceptual model uses 1-hot vector representation of depth. For the 3D Environments, the perceptual model uses a pretrained Resnet-18 (He et al., 2016) model to calculate the feature representations for the agent observations and the memory images.
5.3 RESULTS
2D Environments For the Maze2D environment, we run all models on mazes of size 7×7, 15×15 and 21×21 with varying episode lengths. We train all the models on randomly generated mazes and test on a fixed set of 1000 mazes (different from the mazes used in training). The results on the Maze2D environment are shown in Table 1. As seen in the table, the proposed model, Active Neural Localizer, outperforms all the baselines on average. The proposed method achieves a higher overall accuracy than AML (Slow) while being 100 times faster. Note that the runtime of AML for 1000 episodes is comparable to the total training time of the proposed model. The long runtime of AML (Slow) makes it infeasible to use in real time in many cases. When AML has a comparable runtime, its performance drops by about 37% (AML (Fast)). We also observe that the difference in the performance of ANL and the baselines is higher for smaller episode lengths. This indicates that ANL is more efficient (meaning it requires fewer actions to localize) in addition to being more accurate.
3D Environments All the mazes in the Maze3D environment are of size 70×70 while the office environment is of size 70×50. The agent location is a continuous value in this range. Each cell roughly corresponds to an area of 40cm×40cm in the real world. The set of memory images corresponds to only about 6% of the total states; the likelihood of the remaining states is obtained by bilinear smoothing. All episodes have a fixed length of 30 actions. Although the size of the Office Map is 70×50, we represent the Likelihood and Belief Maps by 70×70 tensors in order to transfer the model between both 3D environments for domain adaptation. We also add Gaussian noise with 5% standard deviation to all translations in the 3D environments.

[Table 2 fragment displaced here by extraction — Active Neural Localization row: Time 297, 300, 300, 300, 300, 301, 2750, 2699, 905.9, 2756; Acc 0.889, 0.859, 0.852, 0.858, 0.839, 0.871, 0.934, 0.505, 0.826, 0.921 (column headers not recoverable).]
In the Maze3D environment, we vary the difficulty of the environment by varying the number of landmarks. Landmarks are defined to be walls with a unique texture. Each landmark is present only on a single wall in a single cell in the maze grid. All the other walls have a common texture, making the map very ambiguous. We expect landmarks to make localization easier, as the likelihood maps should have a lower entropy when the agent visits a landmark, which consequently should reduce the entropy of the Belief Map. We run experiments with 10, 5 and 0 landmarks. The textures of the landmarks are randomized during training. This technique of domain randomization has been shown to be effective in generalizing to unknown maps within the simulation environment (Lample & Chaplot, 2016) and in transferring from simulation to the real world (Tobin et al., 2017). In each experiment, the agent is trained on a set of 40 mazes and evaluated in two settings: (1) unseen mazes with seen textures: the textures of each wall in the test-set mazes have been seen in the training set, but the map designs of the test-set mazes are unseen; and (2) unseen mazes with unseen textures: both the textures and the map designs are unseen. We test on a set of 20 mazes for each evaluation setting. Figure 3 shows examples for both settings.
In the Unreal3D environment, we test the effectiveness of the model in adapting to dynamic lighting changes. We modified the Office environment using the Unreal Game Engine Editor to create two scenarios: (1) Lights, where all the office lights are switched on; (2) NoLights, where all the office lights are switched off. Figure 3 shows sample agent observations with and without lights at the same locations. To test the model's ability to adapt to dynamic lighting changes, we train the model on the Office map with lights and test it on the same map without lights. The memory images provided to the agent are taken while the lights are switched on. Note that this differs from the experiments on unseen mazes in the Maze3D environment, where the agent is given memory images of the unseen environments.
The results for the 3D environments are shown in Table 2, and an example of the policy execution is shown in Figure 4 (see footnote 5). The proposed model significantly outperforms all baseline models in all evaluation settings with the lowest runtime. We see similar trends in the runtime and accuracy trade-off between the two versions of AML as seen in the 2D results. The absolute performance of AML (Slow) is rather poor in the 3D environments as compared to Maze2D. This is likely due to the decrease in the value of the look-ahead parameter $n_l$ to 3 and the increase in the value of the greediness hyper-parameter $n_g$ to 3, as compared to $n_l = 5$, $n_g = 1$ in Maze2D, in order to ensure runtimes under 24 hrs.
The ANL model performs better in the realistic Unreal environment than in the Maze3D environment, as most scenes in the Unreal environment contain unique landmarks while the Maze3D environment consists of random mazes with the same texture everywhere except at the landmarks. In the Maze3D environment, the model is able to generalize well not only to unseen map designs but also to unseen textures. However, the model does not generalize well to dynamic lighting changes in the Unreal3D environment. This highlights a current limitation of RGB image-based localization approaches as compared to depth-based approaches, as depth sensors are invariant to lighting changes.
5 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization
Domain Adaptation We also test the ability of the proposed model to adapt between different simulation environments. The model trained on the Maze3D is directly tested on the Unreal3D Office Map without any fine-tuning. The results in Table 2 show that the model is able to generalize well to Unreal environment from the Doom Environment. We believe that the policy model generalizes well because the representation of belief and map design is common in all environments and policy model is based only on the belief and the map design, while the perceptual model generalizes well because it has learned to measure the similarity of the current image with the memory images as it was trained on environments with random textures. This property is similar to siamese networks used for one-shot image recognition (Koch et al., 2015).
6 CONCLUSION
In this paper, we proposed a fully-differentiable model for active global localization which uses structured components for Bayes filter-like belief propagation and learns a policy based on the belief to localize accurately and efficiently. This allows the policy and observation models to be trained jointly using reinforcement learning. We showed the effectiveness of the proposed model on a variety of challenging 2D and 3D environments, including a realistic map in the Unreal environment. The results show that our model consistently outperforms the baseline models while being orders of magnitude faster. We also showed that a model trained on random textures in the Doom simulation environment is able to generalize to the photo-realistic Office map in the Unreal simulation environment. While this gives us hope that the model can potentially be transferred to real-world environments, we leave that for future work. The limitation of the model in adapting to dynamic lighting can potentially be tackled by training the model with dynamic lighting in random mazes in the Doom environment. There can be several extensions to the proposed model too. The model can be combined with Neural Map (Parisotto & Salakhutdinov, 2017) to train an end-to-end model for a SLAM-type system, and the architecture can also be utilized for end-to-end planning under uncertainty.
ACKNOWLEDGEMENTS
This work was supported by Apple, IARPA DIVA award D17PC00340, and ONR award N000141512791. The authors would also like to thank Jian Zhang for helping with the Unreal environment and NVIDIA for donating a DGX-1 deep learning machine and providing GPU support.
A BACKGROUND: REINFORCEMENT LEARNING
In the standard reinforcement learning setting (Sutton & Barto, 1998), at each time step $t$, an agent receives an observation $s_t$ from the environment, performs an action $a_t$ and receives a reward $r_t$. The goal is to learn a policy $\pi(a \mid s)$ which maximizes the expected return, i.e. the sum of discounted rewards $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, where $T$ is the time at which the episode terminates and $\gamma \in [0, 1]$ is a discount factor that determines the importance of future rewards.
Reinforcement learning methods can broadly be divided into value-based methods and policy-based methods. Policy-based methods parametrize the policy function, which can be optimized directly to maximize the expected return $\mathbb{E}[R_t]$ (Sutton et al., 2000). While policy-based methods suffer from high variance, value-based methods typically use temporal difference learning, which provides low-variance estimates of the expected return. Actor-critic methods (Barto et al., 1983; Sutton, 1984; Konda & Tsitsiklis, 1999) combine the benefits of both families by estimating both the value function, $V^\pi(s_t; \theta_v)$, and the policy function, $\pi(a_t \mid s_t; \theta)$ (Grondman et al., 2012). The REINFORCE family of algorithms (Williams, 1992) is popular for optimizing the policy function; it updates the policy parameters $\theta$ in the direction of $\nabla_\theta \log \pi(a_t \mid s_t; \theta) R_t$. Since this update is an unbiased estimate of $\nabla_\theta \mathbb{E}[R_t]$, its variance can be reduced by subtracting a baseline function $b_t(s_t)$ from the expected return, giving $\nabla_\theta \log \pi(a_t \mid s_t; \theta)(R_t - b_t(s_t))$. When the estimate of the value function $V^\pi(s_t)$ is used as the baseline, the resultant algorithm is called Advantage Actor-Critic, as the resultant policy gradient is scaled by the estimate of the advantage of the action $a_t$ in state $s_t$, $A(a_t, s_t) = Q(a_t, s_t) - V(s_t)$. The Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) uses a deep neural network to parametrize the policy and value functions and runs multiple parallel threads to update the network parameters.
In this paper, we use the A3C algorithm for all our experiments. We also use entropy regularization for improved exploration as described by (Mnih et al., 2016). In addition, we use the Generalized Advantage Estimator (Schulman et al., 2015) to reduce the variance of the policy gradient updates.
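A simplified sketch of the losses involved (one rollout segment, plain discounted returns rather than GAE, and no bootstrap value for truncated rollouts; in the actual model both losses are backpropagated through the shared network with autograd):

```python
import numpy as np

def a3c_losses(log_probs, values, rewards, entropies, gamma=0.99, beta=0.01):
    """Policy and value losses for one rollout of an advantage actor-critic.

    log_probs, values, rewards, entropies : per-step NumPy arrays.
    gamma : discount factor; beta : entropy regularization weight.
    """
    T = len(rewards)
    returns = np.zeros(T)
    R = 0.0
    for t in reversed(range(T)):      # discounted returns R_t
        R = rewards[t] + gamma * R
        returns[t] = R
    advantages = returns - values     # baseline-subtracted returns A(a_t, s_t)
    policy_loss = -(log_probs * advantages).sum() - beta * entropies.sum()
    value_loss = 0.5 * ((returns - values) ** 2).sum()
    return policy_loss, value_loss
```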
B IMPLEMENTATION DETAILS
B.1 MODEL ARCHITECTURE DETAILS
The perceptual model for the 3D Environments receives RGB images of size 108x60. It consists of 2 Convolutional Layers. The first convolutional layer contains 32 filters of size 8x8 and stride of 4. The second convolutional layer contains 64 filters of size 4x4 with a stride of 2. The convolutional layers are followed by a fully-connected layer of size 512. The output of this fully-connected layer is used as the representation of the image while constructing the likelihood map. Figure 5 shows the architecture of the perceptual model in 3D environments. This architecture is adapted from previous work which is shown to perform well at playing deathmatches in Doom (Chaplot & Lample, 2017).
The policy model consists of two convolutional layers too. For the 2D environments, both the convolutional layers contain 16 filters of size 3 with stride of 1. For the 3D environments, the first convolutional layer contains 16 filters of size 7x7 with a stride of 3 and the second convolutional layer contains 16 filters of size 3x3 with a stride of 1. The convolutional layers are followed by a fully-connected layer of size 256. Figure 6 shows the architecture of the policy model in 3D environments.
We add an action history of length 5 (the last 5 actions) as well as the current time step as input to the policy model. We observed that the action history input avoids the agent getting stuck alternating between 'turn left' and 'turn right' actions, whereas the time step helps in accurately predicting the value function, as the episode lengths are fixed in each environment. Each action in the action history, as well as the current time step, is passed through an embedding layer to get an embedding of size 8. The embeddings of all actions and the time step are concatenated with the 256-dimensional output of the fully-connected layer. The resultant vector is passed through two branches of single fully-connected layers to get the policy (actor layer with 3 outputs) and the value function (critic layer with 1 output).
B.2 HYPER-PARAMETERS AND TRAINING DETAILS
All the models are trained with A3C using Stochastic Gradient Descent with a learning rate of 0.001. We use 8 threads for 2D experiments and 4 threads for 3D experiments. Each thread performed an A3C update after 20 steps. The weight for entropy regularization was 0.01. The discount factor (γ) for reinforcement learning was chosen to be 0.99. The gradients were clipped at 40. All models are trained for 24hrs of wall clock time. All the 2D experiments (including evaluation runtime benchmarks for baselines) were run on Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz and all the 3D experiments were run on Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz. While all the A3C training threads ran on CPUs, the Unreal engine also utilized a NVidia GeForce GTX 1080 GPU. The model with the best performance on the training environment is used for evaluation.
B.3 TRANSITION FUNCTION
The transition function transforms the belief according to the action taken by the agent. For turn actions, the beliefs maps in each orientation are swapped according to the direction of the turn. For the move forward action, all probability values move one cell in the orientation of the agent, except those which are blocked by a wall (indicating a collision). Figure 7 shows sample outputs of the transition function given previous belief and action taken by the agent.
B.4 IMPLEMENTATION DETAILS OF ACTIVE MARKOV LOCALIZATION
In order to make our implementation of generalized AML as efficient as possible, we employ various techniques described by the authors, such as pre-computation and selective computation (Fox et al., 1998), along with other techniques such as hashing of expected entropies for action subsequences. The restrictions on runtime led to $n_l = 1$, $n_g = 1$, $n_m = 5$ in both 2D and 3D environments for AML (Fast), $n_l = 5$, $n_g = 1$, $n_m = 10$ in the 2D environments for AML (Slow), and $n_l = 3$, $n_g = 3$, $n_m = 10$ in the 3D environments for AML (Slow).
The computation of expected entropies requires the expected observations in the future states while rolling out a sequence of actions. While it is possible to calculate these in 2D environments with depth-based observations, it is not possible to do this in 3D environments with RGB image observations. However, for comparison purposes we assume that AML has a perfect model of the environment and provide future observations by rolling out the action sequences in the simulation environment. | 1. What is the main contribution of the paper on active localization using RGB images?
2. What are the strengths and weaknesses of the proposed approach, particularly in comparison to traditional methods?
3. How does the method handle measurement uncertainty and process noise in real-world environments?
4. What are the practical benefits of learning the measurement and policy models beyond what structured Bayesian filtering can offer?
5. How does the method balance the advantages of choosing actions that improve localization versus those in the context of a higher-level task?
6. Can the method adapt to larger test environments than those encountered during training?
7. How does the method utilize past map data to perform localization in new environments?
8. What are the implications of the method's assumption of perfect measurements and absence of process noise in physical deployments?
9. How does the method ensure the resulting distribution is consistent, especially in terms of estimation confidence?
10. What is the computational cost of training the model, and how does it compare to baseline methods? | Review | Review
The paper describes a neural network-based approach to active localization based upon RGB images. The framework employs Bayesian filtering to maintain an estimate of the agent's pose using a convolutional network model for the measurement (perception) function. A convolutional network models the policy that governs the action of the agent. The architecture is trained in an end-to-end manner via reinforcement learning. The architecture is evaluated in 2D and 3D simulated environments of varying complexity and compared favorably to traditional (structured) approaches to passive and active localization.
As the paper correctly points out, there is a large body of work on map-based localization, but relatively little attention has been paid to decision-theoretic formulations of localization, whereby the agent's actions are chosen in order to improve localization accuracy. More recent work instead focuses on the higher-level objective of navigation, whereby any actions taken to improve localization are secondary to the navigation objective. The idea of incorporating learned representations with a structured Bayesian filtering approach is interesting, but its utility could be better motivated. What are the practical benefits of learning the measurement and policy models beyond (i) the temptation to apply neural networks to this problem and (ii) the ability to learn these in an end-to-end fashion? That's not to say that there aren't benefits, but rather that they aren't clearly demonstrated here. Further, the paper seems to assume (as noted below) that there is no measurement uncertainty and, with the exception of the 3D evaluations, no process noise.
The evaluation demonstrates that the proposed method yields estimates that are more accurate according to the proposed metric than the baseline methods, with a significant reduction in computational cost. However, the environments considered are rather small by today's standards and the baseline methods almost 20 years old. Further, the evaluation makes a number of simplifying assumptions, the largest being that the measurements are not subject to noise (the only noise that is present is in the motion for the 3D experiments). This assumption is clearly not valid in practice. Further, it is not clear from the evaluation whether the resulting distribution that is maintained is consistent (e.g., are the estimates over-/under-confident?). This has important implications if the system were to actually be used on a physical system. Further, while the computational requirements at test time are significantly lower than the baselines, the time required for training is likely very large. While this is less of an issue in simulation, it is important for physical deployments. Ideally, the paper would demonstrate performance when transferring a policy trained in simulation to a physical environment (e.g., using diversification, which has proven effective at simulation-to-real transfer).
Comments/Questions:
* The nature of the observation space is not clear.
* Recent related work has focused on learning neural policies for navigation, and any localization-specific actions are secondary to the objective of reaching the goal. It would be interesting to discuss how one would balance the advantages of choosing actions that improve localization with those in the context of a higher-level task (or at least including a cost on actions as with the baseline method of Fox et al.).
* The evaluation that assigns different textures to each wall is unrealistic.
* It is not clear why the space over which the belief is maintained flips as the robot turns and shifts as it moves.
* The 3D evaluation states that a 360 deg view is available. What happens when the agent can only see in one (forward) direction?
* AML includes a cost term in the objective. Did the author(s) experiment with setting this cost to zero?
* The 3D environments rely upon a particular belief size (70 x 70) being suitable for all environments. What would happen if the test environment was larger than those encountered in training?
* The comment that the PoseNet and VidLoc methods "lack a straightforward method to utilize past map data to do localization in a new environment" is unclear.
* The environments that are considered are quite small compared to the domains currently considered for
* Minor: It might be better to move Section 3 into Section 4 after introducing notation (to avoid redundancy).
* The paper should be proofread for grammatical errors (e.g., "bayesian" --> "Bayesian", "gaussian" --> "Gaussian")
UPDATES FOLLOWING AUTHORS' RESPONSE
(Apologies if this is a duplicate. I added a comment in light of the authors' response, but don't see it and so I am updating my review for completeness).
I appreciate the authors's response to the initial reviews and thank them for addressing several of my comments.
RE: Consistency
My concerns regarding consistency remain. For principled ways of evaluating the consistency of an estimator, see Bar-Shalom "Estimation with Applications to Tracking and Navigation".
RE: Measurement/Process Noise
The fact that the method assumes perfect measurements and, with the exception of the 3D experiments, no process noise is concerning, as neither assumption is valid for physical systems. Indeed, it is this noise in particular that makes localization (and its variants) challenging.
RE: Motivation
The response didn't address my comments about the lack of motivation for the proposed method. Is it largely the temptation of applying an end-to-end neural method to a new problem? The paper should be updated to make clear the advantages over traditional approaches to active localization.
ICLR | Title
Active Neural Localization
Abstract
Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose “Active Neural Localizer”, a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to localize accurately while minimizing the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model’s capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine.
1 INTRODUCTION
Localization is the problem of estimating the position of an autonomous agent given a map of the environment and agent observations. The ability to localize under uncertainty is required by autonomous agents to perform various downstream tasks such as planning, exploration and target navigation. Localization is considered one of the most fundamental problems in mobile robotics (Cox & Wilfong, 1990; Borenstein et al., 1996). Localization is useful in many real-world applications such as autonomous vehicles, factory robots and delivery drones.
In this paper we tackle the global localization problem, where the initial position of the agent is unknown. Despite the long history of research, global localization is still an open problem, and few methods have been developed that can be learnt from data in an end-to-end manner; most instead require significant hand-tuning and feature selection by domain experts. Another limitation of the majority of localization approaches to date is that they are passive, meaning that they passively estimate the position of the agent from the stream of incoming observations and do not have the ability to decide the actions taken by the agent. The ability to decide the actions can result in faster as well as more accurate localization, as the agent can learn to navigate quickly to unambiguous locations in the environment.
We propose “Active Neural Localizer”, a neural network model capable of active localization using raw pixel-based observations and a map of the environment¹,². Based on the Bayesian filtering algorithm for localization (Fox et al., 2003), the proposed model contains a perceptual model to estimate the likelihood of the agent’s observations, a structured component for representing the belief, multiplicative interactions to propagate the belief based on observations and a policy model over the current belief to localize accurately while minimizing the number of steps required for localization. The entire model is fully differentiable and trained using reinforcement learning, allowing the perceptual model and the policy model to be learnt simultaneously in an end-to-end fashion. A variety
1 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization 2 The code is available at https://github.com/devendrachaplot/Neural-Localization
of 2D and 3D simulation environments are used for testing the proposed model. We show that the Active Neural Localizer is capable of generalizing to not only unseen maps in the same domain but also across domains.
2 RELATED WORK
Localization has been an active field of research for more than two decades. In the context of mobile autonomous agents, localization can refer to two broad classes of problems: local localization and global localization. Local localization methods assume that the initial position of the agent is known and they aim to track the position as it moves. A large number of localization methods tackle only the problem of local localization. These include classical methods based on Kalman Filters (Kalman et al., 1960; Smith et al., 1990), geometry-based visual odometry methods (Nistér et al., 2006) and, most recently, learning-based visual odometry methods which learn to predict motion between consecutive frames using recurrent convolutional neural networks (Clark et al., 2017b; Wang et al., 2017). Local localization techniques often make restrictive assumptions about the agent’s location. Kalman filters assume Gaussian distributed initial uncertainty, while the visual odometry-based methods only predict the relative motion between consecutive frames or with respect to the initial frame using camera images. Consequently, they are unable to tackle the global localization problem where the initial position of the agent is unknown. This also results in their inability to handle localization failures, which consequently leads to the requirement of constant human monitoring and intervention (Burgard et al., 1998).
Global localization is more challenging than the local localization problem and is also considered the basic precondition for truly autonomous agents by Burgard et al. (1998). Among the methods for global localization, the proposed method is closest to Markov Localization (Fox, 1998). In contrast to local localization approaches, Markov Localization computes a probability distribution over all the possible locations in the environment. The probability distribution, also known as the belief, is represented using piecewise constant functions (or histograms) over the state space and propagated using the Markov assumption. Other methods for global localization include Multi-hypothesis Kalman filters (Cox & Leonard, 1994; Roumeliotis & Bekey, 2000), which use a mixture of Gaussians to represent the belief, and Monte Carlo Localization (Thrun et al., 2001), which uses a set of samples (or particles) to represent the belief.
All the above localization methods are passive, meaning that they are not capable of deciding the actions to localize more accurately and efficiently. There has been very little research on active localization approaches. Active Markov Localization (Fox et al., 1998) is the active variant of Markov Localization where the agent chooses actions greedily to maximize the reduction in the entropy of the belief. Jensfelt & Kristensen (2001) presented the active variant of Multi-hypothesis Kalman filters where actions are chosen to optimise the information gathering for localization. Neither of these methods learns from data, and both have very high computational complexity. In contrast, we demonstrate that the proposed method is several orders of magnitude faster while being more accurate, and is capable of learning from data and generalizing well to unseen environments.
Recent work has also made progress towards end-to-end localization using deep learning models. Mirowski et al. (2016) showed that a stacked LSTM can do reasonably well at self-localization. The model consisted of a deep convolutional network which took in, at each time step, state observations, reward, agent velocity and previous actions. To improve performance, the model also used several auxiliary objectives such as depth prediction and loop closure detection. The agent was successful at navigation tasks within complex 3D mazes. Additionally, the hidden states learned by the models were shown to be quite accurate at predicting agent position, even though the LSTM was not explicitly trained to do so. Other works have looked at doing end-to-end relocalization more explicitly. One such method, called PoseNet (Kendall et al., 2015), used a deep convolutional network to implicitly represent the scene, mapping a single monocular image to a 3D pose (position and orientation). This method is limited by the fact that it requires a new PoseNet trained on each scene, since the map is represented implicitly by the convnet weights, and is unable to transfer to scenes not observed during training. An extension to PoseNet, called VidLoc (Clark et al., 2017a), utilized temporal information to make more accurate estimates of the poses by passing a Bidirectional LSTM over each monocular image in a sequence, enabling a trainable smoothing filter over the pose estimates. Both these methods lack a straightforward method to utilize past map data to do localization in a new environment. In contrast, we demonstrate that our method is capable of generalizing to new maps that were not previously seen during training.
3 BACKGROUND: BAYESIAN FILTERING
Bayesian filters (Fox et al., 2003) are used to probabilistically estimate a dynamic system’s state using observations from the environment and actions taken by the agent. Let yt be the random variable representing the state at time t. Let st be the observation received by the agent and at be the action taken by the agent at time step t. At any point in time, the probability distribution over yt conditioned on past observations s1:t−1 and actions a1:t−1 is called the belief, Bel(yt) = p(yt|s1:t−1, a1:t−1). The goal of Bayesian filtering is to estimate the belief sequentially. For the task of localization, yt represents the location of the agent, although in general it can represent the state of any object(s) in the environment. Under the Markov assumption, the belief can be recursively computed using the following equations:
$$\overline{Bel}(y_t) = \sum_{y_{t-1}} p(y_t \mid y_{t-1}, a_{t-1})\, Bel(y_{t-1}), \qquad Bel(y_t) = \frac{1}{Z}\, Lik(s_t)\, \overline{Bel}(y_t),$$
where Lik(st) = p(st|yt) is the likelihood of observing st given that the location of the agent is yt, and $Z = \sum_{y_t} Lik(s_t)\, \overline{Bel}(y_t)$ is the normalization constant. The likelihood of the observation, Lik(st), is given by the perceptual model, and p(yt|yt−1, at−1), i.e. the probability of landing in a state yt from yt−1 based on the action at−1 taken by the agent, is specified by a state transition function, ft. The belief at time t = 0, Bel(y0), also known as the prior, can be specified based on prior knowledge about the location of the agent. For global localization, the prior belief is typically uniform over all possible locations of the agent, as the agent position is completely unknown.
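For illustration, a minimal NumPy sketch of one predict-correct cycle of this recursion over a discrete state space is given below; the state size S, the uniform transition matrix T and the random likelihood values are illustrative placeholders rather than values from our implementation.

import numpy as np

S = 16                                   # number of discrete states (illustrative)
belief = np.full(S, 1.0 / S)             # uniform prior Bel(y_0) for global localization
T = np.full((S, S), 1.0 / S)             # T[i, j] = p(y_t = i | y_{t-1} = j, a_{t-1}) (placeholder)
lik = np.random.rand(S)                  # Lik(s_t) = p(s_t | y_t) from a perceptual model

belief_bar = T @ belief                  # predict: sum over y_{t-1}
belief = lik * belief_bar                # correct: weight by the observation likelihood
belief /= belief.sum()                   # normalize by Z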
4 METHODS
4.1 PROBLEM FORMULATION
Let st be the observation received by the agent and at be the action taken by the agent at time step t. Let yt be a random variable denoting the state of the agent, that includes its x-coordinate, y-coordinate and orientation. In addition to agent’s past observations and actions, a localization algorithm requires some information about the map, such as the map design. Let the information about the map be denoted by M . In the problem of active localization, we have two goals: (1) Similar to the standard state estimation problem in the Bayesian filter framework, the goal is to estimate the belief, Bel(yt), or the probability distribution of yt conditioned over past observations and actions and the information about the map, Bel(yt) = p(yt|s1:t, a1:t−1,M), (2) We also aim to learn a policy π(at|Bel(yt)) for localizing accurately and efficiently.
4.2 PROPOSED MODEL
Representation of Belief and Likelihood Let yt be a tuple ⟨Ao, Ax, Ay⟩ where Ax, Ay and Ao denote the agent’s x-coordinate, y-coordinate and orientation respectively. Let M × N be the map size, and O be the number of possible orientations of the agent. Then, Ax ∈ [1,M ], Ay ∈ [1, N ] and Ao ∈ [1, O]. Belief is represented as an O × M × N tensor, where the (i, j, k)th element denotes the belief of the agent being in the corresponding state, Bel(yt = i, j, k). This kind of grid-based representation of belief is popular among localization methods as it offers several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). Let Lik(st) = p(st|yt) be the likelihood of observing st given that the location of the agent is yt. The likelihood of an observation in a certain state is also represented by an O × M × N tensor, where the (i, j, k)th element denotes the likelihood of the current observation, st, given that the agent’s state is yt = i, j, k. We refer to these tensors as the Belief Map and the Likelihood Map in the rest of the paper.
Model Architecture The overall architecture of the proposed model, Active Neural Localizer (ANL), is shown in Figure 1. It has two main components: the perceptual model and the policy model. At each timestep t, the perceptual model takes in the agent’s observation, st, and outputs the Likelihood Map Lik(st). The belief is propagated through time by taking an element-wise product with the Likelihood Map at each timestep. Let $\overline{Bel}(y_t)$ be the Belief Map at time t before observing st. Then the belief after observing st, denoted by Bel(yt), is calculated as follows:
$$Bel(y_t) = \frac{1}{Z}\, \overline{Bel}(y_t) \odot Lik(s_t)$$
where $\odot$ denotes the Hadamard product and $Z = \sum_{y_t} Lik(s_t)\, \overline{Bel}(y_t)$ is the normalization constant.
The Belief Map, after observing st, is passed through the policy model to obtain the probability of taking any action, π(at|Bel(yt)). The agent takes an action at sampled from this policy. The Belief Map at time t + 1 is calculated using the transition function (fT), which updates the belief at each location according to the action taken by the agent, i.e. p(yt+1|yt, at). The transition function is similar to the egomotion model used by Gupta et al. (2017) for mapping and planning. For ‘turn left’ and ‘turn right’ actions, the transition function just swaps the belief in each orientation. For the ‘move forward’ action, the belief values are shifted one unit according to the orientation. If the next unit is an obstacle, then the value doesn’t shift, indicating a collision (See Appendix B for more details).
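As a rough sketch of this update (not our actual implementation), the correction step and a turn transition over an O × M × N belief tensor can be written as follows; the tensor sizes are illustrative, and the ‘move forward’ case with obstacle handling is omitted here (a sketch of it appears under Appendix B.3):

import numpy as np

O, M, N = 4, 7, 7                                # orientations and map size (illustrative)
belief = np.full((O, M, N), 1.0 / (O * M * N))   # uniform prior over the Belief Map
lik = np.random.rand(O, M, N)                    # Likelihood Map Lik(s_t)

belief = belief * lik                            # Hadamard product with the Likelihood Map
belief /= belief.sum()                           # divide by Z so the belief sums to one

# A turn only permutes the orientation planes of the belief, e.g. 'turn left':
belief = np.roll(belief, shift=1, axis=0)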
4.3 MODEL COMPONENTS
Perceptual Model The perceptual model computes the feature representation from the agent’s observation and the states given in the map information. The likelihood of each state in the map information is calculated by taking the cosine similarity of the feature representation of the agent’s observation with the feature representation of the state. Cosine similarity is commonly used for computing the similarity of representations (Nair & Hinton, 2010; Huang et al., 2013) and has also been used in the context of localization (Chaplot et al., 2016). The benefits of using cosine similarity over the dot-product have been highlighted by Chunjie et al. (2017).
In the 2D environments, the observation is used to compute a one-hot vector of the same dimension representing the depth which is used as the feature representation directly. This resultant Likelihood map has uniform non-zero probabilities for all locations having the observed depth and zero probabilities everywhere else. For the 3D environments, the feature representation of each observation is obtained using a trainable deep convolutional network (LeCun et al., 1995) (See Appendix B for architecture details). Figure 2 shows examples of the agent observation and the corresponding Likelihood Map computed in both 2D and 3D environments. The simulation environments are described in detail in Section 5.1.
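A minimal sketch of this likelihood computation, assuming precomputed D-dimensional feature vectors for the observation and for every state (the sizes below are illustrative; D = 512 matches the fully-connected layer size reported in Appendix B):

import numpy as np

def likelihood_map(obs_feat, state_feats, eps=1e-8):
    # Cosine similarity of the observation feature (D,) against per-state
    # features (O, M, N, D), giving an O x M x N Likelihood Map.
    obs = obs_feat / (np.linalg.norm(obs_feat) + eps)
    states = state_feats / (np.linalg.norm(state_feats, axis=-1, keepdims=True) + eps)
    return states @ obs                  # unit-vector dot product = cosine similarity

O, M, N, D = 4, 7, 7, 512                # illustrative sizes
lik = likelihood_map(np.random.rand(D), np.random.rand(O, M, N, D))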
Policy Model The policy model gives the probability of the next action given the current belief of the agent. It is trained using reinforcement learning, specifically the Asynchronous Advantage Actor-Critic (A3C) (Mnih et al., 2016) algorithm (See Appendix A for a brief background on reinforcement learning). The belief map is stacked with the map design matrix and passed through 2 convolutional layers followed by a fully-connected layer to predict the policy as well as the value function. The policy and value losses are computed using the rewards observed by the agent and backpropagated through the entire model (See Appendix B for architecture details).
5 EXPERIMENTS
As described in Section 4, the agent’s state, yt, is a tuple ⟨Ao, Ax, Ay⟩ where Ax, Ay and Ao denote the agent’s x-coordinate, y-coordinate and orientation respectively. Ax ∈ [1,M ], Ay ∈ [1, N ] and Ao ∈ [1, O], where M × N is the map size, and O is the number of possible orientations of the agent. We use a variety of domains for our experiments. The values of M and N vary across domains but O = 4 is fixed. The possible actions in all domains are ‘move forward’, ‘turn left’ and ‘turn right’. The turn angle is fixed at 360/O = 90°. This ensures that the agent is always in one of the 4 orientations, North, South, East and West. Note that although we use O = 4 in all our experiments, our method is defined for any value of O. At each time step, the agent receives an intermediate reward equal to the maximum probability of being in any state, rt = maxyt(Bel(yt)). This encourages the agent to reduce the entropy of the Belief Map in order to localize as fast as possible. We observed that the agent converges to similar performance without the intermediate reward, but it helps in speeding up training. At the end of the episode, the prediction is the state with the highest probability in the Belief Map. If the prediction is correct, i.e. y∗ = arg maxyt Bel(yt) where y∗ is the true state of the agent, then the agent receives a positive reward of 1. Please refer to Appendix B for more details about training and hyper-parameters. The metric for evaluation is accuracy (Acc), which refers to the ratio of episodes where the agent’s prediction was correct over 1000 episodes. We also report the total runtime of the method, in seconds, taken to evaluate 1000 episodes.
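The reward structure can be sketched as follows; the belief values below are random placeholders and the helper names are ours, not part of the released code:

import numpy as np

def intermediate_reward(belief):
    return belief.max()                          # r_t = max_y Bel(y_t)

def terminal_reward(belief, true_state):
    pred = np.unravel_index(belief.argmax(), belief.shape)
    return 1.0 if pred == true_state else 0.0    # +1 iff the argmax state is correct

belief = np.random.rand(4, 7, 7)
belief /= belief.sum()
print(intermediate_reward(belief), terminal_reward(belief, (0, 3, 3)))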
5.1 SIMULATION ENVIRONMENTS
Maze 2D In the Maze2D environment, maps are represented by a binary matrix, where 0 denotes an obstacle and 1 denotes free space. The map designs are generated randomly using Kruskal’s algorithm (Kruskal, 1956). The agent’s observation in 2D environments is the series of pixels in front of the agent. For a map size of M × N, the agent’s observation is an array of size max(M,N) containing the pixel values in front of the agent. The view of the agent is obscured by obstacles, so all pixel values behind the first obstacle are treated as 0. The information about the map, M, received by the agent is the matrix representing the map design. Note that the observation at any state in the map can be computed using the map design. The top row in Figure 2 shows examples of the map design and the agent’s observation in this environment.
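A sketch of this observation computation for a binary Maze2D map (the heading encoding and helper name are our own assumptions):

import numpy as np

def observation_2d(map_design, pos, heading):
    # Pixels in front of the agent; everything at and beyond the first
    # obstacle stays 0. map_design: binary (M, N), 1 = free, 0 = obstacle.
    M, N = map_design.shape
    dr, dc = [(-1, 0), (0, 1), (1, 0), (0, -1)][heading]   # N, E, S, W (assumed order)
    obs = np.zeros(max(M, N))
    r, c = pos
    for i in range(obs.size):
        r, c = r + dr, c + dc
        if not (0 <= r < M and 0 <= c < N) or map_design[r, c] == 0:
            break
        obs[i] = 1.0
    return obs

maze = np.ones((7, 7)); maze[3, 4] = 0
print(observation_2d(maze, (3, 1), heading=1))   # facing East, wall at column 4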
The 2D environments provide ideal conditions for Bayesian filtering due to lack of observation or motion noise. The experiments in the 2D environments are designed to evaluate and quantify the effectiveness of the policy learning model in ideal conditions. The size of the 2D environments can also be varied to test the scalability of the policy model. This design is similar to previous experimental settings such as by Tamar et al. (2016) and Karkus et al. (2017) for learning a target-driven navigation policy in grid-based 2D environments.
3D Environments In the 3D environments, the observation is an RGB image of the first-person view of the world as seen by the agent. The x-y coordinates of the agent are continuous variables,
unlike the discrete grid-based coordinates in the 2D environments. The matrix denoting the belief of the agent is discretized, meaning that each pixel in the Belief Map corresponds to a range of states in the environment. At the start of every episode, the agent is spawned at a random location in this continuous range, as opposed to a discrete location corresponding to a pixel in the belief map for 2D environments. This makes localization much more challenging than in the 2D environments. Apart from the map design, the agent also receives a set of images of the visuals seen by the agent at a few locations uniformly placed around the map in all 4 orientations. These images, called memory images, are required by the agent for global localization. They are not very expensive to obtain in real-world environments. We use two types of 3D environments:
Maze3D: Maze3D consists of virtual 3D maps built using the Doom Game Engine. We use the ViZDoom API (Kempka et al., 2016) to interact with the game engine. The map designs are generated using Kruskal’s algorithm and Doom maps are created based on these map designs using Action Code Scripts3. The design of the map is identical to the Maze2D map designs, with the difference that the paths are much thicker than the walls as shown in Figure 2. The texture of each wall, floor and ceiling can be controlled, which allows us to create maps with different numbers of ‘landmarks’. Landmarks are defined to be walls with a unique texture.
Unreal3D: Unreal3D is a photo-realistic simulation environment built using the Unreal Game Engine. We use the AirSim API (Shah et al., 2017) to interact with the game engine. The environment consists of a modern office space as shown in Figure 2 obtained from the Unreal Engine Marketplace4.
The 3D environments are designed to test the ability of the proposed model to jointly learn the perceptual model along with the policy model as the agent needs to handle raw pixel based input while learning a policy. The Doom environment provides a way to test the model in challenging ambiguous environments by controlling the number of landmarks in the environment, while the Unreal Environment allows us to evaluate the effectiveness of the model in comparatively more realistic settings.
5.2 BASELINES
Markov Localization (Fox, 1998) is a passive probabilistic approach based on Bayesian filtering. We use a geometric variant of Markov localization where the state space is represented by a fine-grained, regularly spaced grid, called position probability grids (Burgard et al., 1996), similar to the state space in the proposed model. Grid-based state space representations are known to offer several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). In passive localization approaches, the actions taken by the agent are random.
Active Markov Localization (AML) (Fox et al., 1998) is the active variant of Markov Localization where the actions taken by the agent are chosen to maximize the ratio of the ‘utility’ of the action to the ‘cost’ of the action. The ‘utility’ of an action a at time t is defined as the expected reduction in the uncertainty of the agent state after taking the action a at time t and making the next observation
3 https://en.wikipedia.org/wiki/Action_Code_Script 4 https://www.unrealengine.com/marketplace/small-office-prop-pack
at time t + 1: $U_t(a) = H(y_t) - E_a[H(y_{t+1})]$, where H(y) denotes the entropy of the belief, $H(y) = -\sum_y Bel(y) \log Bel(y)$, and $E_a[H(y_{t+1})]$ denotes the expected entropy of the agent after taking the action a and observing yt+1. The ‘cost’ of an action refers to the time needed to perform the action. In our environment, each action takes a single time step, thus the cost is constant.
We define a generalized version of the AML algorithm. The utility can be maximized over a sequence of actions rather than just a single action. Let $a^* \in A^{n_l}$ be the action sequence of length $n_l$ that maximizes the utility at time t, $a^* = \arg\max_a U_t(a)$ (where A denotes the action space). After computing a∗, the agent need not take all the actions in a∗ before maximizing the utility again. This is because new observations made while taking the actions in a∗ might affect the utility of the remaining actions. Let ng ∈ {1, 2, ..., nl} be the number of actions taken by the agent, denoting the greediness of the algorithm. Due to the high computational complexity of calculating the utility, the agent performs random actions until the belief is concentrated on nm states (ignoring beliefs under a certain threshold). The complexity of the generalized AML is $O(n_m (n_l - n_g) |A|^{n_l})$. Given sufficient computational power, the optimal sequence of actions can be calculated with nl equal to the length of the episode, ng = 1, and nm equal to the size of the state space.
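A sketch of the entropy-based utility underlying this generalization; the `expected_next_beliefs` helper is a dummy stand-in for the environment model that AML would use to enumerate possible next observations:

import numpy as np

def entropy(belief, eps=1e-12):
    b = belief.ravel()
    return float(-(b * np.log(b + eps)).sum())

def utility(belief, action, expected_next_beliefs):
    # U_t(a) = H(y_t) - E_a[H(y_{t+1})]
    exp_h = sum(p * entropy(nb) for nb, p in expected_next_beliefs(belief, action))
    return entropy(belief) - exp_h

def expected_next_beliefs(belief, action):
    # Dummy stand-in: AML would apply the transition for `action` and
    # enumerate the possible next observations with their probabilities.
    nb = np.roll(belief, 1)
    return [(nb / nb.sum(), 1.0)]

belief = np.random.rand(16); belief /= belief.sum()
best = max(['forward', 'left', 'right'],
           key=lambda a: utility(belief, a, expected_next_beliefs))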
In the original AML algorithm, the utility was maximized over single actions, i.e. nl = 1, which also makes ng = 1. The value of nm used in their experiments is not reported; however, they show an example with nm = 6. We run AML with all possible combinations of values of nl ∈ {1, 5, 10, 15}, ng ∈ {1, nl} and nm ∈ {5, 10} and define two versions: (1) Active Markov Localization (Fast): Generalized AML algorithm using the values of nl, ng, nm that maximize the performance while keeping the runtime comparable to ANL, and (2) Active Markov Localization (Slow): Generalized AML algorithm using the values of nl, ng, nm which maximize the performance while keeping the runtime for 1000 episodes below 24hrs (which is the training time of the proposed model) in each environment (See Appendix B for more details on the implementation of AML).
The perceptual model for both Markov Localization and Active Markov Localization needs to be specified separately. For the 2D environments, the perceptual model uses 1-hot vector representation of depth. For the 3D Environments, the perceptual model uses a pretrained Resnet-18 (He et al., 2016) model to calculate the feature representations for the agent observations and the memory images.
5.3 RESULTS
2D Environments For the Maze2D environment, we run all models on mazes of size 7×7, 15×15 and 21×21 with varying episode lengths. We train all the models on randomly generated mazes and test on a fixed set of 1000 mazes (different from the mazes used in training). The results on the Maze2D environment are shown in Table 1. As seen in the table, the proposed model, Active Neural Localization, outperforms all the baselines on average. The proposed method achieves a higher overall accuracy than AML (Slow) while being 100 times faster. Note that the runtime of AML for 1000 episodes is comparable to the total training time of the proposed model. The long runtime of AML (Slow) makes it infeasible to be used in real-time in many cases. When AML has a comparable runtime, its performance drops by about 37% (AML (Fast)). We also observe that the difference in the performance of ANL and the baselines is higher for smaller episode lengths. This indicates that ANL is more efficient (meaning it requires fewer actions to localize) in addition to being more accurate.
3D Environments All the mazes in the Maze3D environment are of size 70×70, while the office environment is of size 70×50. The agent location is a continuous value in this range. Each cell roughly corresponds to an area of 40cm×40cm in the real world. The set of memory images corresponds to only about 6% of the total states. Likelihoods of the rest of the states are obtained by bilinear smoothing. All episodes have a fixed length of 30 actions. Although the size of the Office Map is 70×50, we represent the Likelihood and Belief Maps by 70×70 tensors in order to transfer the model between both the 3D environments for domain adaptation. We also add Gaussian noise with 5% standard deviation to all translations in the 3D environments.

[Table 2: accuracy and runtime (seconds over 1000 episodes) of Active Neural Localization and the baselines in the 3D environment settings.]
In the Maze3D environment, we vary the difficulty of the environment by varying the number of landmarks in the environment. Landmarks are defined to be walls with a unique texture. Each landmark is present only on a single wall in a single cell in the maze grid. All the other walls have a common texture, making the map very ambiguous. We expect landmarks to make localization easier, as the likelihood maps should have a lower entropy when the agent visits a landmark, which consequently should reduce the entropy of the Belief Map. We run experiments with 10, 5 and 0 landmarks. The textures of the landmarks are randomized during training. This technique of domain randomization has been shown to be effective in generalizing to unknown maps within the simulation environment (Lample & Chaplot, 2016) and in transferring from simulation to the real world (Tobin et al., 2017). In each experiment, the agent is trained on a set of 40 mazes and evaluated in two settings: (1) Unseen mazes with seen textures: the textures of each wall in the test set mazes have been seen in the training set; however, the map designs of the test set mazes are unseen, and (2) Unseen mazes with unseen textures: both the textures and the map designs are unseen. We test on a set of 20 mazes for each evaluation setting. Figure 3 shows examples for both settings.
In the Unreal3D environment, we test the effectiveness of the model in adapting to dynamic lighting changes. We modified the Office environment using the Unreal Game Engine Editor to create two scenarios: (1) Lights: where all the office lights are switched on; (2) NoLights: where all the office lights are switched off. Figure 3 shows sample agent observations with and without lights at the same locations. To test the model’s ability to adapt to dynamic lighting changes, we train the model on the Office map with lights and test it on the same map without lights. The memory images provided to the agent are taken while the lights are switched on. Note that this is different from the experiments on unseen mazes in the Maze3D environment, where the agent is given memory images of the unseen environments.
The results for the 3D environments are shown in Table 2 and an example of the policy execution is shown in Figure 4⁵. The proposed model significantly outperforms all baseline models in all evaluation settings with the lowest runtime. We see similar trends in the runtime and accuracy trade-off between the two versions of AML as seen in the 2D results. The absolute performance of AML (Slow) is rather poor in the 3D environments as compared to Maze2D. This is likely due to the decrease in the value of the look-ahead parameter, nl, to 3 and the increase in the value of the greediness hyper-parameter, ng, to 3, as compared to nl = 5, ng = 1 in Maze2D, in order to ensure runtimes under 24hrs.
The ANL model performs better on the realistic Unreal environment as compared to the Maze3D environment, as most scenes in the Unreal environment consist of unique landmarks, while the Maze3D environment consists of random mazes with the same texture everywhere except at the landmarks. In the Maze3D environment, the model is able to generalize well not only to unseen map designs but also to unseen textures. However, the model doesn’t generalize well to dynamic lighting changes in the Unreal3D environment. This highlights a current limitation of RGB image-based localization approaches as compared to depth-based approaches, as depth sensors are invariant to lighting changes.
5 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization
Domain Adaptation We also test the ability of the proposed model to adapt between different simulation environments. The model trained on the Maze3D is directly tested on the Unreal3D Office Map without any fine-tuning. The results in Table 2 show that the model is able to generalize well to Unreal environment from the Doom Environment. We believe that the policy model generalizes well because the representation of belief and map design is common in all environments and policy model is based only on the belief and the map design, while the perceptual model generalizes well because it has learned to measure the similarity of the current image with the memory images as it was trained on environments with random textures. This property is similar to siamese networks used for one-shot image recognition (Koch et al., 2015).
6 CONCLUSION
In this paper, we proposed a fully-differentiable model for active global localization which uses structured components for Bayes filter-like belief propagation and learns a policy based on the belief to localize accurately and efficiently. This allows the policy and observation models to be trained jointly using reinforcement learning. We showed the effectiveness of the proposed model on a variety of challenging 2D and 3D environments, including a realistic map in the Unreal environment. The results show that our model consistently outperforms the baseline models while being orders of magnitude faster. We also show that a model trained on random textures in the Doom simulation environment is able to generalize to the photo-realistic Office map in the Unreal simulation environment. While this gives us hope that the model can potentially be transferred to real-world environments, we leave that for future work. The limitation of the model in adapting to dynamic lighting can potentially be tackled by training the model with dynamic lighting in random mazes in the Doom environment. There can be several extensions to the proposed model too. The model can be combined with Neural Map (Parisotto & Salakhutdinov, 2017) to train an end-to-end model for a SLAM-type system, and the architecture can also be utilized for end-to-end planning under uncertainty.
ACKNOWLEDGEMENTS
This work was supported by Apple, IARPA DIVA award D17PC00340, and ONR award N000141512791. The authors would also like to thank Jian Zhang for helping with the Unreal environment and NVidia for donating a DGX-1 deep learning machine and providing GPU support.
A BACKGROUND: REINFORCEMENT LEARNING
In the standard Reinforcement Learning setting (Sutton & Barto, 1998), at each time step t, an agent receives an observation, st, from the environment, performs an action at and receives a reward rt. The goal is to learn a policy π(a|s) which maximizes the expected return, i.e. the sum of discounted rewards $R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$, where T is the time at which the episode terminates, and γ ∈ [0, 1] is a discount factor that determines the importance of future rewards.
Reinforcement learning methods can broadly be divided into value-based methods and policy-based methods. Policy-based methods parametrize the policy function, which can be optimized directly to maximize the expected return (E[Rt]) (Sutton et al., 2000). While policy-based methods suffer from high variance, value-based methods typically use temporal difference learning, which provides low-variance estimates of the expected return. Actor-Critic methods (Barto et al., 1983; Sutton, 1984; Konda & Tsitsiklis, 1999) combine the benefits of both approaches by estimating both the value function, V^π(st; θv), as well as the policy function, π(at|st; θ) (Grondman et al., 2012). The REINFORCE family of algorithms (Williams, 1992) is popular for optimizing the policy function, which updates the policy parameters θ in the direction of ∇θ log π(at|st; θ)Rt. Since this update is an unbiased estimate of ∇θE[Rt], its variance can be reduced by subtracting a baseline function, bt(st), from the expected return (∇θ log π(at|st; θ)(Rt − bt(st))). When the estimate of the value function (V^π(st)) is used as the baseline, the resultant algorithm is called Advantage Actor-Critic, as the resultant policy gradient is scaled by the estimate of the advantage of the action at in state st, A(at, st) = Q(at, st) − V(st). The Asynchronous Advantage Actor-Critic algorithm (Mnih et al., 2016) uses a deep neural network to parametrize the policy and value functions and runs multiple parallel threads to update the network parameters.
In this paper, we use the A3C algorithm for all our experiments. We also use entropy regularization for improved exploration as described by (Mnih et al., 2016). In addition, we use the Generalized Advantage Estimator (Schulman et al., 2015) to reduce the variance of the policy gradient updates.
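For concreteness, a small sketch of the discounted returns and Generalized Advantage Estimates used in such updates (hyper-parameter values here are illustrative):

import numpy as np

def returns_and_gae(rewards, values, gamma=0.99, lam=1.0):
    # Discounted returns R_t = sum_{t' >= t} gamma^(t'-t) r_{t'} and Generalized
    # Advantage Estimates, computed backwards from the end of the episode.
    T = len(rewards)
    returns, adv = np.zeros(T), np.zeros(T)
    R, A, next_v = 0.0, 0.0, 0.0
    for t in reversed(range(T)):
        R = rewards[t] + gamma * R
        delta = rewards[t] + gamma * next_v - values[t]
        A = delta + gamma * lam * A
        returns[t], adv[t], next_v = R, A, values[t]
    return returns, adv

r = np.array([0.1, 0.2, 1.0]); v = np.array([0.5, 0.6, 0.8])
print(returns_and_gae(r, v))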
B IMPLEMENTATION DETAILS
B.1 MODEL ARCHITECTURE DETAILS
The perceptual model for the 3D Environments receives RGB images of size 108x60. It consists of 2 Convolutional Layers. The first convolutional layer contains 32 filters of size 8x8 and stride of 4. The second convolutional layer contains 64 filters of size 4x4 with a stride of 2. The convolutional layers are followed by a fully-connected layer of size 512. The output of this fully-connected layer is used as the representation of the image while constructing the likelihood map. Figure 5 shows the architecture of the perceptual model in 3D environments. This architecture is adapted from previous work which is shown to perform well at playing deathmatches in Doom (Chaplot & Lample, 2017).
The policy model consists of two convolutional layers too. For the 2D environments, both the convolutional layers contain 16 filters of size 3 with stride of 1. For the 3D environments, the first convolutional layer contains 16 filters of size 7x7 with a stride of 3 and the second convolutional layer contains 16 filters of size 3x3 with a stride of 1. The convolutional layers are followed by a fully-connected layer of size 256. Figure 6 shows the architecture of the policy model in 3D environments.
We add an action history of length 5 (the last 5 actions) as well as the current time step as input to the policy model. We observed that the action history input avoids the agent getting stuck alternating between ‘turn left’ and ‘turn right’ actions, whereas the time step helps in accurately predicting the value function, as the episode lengths are fixed in each environment. Each action in the action history as well as the current timestep are passed through an Embedding Layer to get an embedding of size 8. The embeddings of all actions and the time step are concatenated with the 256-dimensional output of the fully-connected layer. The resultant vector is passed through two branches of single fully-connected layers to get the policy (actor layer with 3 outputs) and the value function (critic layer with 1 output).
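A PyTorch sketch of this policy model is given below. The layer sizes follow the text above (5 input planes from the 4-orientation Belief Map stacked with the map design, 7×7 and 3×3 convolutions, a 256-unit fully-connected layer, size-8 embeddings, and actor/critic heads); the flattened convolution size is computed for a 70×70 input, and details such as the extra embedding index for ‘no action yet’ are our own assumptions:

import torch
import torch.nn as nn

class PolicyModel(nn.Module):
    # Sketch of the 3D-environment policy model described in Appendix B.1.
    def __init__(self, n_actions=3, hist_len=5, max_t=30):
        super().__init__()
        # 4 orientation planes of the Belief Map stacked with 1 map-design plane.
        self.conv = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.fc = nn.Linear(16 * 20 * 20, 256)       # 70x70 input -> 22x22 -> 20x20
        self.act_emb = nn.Embedding(n_actions + 1, 8)  # +1 index for 'no action yet'
        self.time_emb = nn.Embedding(max_t + 1, 8)
        in_dim = 256 + 8 * hist_len + 8
        self.actor = nn.Linear(in_dim, n_actions)    # policy head
        self.critic = nn.Linear(in_dim, 1)           # value head

    def forward(self, belief_and_map, action_history, timestep):
        x = self.conv(belief_and_map).flatten(1)
        x = torch.relu(self.fc(x))
        h = self.act_emb(action_history).flatten(1)  # last-5-actions embeddings
        t = self.time_emb(timestep)
        x = torch.cat([x, h, t], dim=1)
        return torch.log_softmax(self.actor(x), dim=1), self.critic(x)

m = PolicyModel()
log_probs, value = m(torch.rand(1, 5, 70, 70),
                     torch.zeros(1, 5, dtype=torch.long),
                     torch.zeros(1, dtype=torch.long))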
B.2 HYPER-PARAMETERS AND TRAINING DETAILS
All the models are trained with A3C using Stochastic Gradient Descent with a learning rate of 0.001. We use 8 threads for the 2D experiments and 4 threads for the 3D experiments. Each thread performed an A3C update after 20 steps. The weight for entropy regularization was 0.01. The discount factor (γ) for reinforcement learning was chosen to be 0.99. The gradients were clipped at 40. All models are trained for 24hrs of wall clock time. All the 2D experiments (including evaluation runtime benchmarks for baselines) were run on an Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz and all the 3D experiments were run on an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz. While all the A3C training threads ran on CPUs, the Unreal engine also utilized an NVidia GeForce GTX 1080 GPU. The model with the best performance on the training environment is used for evaluation.
B.3 TRANSITION FUNCTION
The transition function transforms the belief according to the action taken by the agent. For turn actions, the belief maps in each orientation are swapped according to the direction of the turn. For the move forward action, all probability values move one cell in the orientation of the agent, except those which are blocked by a wall (indicating a collision). Figure 7 shows sample outputs of the transition function given the previous belief and the action taken by the agent.
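A sketch of the ‘move forward’ case with this collision handling (array layout and orientation order are our own assumptions):

import numpy as np

def move_forward(belief, free):
    # Shift each orientation plane one cell along its heading; mass that would
    # leave the map or enter an obstacle stays in place (a collision).
    # belief: (4, M, N); free: binary (M, N), 1 = free space.
    shifts = [(-1, 0), (0, 1), (1, 0), (0, -1)]   # N, E, S, W (assumed order)
    out = np.zeros_like(belief)
    M, N = free.shape
    for o, (dr, dc) in enumerate(shifts):
        for r in range(M):
            for c in range(N):
                nr, nc = r + dr, c + dc
                if 0 <= nr < M and 0 <= nc < N and free[nr, nc]:
                    out[o, nr, nc] += belief[o, r, c]   # moves forward
                else:
                    out[o, r, c] += belief[o, r, c]     # blocked: stays put
    return out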
B.4 IMPLEMENTATION DETAILS OF ACTIVE MARKOV LOCALIZATION
In order to make our implementation of generalized AML as efficient as possible, we employ various techniques described by the authors, such as Pre-computation and Selective computation (Fox et al., 1998), along with other techniques such as hashing of expected entropies for action subsequences. The restrictions in runtime led to nl = 1, ng = 1, nm = 5 in both 2D and 3D environments for AML (Fast), nl = 5, ng = 1, nm = 10 in 2D environments for AML (Slow) and nl = 3, ng = 3, nm = 10 in the 3D environments for AML (Slow).
The computation of expected entropies requires the expected observations at future states while rolling out a sequence of actions. While it is possible to calculate these in 2D environments with depth-based observations, it is not possible to do this in 3D environments with RGB image observations. However, for comparison purposes we assume that AML has a perfect model of the environment and provide future observations by rolling out the action sequences in the simulation environment. | 1. What is the main contribution of the paper in terms of localization on a known map?
2. How does the proposed approach utilize a belief network and RL to achieve its goals?
3. Can you provide more details about the environments used for evaluation, such as the grid world, textured 3D mazes, and photorealistic environment?
4. How effective is the transfer learning from the simulated Doom environment to the photorealistic Unreal environment?
5. What are the strengths and weaknesses of the proposed method compared to traditional methods like Bayes filters? | Review | Review
I have evaluated this paper for NIPS 2017 and gave it an "accept" rating at the time, but the paper was ultimately not accepted. This resubmission has been massively improved and definitely deserves to be published at ICLR.
This paper formulates the problem of localisation on a known map, using a belief network, as an RL problem. The goal of the agent is to minimise the number of steps to localise itself (the agent needs to move around to accumulate evidence about its position), which corresponds to reducing the entropy of the joint distribution over a discretized grid over theta (4 orientations), x and y. The model is evaluated on a grid world, on textured 3D mazes with simplified motion (Doom environment) and on a photorealistic environment using the Unreal engine. Optimisation is done through A3C RL. Transfer from the crude simulated Doom environment to the photorealistic Unreal environment is achieved.
The belief network consists of an observation model, a motion prediction model that allows for translations along x or y and 90deg rotation, and an observation correction model that either perceives the depth in front of the agent (a bold and ambiguous choice) and matches it to the 2D map, or perceives the image in front of the agent. The map is part of the observation.
The algorithm outperforms Bayes filters for localisation in 2D and 3D, and the idea of applying RL to minimise the entropy of position estimation is brilliant. Minor note: I am surprised that the cognitive map reference (Gupta et al., 2017) was dropped, as it seemed relevant.
ICLR | Title
Active Neural Localization
Abstract
Localization is the problem of estimating the location of an autonomous agent from an observation and a map of the environment. Traditional methods of localization, which filter the belief based on the observations, are sub-optimal in the number of steps required, as they do not decide the actions taken by the agent. We propose “Active Neural Localizer”, a fully differentiable neural network that learns to localize accurately and efficiently. The proposed model incorporates ideas of traditional filtering-based localization methods, by using a structured belief of the state with multiplicative interactions to propagate belief, and combines it with a policy model to localize accurately while minimizing the number of steps required for localization. Active Neural Localizer is trained end-to-end with reinforcement learning. We use a variety of simulation environments for our experiments which include random 2D mazes, random mazes in the Doom game engine and a photo-realistic environment in the Unreal game engine. The results on the 2D environments show the effectiveness of the learned policy in an idealistic setting while results on the 3D environments demonstrate the model’s capability of learning the policy and perceptual model jointly from raw-pixel based RGB observations. We also show that a model trained on random textures in the Doom environment generalizes well to a photo-realistic office space environment in the Unreal engine.
1 INTRODUCTION
Localization is the problem of estimating the position of an autonomous agent given a map of the environment and agent observations. The ability to localize under uncertainty is required by autonomous agents to perform various downstream tasks such as planning, exploration and target navigation. Localization is considered one of the most fundamental problems in mobile robotics (Cox & Wilfong, 1990; Borenstein et al., 1996). Localization is useful in many real-world applications such as autonomous vehicles, factory robots and delivery drones.
In this paper we tackle the global localization problem, where the initial position of the agent is unknown. Despite the long history of research, global localization is still an open problem, and few methods have been developed that can be learnt from data in an end-to-end manner; most instead require significant hand-tuning and feature selection by domain experts. Another limitation of the majority of localization approaches to date is that they are passive, meaning that they passively estimate the position of the agent from the stream of incoming observations and do not have the ability to decide the actions taken by the agent. The ability to decide the actions can result in faster as well as more accurate localization, as the agent can learn to navigate quickly to unambiguous locations in the environment.
We propose “Active Neural Localizer”, a neural network model capable of active localization using raw pixel-based observations and a map of the environment¹,². Based on the Bayesian filtering algorithm for localization (Fox et al., 2003), the proposed model contains a perceptual model to estimate the likelihood of the agent’s observations, a structured component for representing the belief, multiplicative interactions to propagate the belief based on observations and a policy model over the current belief to localize accurately while minimizing the number of steps required for localization. The entire model is fully differentiable and trained using reinforcement learning, allowing the perceptual model and the policy model to be learnt simultaneously in an end-to-end fashion. A variety
1 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization 2 The code is available at https://github.com/devendrachaplot/Neural-Localization
of 2D and 3D simulation environments are used for testing the proposed model. We show that the Active Neural Localizer is capable of generalizing to not only unseen maps in the same domain but also across domains.
2 RELATED WORK
Localization has been an active field of research for more than two decades. In the context of mobile autonomous agents, localization can refer to two broad classes of problems: local localization and global localization. Local localization methods assume that the initial position of the agent is known and they aim to track the position as it moves. A large number of localization methods tackle only the problem of local localization. These include classical methods based on Kalman Filters (Kalman et al., 1960; Smith et al., 1990), geometry-based visual odometry methods (Nistér et al., 2006) and, most recently, learning-based visual odometry methods which learn to predict motion between consecutive frames using recurrent convolutional neural networks (Clark et al., 2017b; Wang et al., 2017). Local localization techniques often make restrictive assumptions about the agent’s location. Kalman filters assume Gaussian distributed initial uncertainty, while the visual odometry-based methods only predict the relative motion between consecutive frames or with respect to the initial frame using camera images. Consequently, they are unable to tackle the global localization problem where the initial position of the agent is unknown. This also results in their inability to handle localization failures, which consequently leads to the requirement of constant human monitoring and intervention (Burgard et al., 1998).
Global localization is more challenging than the local localization problem and is also considered the basic precondition for truly autonomous agents by Burgard et al. (1998). Among the methods for global localization, the proposed method is closest to Markov Localization (Fox, 1998). In contrast to local localization approaches, Markov Localization computes a probability distribution over all the possible locations in the environment. The probability distribution, also known as the belief, is represented using piecewise constant functions (or histograms) over the state space and propagated using the Markov assumption. Other methods for global localization include Multi-hypothesis Kalman filters (Cox & Leonard, 1994; Roumeliotis & Bekey, 2000), which use a mixture of Gaussians to represent the belief, and Monte Carlo Localization (Thrun et al., 2001), which uses a set of samples (or particles) to represent the belief.
All the above localization methods are passive, meaning that they are not capable of deciding the actions to localize more accurately and efficiently. There has been very little research on active localization approaches. Active Markov Localization (Fox et al., 1998) is the active variant of Markov Localization where the agent chooses actions greedily to maximize the reduction in the entropy of the belief. Jensfelt & Kristensen (2001) presented the active variant of Multi-hypothesis Kalman filters where actions are chosen to optimise the information gathering for localization. Neither of these methods learns from data, and both have very high computational complexity. In contrast, we demonstrate that the proposed method is several orders of magnitude faster while being more accurate, and is capable of learning from data and generalizing well to unseen environments.
Recent work has also made progress towards end-to-end localization using deep learning models. Mirowski et al. (2016) showed that a stacked LSTM can do reasonably well at self-localization. The model consisted of a deep convolutional network which took in, at each time step, state observations, reward, agent velocity and previous actions. To improve performance, the model also used several auxiliary objectives such as depth prediction and loop closure detection. The agent was successful at navigation tasks within complex 3D mazes. Additionally, the hidden states learned by the models were shown to be quite accurate at predicting agent position, even though the LSTM was not explicitly trained to do so. Other works have looked at doing end-to-end relocalization more explicitly. One such method, called PoseNet (Kendall et al., 2015), used a deep convolutional network to implicitly represent the scene, mapping a single monocular image to a 3D pose (position and orientation). This method is limited by the fact that it requires a new PoseNet trained on each scene, since the map is represented implicitly by the convnet weights, and is unable to transfer to scenes not observed during training. An extension to PoseNet, called VidLoc (Clark et al., 2017a), utilized temporal information to make more accurate estimates of the poses by passing a Bidirectional LSTM over each monocular image in a sequence, enabling a trainable smoothing filter over the pose estimates. Both these methods lack a straightforward method to utilize past map data to do localization in a new environment. In contrast, we demonstrate that our method is capable of generalizing to new maps that were not previously seen during training.
3 BACKGROUND: BAYESIAN FILTERING
Bayesian filters (Fox et al., 2003) are used to probabilistically estimate a dynamic system’s state using observations from the environment and actions taken by the agent. Let yt be the random variable representing the state at time t. Let st be the observation received by the agent and at be the action taken by the agent at time step t. At any point in time, the probability distribution over yt conditioned on past observations s1:t−1 and actions a1:t−1 is called the belief, Bel(yt) = p(yt|s1:t−1, a1:t−1). The goal of Bayesian filtering is to estimate the belief sequentially. For the task of localization, yt represents the location of the agent, although in general it can represent the state of any object(s) in the environment. Under the Markov assumption, the belief can be recursively computed using the following equations:
$$\overline{Bel}(y_t) = \sum_{y_{t-1}} p(y_t \mid y_{t-1}, a_{t-1})\, Bel(y_{t-1}), \qquad Bel(y_t) = \frac{1}{Z}\, Lik(s_t)\, \overline{Bel}(y_t),$$
where Lik(st) = p(st|yt) is the likelihood of observing st given that the location of the agent is yt, and $Z = \sum_{y_t} Lik(s_t)\, \overline{Bel}(y_t)$ is the normalization constant. The likelihood of the observation, Lik(st), is given by the perceptual model, and p(yt|yt−1, at−1), i.e. the probability of landing in a state yt from yt−1 based on the action at−1 taken by the agent, is specified by a state transition function, ft. The belief at time t = 0, Bel(y0), also known as the prior, can be specified based on prior knowledge about the location of the agent. For global localization, the prior belief is typically uniform over all possible locations of the agent, as the agent position is completely unknown.
4 METHODS
4.1 PROBLEM FORMULATION
Let st be the observation received by the agent and at be the action taken by the agent at time step t. Let yt be a random variable denoting the state of the agent, that includes its x-coordinate, y-coordinate and orientation. In addition to agent’s past observations and actions, a localization algorithm requires some information about the map, such as the map design. Let the information about the map be denoted by M . In the problem of active localization, we have two goals: (1) Similar to the standard state estimation problem in the Bayesian filter framework, the goal is to estimate the belief, Bel(yt), or the probability distribution of yt conditioned over past observations and actions and the information about the map, Bel(yt) = p(yt|s1:t, a1:t−1,M), (2) We also aim to learn a policy π(at|Bel(yt)) for localizing accurately and efficiently.
4.2 PROPOSED MODEL
Representation of Belief and Likelihood Let yt be a tuple ⟨Ao, Ax, Ay⟩ where Ax, Ay and Ao denote the agent’s x-coordinate, y-coordinate and orientation respectively. Let M × N be the map size, and O be the number of possible orientations of the agent. Then, Ax ∈ [1,M ], Ay ∈ [1, N ] and Ao ∈ [1, O]. Belief is represented as an O × M × N tensor, where the (i, j, k)th element denotes the belief of the agent being in the corresponding state, Bel(yt = i, j, k). This kind of grid-based representation of belief is popular among localization methods as it offers several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). Let Lik(st) = p(st|yt) be the likelihood of observing st given that the location of the agent is yt. The likelihood of an observation in a certain state is also represented by an O × M × N tensor, where the (i, j, k)th element denotes the likelihood of the current observation, st, given that the agent’s state is yt = i, j, k. We refer to these tensors as the Belief Map and the Likelihood Map in the rest of the paper.
Model Architecture The overall architecture of the proposed model, Active Neural Localizer (ANL), is shown in Figure 1. It has two main components: the perceptual model and the policy model. At each timestep t, the perceptual model takes in the agent’s observation, st, and outputs the Likelihood Map Lik(st). The belief is propagated through time by taking an element-wise product with the Likelihood Map at each timestep. Let $\overline{Bel}(y_t)$ be the Belief Map at time t before observing st. Then the belief after observing st, denoted by Bel(yt), is calculated as follows:
$$Bel(y_t) = \frac{1}{Z}\, \overline{Bel}(y_t) \odot Lik(s_t)$$
where $\odot$ denotes the Hadamard product and $Z = \sum_{y_t} Lik(s_t)\, \overline{Bel}(y_t)$ is the normalization constant.
The Belief Map, after observing st, is passed through the policy model to obtain the probability of taking any action, π(at|Bel(yt)). The agent takes an action at sampled from this policy. The Belief Map at time t + 1 is calculated using the transition function (fT), which updates the belief at each location according to the action taken by the agent, i.e. p(yt+1|yt, at). The transition function is similar to the egomotion model used by Gupta et al. (2017) for mapping and planning. For ‘turn left’ and ‘turn right’ actions, the transition function just swaps the belief in each orientation. For the ‘move forward’ action, the belief values are shifted one unit according to the orientation. If the next unit is an obstacle, then the value doesn’t shift, indicating a collision (See Appendix B for more details).
4.3 MODEL COMPONENTS
Perceptual Model The perceptual model computes the feature representation from the agent’s observation and the states given in the map information. The likelihood of each state in the map information is calculated by taking the cosine similarity of the feature representation of the agent’s observation with the feature representation of the state. Cosine similarity is commonly used for computing the similarity of representations (Nair & Hinton, 2010; Huang et al., 2013) and has also been used in the context of localization (Chaplot et al., 2016). The benefits of using cosine similarity over the dot-product have been highlighted by Chunjie et al. (2017).
In the 2D environments, the observation is used to compute a one-hot vector of the same dimension representing the observed depth, which is used as the feature representation directly. The resultant Likelihood Map has uniform non-zero probabilities for all locations having the observed depth and zero probabilities everywhere else. For the 3D environments, the feature representation of each observation is obtained using a trainable deep convolutional network (LeCun et al., 1995) (see Appendix B for architecture details). Figure 2 shows examples of the agent observation and the corresponding Likelihood Map computed in both 2D and 3D environments. The simulation environments are described in detail in Section 5.1.
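As a rough sketch of how the 3D likelihood computation could look, assuming per-state feature embeddings have already been computed (e.g. from memory images), with the einsum layout being our own choice:

```python
import numpy as np

def likelihood_map(obs_feat, state_feats, eps=1e-8):
    """Cosine similarity of the observation feature with every state feature.

    obs_feat:    (d,) feature of the current observation.
    state_feats: (O, M, N, d) features for each (orientation, x, y) state.
    Returns an (O, M, N) Likelihood Map.
    """
    obs = obs_feat / (np.linalg.norm(obs_feat) + eps)
    states = state_feats / (np.linalg.norm(state_feats, axis=-1, keepdims=True) + eps)
    return np.einsum('omnd,d->omn', states, obs)
```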
Policy Model The policy model gives the probability of the next action given the current belief of the agent. It is trained using reinforcement learning, specifically the Asynchronous Advantage Actor-Critic (A3C) algorithm (Mnih et al., 2016) (see Appendix A for a brief background on reinforcement learning). The belief map is stacked with the map design matrix and passed through 2 convolutional layers followed by a fully-connected layer to predict the policy as well as the value function. The policy and value losses are computed using the rewards observed by the agent and backpropagated through the entire model (see Appendix B for architecture details).
5 EXPERIMENTS
As described in Section 4, the agent's state yt is a tuple (Ao, Ax, Ay) where Ax, Ay and Ao denote the agent's x-coordinate, y-coordinate and orientation respectively. Ax ∈ [1, M], Ay ∈ [1, N] and Ao ∈ [1, O], where M × N is the map size, and O is the number of possible orientations of the agent. We use a variety of domains for our experiments. The values of M and N vary across domains but O = 4 is fixed. The possible actions in all domains are 'move forward', 'turn left' and 'turn right'. The turn angle is fixed at 360/O = 90 degrees. This ensures that the agent is always in one of the 4 orientations: North, South, East and West. Note that although we use O = 4 in all our experiments, our method is defined for any value of O. At each time step, the agent receives an intermediate reward equal to the maximum probability of being in any state, rt = maxyt(Bel(yt)). This encourages the agent to reduce the entropy of the Belief Map in order to localize as fast as possible. We observed that the agent converges to similar performance without the intermediate reward, but it helps in speeding up training. At the end of the episode, the prediction is the state with the highest probability in the Belief Map. If the prediction is correct, i.e. y∗ = argmaxyt Bel(yt) where y∗ is the true state of the agent, then the agent receives a positive reward of 1. Please refer to Appendix B for more details about training and hyper-parameters. The metric for evaluation is accuracy (Acc), which refers to the ratio of episodes where the agent's prediction was correct over 1000 episodes. We also report the total runtime of the method in seconds taken to evaluate 1000 episodes.
5.1 SIMULATION ENVIRONMENTS
Maze 2D In the Maze2D environment, maps are represented by a binary matrix, where 0 denotes an obstacle and 1 denotes free space. The map designs are generated randomly using Kruskal's algorithm (Kruskal, 1956). The agent's observation in 2D environments is the series of pixels in front of the agent. For a map size of M × N, the agent's observation is an array of size max(M, N) containing pixel values in front of the agent. The view of the agent is obscured by obstacles, so all pixel values behind the first obstacle are treated as 0. The information about the map, M, received by the agent is the matrix representing the map design. Note that the observation at any state in the map can be computed using the map design. The top row in Figure 2 shows examples of map design and agent's observation in this environment.
The 2D environments provide ideal conditions for Bayesian filtering due to lack of observation or motion noise. The experiments in the 2D environments are designed to evaluate and quantify the effectiveness of the policy learning model in ideal conditions. The size of the 2D environments can also be varied to test the scalability of the policy model. This design is similar to previous experimental settings such as by Tamar et al. (2016) and Karkus et al. (2017) for learning a target-driven navigation policy in grid-based 2D environments.
3D Environments In the 3D environments, the observation is an RGB image of the first-person view of the world as seen by the agent. The x-y coordinates of the agent are continuous variables,
unlike the discrete grid-based coordinates in the 2D environments. The matrix denoting the belief of the agent is discretized, meaning each pixel in the Belief Map corresponds to a range of states in the environment. At the start of every episode, the agent is spawned at a random location in this continuous range, as opposed to a discrete location corresponding to a pixel in the belief map for 2D environments. This makes localization much more challenging than in the 2D environments. Apart from the map design, the agent also receives a set of images of the visuals seen by the agent at a few locations uniformly placed around the map in all 4 orientations. These images, called memory images, are required by the agent for global localization. They are not very expensive to obtain in real-world environments. We use two types of 3D environments:
Maze3D: Maze3D consists of virtual 3D maps built using the Doom Game Engine. We use the ViZDoom API (Kempka et al., 2016) to interact with the game engine. The map designs are generated using Kruskal's algorithm and Doom maps are created based on these map designs using Action Code Scripts3. The design of the map is identical to the Maze2D map designs, with the difference that the paths are much thicker than the walls, as shown in Figure 2. The texture of each wall, floor and ceiling can be controlled, which allows us to create maps with different numbers of 'landmarks'. Landmarks are defined to be walls with a unique texture.
Unreal3D: Unreal3D is a photo-realistic simulation environment built using the Unreal Game Engine. We use the AirSim API (Shah et al., 2017) to interact with the game engine. The environment consists of a modern office space as shown in Figure 2 obtained from the Unreal Engine Marketplace4.
The 3D environments are designed to test the ability of the proposed model to jointly learn the perceptual model along with the policy model as the agent needs to handle raw pixel based input while learning a policy. The Doom environment provides a way to test the model in challenging ambiguous environments by controlling the number of landmarks in the environment, while the Unreal Environment allows us to evaluate the effectiveness of the model in comparatively more realistic settings.
5.2 BASELINES
Markov Localization (Fox, 1998) is a passive probabilistic approach based on Bayesian filtering. We use a geometric variant of Markov localization where the state space is represented by a fine-grained, regularly spaced grid, called position probability grids (Burgard et al., 1996), similar to the state space in the proposed model. Grid-based state space representations are known to offer several advantages over topological representations (Burgard et al., 1996; Fox et al., 2003). In passive localization approaches, the actions taken by the agent are random.
Active Markov Localization (AML) (Fox et al., 1998) is the active variant of Markov Localization where the actions taken by the agent are chosen to maximize the ratio of the 'utility' of the action to the 'cost' of the action. The 'utility' of an action a at time t is defined as the expected reduction in the uncertainty of the agent state after taking the action a at time t and making the next observation
3 https://en.wikipedia.org/wiki/Action_Code_Script 4 https://www.unrealengine.com/marketplace/small-office-prop-pack
at time t + 1: Ut(a) = H(yt) − Ea[H(yt+1)], where H(y) denotes the entropy of the belief, H(y) = −∑y Bel(y) log Bel(y), and Ea[H(yt+1)] denotes the expected entropy of the belief after taking the action a and making the observation at time t + 1. The 'cost' of an action refers to the time needed to perform the action. In our environment, each action takes a single time step, thus the cost is constant.
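A small sketch of the entropy and utility computations; the expected_next_beliefs interface is hypothetical, standing in for however the expected observations at time t + 1 are enumerated:

```python
import numpy as np

def entropy(belief, eps=1e-12):
    """H(y) = -sum_y Bel(y) log Bel(y) over the flattened Belief Map."""
    b = belief.ravel()
    return -np.sum(b * np.log(b + eps))

def utility(belief, expected_next_beliefs):
    """U_t(a) = H(y_t) - E_a[H(y_{t+1})] for one candidate action a.

    expected_next_beliefs: list of (probability, belief) pairs over the
    possible next observations after taking action a.
    """
    expected_h = sum(p * entropy(b) for p, b in expected_next_beliefs)
    return entropy(belief) - expected_h
```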
We define a generalized version of the AML algorithm. The utility can be maximized over a sequence of actions rather than just a single action. Let a∗ ∈ A^nl be the action sequence of length nl that maximizes the utility at time t, a∗ = argmax_a Ut(a) (where A denotes the action space). After computing a∗, the agent need not take all the actions in a∗ before maximizing the utility again. This is because new observations made while taking the actions in a∗ might affect the utility of the remaining actions. Let ng ∈ {1, 2, ..., nl} be the number of actions taken by the agent, denoting the greediness of the algorithm. Due to the high computational complexity of calculating utility, the agent performs random actions until the belief is concentrated on nm states (ignoring beliefs under a certain threshold). The complexity of the generalized AML is O(nm(nl − ng)|A|^nl). Given sufficient computational power, the optimal sequence of actions can be calculated with nl equal to the length of the episode, ng = 1, and nm equal to the size of the state space.
In the original AML algorithm, the utility was maximized over single actions, i.e. nl = 1, which also makes ng = 1. The value of nm used in their experiments is not reported; however, they show an example with nm = 6. We run AML with all possible combinations of values of nl ∈ {1, 5, 10, 15}, ng ∈ {1, nl} and nm ∈ {5, 10} and define two versions: (1) Active Markov Localization (Fast): the generalized AML algorithm using the values of nl, ng, nm that maximize performance while keeping the runtime comparable to ANL, and (2) Active Markov Localization (Slow): the generalized AML algorithm using the values of nl, ng, nm which maximize performance while keeping the runtime for 1000 episodes below 24hrs (which is the training time of the proposed model) in each environment (see Appendix B for more details on the implementation of AML).
The perceptual model for both Markov Localization and Active Markov Localization needs to be specified separately. For the 2D environments, the perceptual model uses 1-hot vector representation of depth. For the 3D Environments, the perceptual model uses a pretrained Resnet-18 (He et al., 2016) model to calculate the feature representations for the agent observations and the memory images.
5.3 RESULTS
2D Environments For the Maze2D environment, we run all models on mazes of size 7×7, 15×15 and 21×21 with varying episode lengths. We train all the models on randomly generated mazes and test on a fixed set of 1000 mazes (different from the mazes used in training). The results on the Maze2D environment are shown in Table 1. As seen in the table, the proposed model, Active Neural Localization, outperforms all the baselines on average. The proposed method achieves a higher overall accuracy than AML (Slow) while being 100 times faster. Note that the runtime of AML for 1000 episodes is comparable to the total training time of the proposed model. The long runtime of AML (Slow) makes it infeasible to use in real time in many cases. When AML has comparable runtime, its performance drops by about 37% (AML (Fast)). We also observe that the difference in performance between ANL and the baselines is higher for smaller episode lengths. This indicates that ANL is more efficient (meaning it requires fewer actions to localize) in addition to being more accurate.
3D Environments All the mazes in the Maze3D environment are of size 70×70, while the office environment is of size 70×50. The agent location is a continuous value in this range. Each cell roughly corresponds to an area of 40cm×40cm in the real world. The set of memory images corresponds to only about 6% of the total states. Likelihoods of the remaining states are obtained by bilinear smoothing. All episodes have a fixed length of 30 actions. Although the size of the Office map is 70×50, we represent the Likelihood and Belief Maps by 70×70 tensors in order to transfer the model between both 3D environments for domain adaptation. We also add Gaussian noise of 5% standard deviation to all translations in the 3D environments.

[Table 2: Accuracy and total runtime (seconds per 1000 episodes) of Active Neural Localizer and the baselines across the 3D evaluation settings.]
In the Maze3D environment, we vary the difficulty of the environment by varying the number of landmarks in the environment. Landmarks are defined to be walls with a unique texture. Each landmark is present only on a single wall in a single cell in the maze grid. All the other walls have a common texture, making the map very ambiguous. We expect landmarks to make localization easier, as the likelihood maps should have a lower entropy when the agent visits a landmark, which consequently should reduce the entropy of the Belief Map. We run experiments with 10, 5 and 0 landmarks. The textures of the landmarks are randomized during training. This technique of domain randomization has been shown to be effective in generalizing to unknown maps within the simulation environment (Lample & Chaplot, 2016) and transferring from simulation to the real world (Tobin et al., 2017). In each experiment, the agent is trained on a set of 40 mazes and evaluated in two settings: (1) Unseen mazes with seen textures: the textures of each wall in the test set mazes have been seen in the training set, however the map designs of the test set mazes are unseen, and (2) Unseen mazes with unseen textures: both the textures and the map design are unseen. We test on a set of 20 mazes for each evaluation setting. Figure 3 shows examples for both settings.
In the Unreal3D environment, we test the effectiveness of the model in adapting to dynamic lighting changes. We modified the Office environment using the Unreal Game Engine Editor to create two scenarios: (1) Lights, where all the office lights are switched on; (2) NoLights, where all the office lights are switched off. Figure 3 shows sample agent observations with and without lights at the same locations. To test the model's ability to adapt to dynamic lighting changes, we train the model on the Office map with lights and test it on the same map without lights. The memory images provided to the agent are taken while the lights are switched on. Note that this is different from the experiments on unseen mazes in the Maze3D environment, where the agent is given memory images of the unseen environments.
The results for the 3D environments are shown in Table 2 and an example of the policy execution is shown in Figure 4 (see footnote 5). The proposed model significantly outperforms all baseline models in all evaluation settings with the lowest runtime. We see similar trends in the runtime and accuracy trade-off between the two versions of AML as in the 2D results. The absolute performance of AML (Slow) is rather poor in the 3D environments as compared to Maze2D. This is likely due to the decrease in the value of the look-ahead parameter nl to 3 and the increase in the value of the greediness hyper-parameter ng to 3, as compared to nl = 5, ng = 1 in Maze2D, in order to ensure runtimes under 24hrs.
The ANL model performs better on the realistic Unreal environment as compared to the Maze3D environment, as most scenes in the Unreal environment consist of unique landmarks, while the Maze3D environment consists of random mazes with the same texture everywhere except at the landmarks. In the Maze3D environment, the model is able to generalize well not only to unseen map designs but also to unseen textures. However, the model doesn't generalize well to dynamic lighting changes in the Unreal3D environment. This highlights a current limitation of RGB image-based localization approaches as compared to depth-based approaches, as depth sensors are invariant to lighting changes.
5 Demo videos: https://devendrachaplot.github.io/projects/Neural-Localization
Domain Adaptation We also test the ability of the proposed model to adapt between different simulation environments. The model trained on Maze3D is directly tested on the Unreal3D Office map without any fine-tuning. The results in Table 2 show that the model is able to generalize well to the Unreal environment from the Doom environment. We believe that the policy model generalizes well because the representation of belief and map design is common across all environments and the policy model is based only on the belief and the map design, while the perceptual model generalizes well because it has learned to measure the similarity of the current image with the memory images, as it was trained on environments with random textures. This property is similar to siamese networks used for one-shot image recognition (Koch et al., 2015).
6 CONCLUSION
In this paper, we proposed a fully-differentiable model for active global localization which uses structured components for Bayes filter-like belief propagation and learns a policy based on the belief to localize accurately and efficiently. This allows the policy and observation models to be trained jointly using reinforcement learning. We showed the effectiveness of the proposed model on a variety of challenging 2D and 3D environments including a realistic map in the Unreal environment. The results show that our model consistently outperforms the baseline models while being orders of magnitude faster. We also show that a model trained on random textures in the Doom simulation environment is able to generalize to the photo-realistic Office map in the Unreal simulation environment. While this gives us hope that the model can potentially be transferred to real-world environments, we leave that for future work. The limitation of the model in adapting to dynamic lighting can potentially be tackled by training the model with dynamic lighting in random mazes in the Doom environment. There can be several extensions to the proposed model too. The model can be combined with Neural Map (Parisotto & Salakhutdinov, 2017) to train an end-to-end model for a SLAM-type system, and the architecture can also be utilized for end-to-end planning under uncertainty.
ACKNOWLEDGEMENTS
This work was supported by Apple, IARPA DIVA award D17PC00340, and ONR award N000141512791. The authors would also like to thank Jian Zhang for helping with the Unreal environment and NVIDIA for donating a DGX-1 deep learning machine and providing GPU support.
A BACKGROUND: REINFORCEMENT LEARNING
In the standard Reinforcement Learning (Sutton & Barto, 1998) setting, at each time step t, an agent receives an observation st from the environment, performs an action at and receives a reward rt. The goal is to learn a policy π(a|s) which maximizes the expected return, or the sum of discounted rewards R_t = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}, where T is the time at which the episode terminates, and γ ∈ [0, 1] is a discount factor that determines the importance of future rewards.
Reinforcement learning methods can broadly be divided into value-based methods and policy-based methods. Policy-based methods parametrize the policy function, which can be optimized directly to maximize the expected return (E[Rt]) (Sutton et al., 2000). While policy-based methods suffer from high variance, value-based methods typically use temporal difference learning, which provides low-variance estimates of the expected return. Actor-Critic methods (Barto et al., 1983; Sutton, 1984; Konda & Tsitsiklis, 1999) combine the benefits of both by estimating both the value function, V π(st; θv), and the policy function, π(at|st; θ) (Grondman et al., 2012). The REINFORCE family of algorithms (Williams, 1992) is popular for optimizing the policy function; it updates the policy parameters θ in the direction of ∇θ log π(at|st; θ)Rt. Since this update is an unbiased estimate of ∇θE[Rt], its variance can be reduced by subtracting a baseline function bt(st) from the expected return (∇θ log π(at|st; θ)(Rt − bt(st))). When the estimate of the value function (V π(st)) is used as the baseline, the resultant algorithm is called Advantage Actor-Critic, as the resultant policy gradient is scaled by the estimate of the advantage of the action at in state st, A(at, st) = Q(at, st) − V (st). The Asynchronous Advantage Actor-Critic algorithm (Mnih et al., 2016) uses a deep neural network to parametrize the policy and value functions and runs multiple parallel threads to update the network parameters.
In this paper, we use the A3C algorithm for all our experiments. We also use entropy regularization for improved exploration as described by (Mnih et al., 2016). In addition, we use the Generalized Advantage Estimator (Schulman et al., 2015) to reduce the variance of the policy gradient updates.
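For illustration, a minimal synchronous advantage actor-critic loss of the kind each A3C worker optimizes; the loss coefficients and tensor layout below are common defaults, not the exact implementation:

```python
import torch

def a2c_loss(log_probs, values, returns, entropies,
             value_coef=0.5, entropy_coef=0.01):
    """Combine policy, value, and entropy terms for one rollout.

    log_probs, values, entropies: tensors over the rollout steps;
    returns: discounted (or GAE-based) return estimates R_t.
    """
    advantages = returns - values.detach()          # A(a_t, s_t) = R_t - V(s_t)
    policy_loss = -(log_probs * advantages).mean()  # policy gradient term
    value_loss = (returns - values).pow(2).mean()   # critic regression term
    entropy_bonus = entropies.mean()                # encourages exploration
    return policy_loss + value_coef * value_loss - entropy_coef * entropy_bonus
```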
B IMPLEMENTATION DETAILS
B.1 MODEL ARCHITECTURE DETAILS
The perceptual model for the 3D Environments receives RGB images of size 108x60. It consists of 2 Convolutional Layers. The first convolutional layer contains 32 filters of size 8x8 and stride of 4. The second convolutional layer contains 64 filters of size 4x4 with a stride of 2. The convolutional layers are followed by a fully-connected layer of size 512. The output of this fully-connected layer is used as the representation of the image while constructing the likelihood map. Figure 5 shows the architecture of the perceptual model in 3D environments. This architecture is adapted from previous work which is shown to perform well at playing deathmatches in Doom (Chaplot & Lample, 2017).
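A PyTorch sketch of this encoder; ReLU activations are assumed, since the text does not name the non-linearity:

```python
import torch.nn as nn

class PerceptualModel(nn.Module):
    """Conv encoder for 108x60 RGB observations (Appendix B.1 architecture)."""
    def __init__(self, feat_dim=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),   # -> 32 x 14 x 26
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),  # -> 64 x  6 x 12
        )
        self.fc = nn.Linear(64 * 6 * 12, feat_dim)

    def forward(self, x):            # x: (B, 3, 60, 108), i.e. height 60, width 108
        h = self.conv(x).flatten(1)
        return self.fc(h)            # (B, 512) image representation
```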
The policy model consists of two convolutional layers too. For the 2D environments, both the convolutional layers contain 16 filters of size 3 with stride of 1. For the 3D environments, the first convolutional layer contains 16 filters of size 7x7 with a stride of 3 and the second convolutional layer contains 16 filters of size 3x3 with a stride of 1. The convolutional layers are followed by a fully-connected layer of size 256. Figure 6 shows the architecture of the policy model in 3D environments.
We add an action history of length 5 (the last 5 actions) as well as the current time step as input to the policy model. We observed that the action history input prevents the agent from getting stuck alternating between 'turn left' and 'turn right' actions, whereas the time step helps in accurately predicting the value function, as the episode lengths are fixed in each environment. Each action in the action history as well as the current timestep is passed through an embedding layer to get an embedding of size 8. The embeddings of all actions and the time step are concatenated with the 256-dimensional output of the fully-connected layer. The resultant vector is passed through two branches of single fully-connected layers to get the policy (actor layer with 3 outputs) and the value function (critic layer with 1 output).
B.2 HYPER-PARAMETERS AND TRAINING DETAILS
All the models are trained with A3C using Stochastic Gradient Descent with a learning rate of 0.001. We use 8 threads for 2D experiments and 4 threads for 3D experiments. Each thread performed an A3C update after 20 steps. The weight for entropy regularization was 0.01. The discount factor (γ) for reinforcement learning was chosen to be 0.99. The gradients were clipped at 40. All models are trained for 24hrs of wall clock time. All the 2D experiments (including evaluation runtime benchmarks for baselines) were run on Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz and all the 3D experiments were run on Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz. While all the A3C training threads ran on CPUs, the Unreal engine also utilized a NVidia GeForce GTX 1080 GPU. The model with the best performance on the training environment is used for evaluation.
B.3 TRANSITION FUNCTION
The transition function transforms the belief according to the action taken by the agent. For turn actions, the beliefs maps in each orientation are swapped according to the direction of the turn. For the move forward action, all probability values move one cell in the orientation of the agent, except those which are blocked by a wall (indicating a collision). Figure 7 shows sample outputs of the transition function given previous belief and action taken by the agent.
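A numpy sketch of this transition function; the North/East/South/West orientation indexing and the loop-based shift are illustrative choices:

```python
import numpy as np

def transition(belief, action, free):
    """Deterministic egomotion update of an (O=4, M, N) Belief Map.

    free: (M, N) binary map, 1 = free space, 0 = obstacle.
    Orientations indexed 0..3 as North, East, South, West (assumed order).
    """
    if action == 'turn_left':
        return np.roll(belief, -1, axis=0)   # mass at orientation o moves to o-1
    if action == 'turn_right':
        return np.roll(belief, 1, axis=0)
    # 'move_forward': shift each orientation plane one cell along its heading,
    # leaving mass in place where the next cell is an obstacle (collision).
    out = np.zeros_like(belief)
    shifts = {0: (-1, 0), 1: (0, 1), 2: (1, 0), 3: (0, -1)}   # (dy, dx)
    M, N = free.shape
    for o, (dy, dx) in shifts.items():
        for y in range(M):
            for x in range(N):
                ny, nx = y + dy, x + dx
                if 0 <= ny < M and 0 <= nx < N and free[ny, nx]:
                    out[o, ny, nx] += belief[o, y, x]
                else:
                    out[o, y, x] += belief[o, y, x]   # blocked: belief stays put
    return out
```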
B.4 IMPLEMENTATION DETAILS OF ACTIVE MARKOV LOCALIZATION
In order to make our implementation of generalized AML as efficient as possible, we employ various techniques described by the authors, such as Pre-computation and Selective computation (Fox et al., 1998), along with other techniques such as hashing of expected entropies for action subsequences. The restrictions in runtime led to nl = 1, ng = 1, nm = 5 in both 2D and 3D environments for AML (Fast), nl = 5, ng = 1, nm = 10 in 2D environments for AML (Slow) and nl = 3, ng = 3, nm = 10 in the 3D environments for AML (Slow).
The computation of expected entropies requires the expected observations in future states while rolling out a sequence of actions. While it is possible to calculate these in 2D environments with depth-based observations, it is not possible to do this in 3D environments with RGB image observations. However, for comparison purposes we assume that AML has a perfect model of the environment and provide future observations by rolling out the action sequences in the simulation environment.

1. What is the main contribution of the paper regarding parameterized networks for robotic action selection?
2. What are the strengths of the proposed approach, particularly in its design and effectiveness?
3. Do you have any concerns or questions about the objective function used in the model?
4. How does the reviewer assess the exploration mechanism during the RL phase?
5. What is the purpose of memory images in the proposed method?
6. Are there any issues with the experimental designs or results presented in the paper?
7. How does the reviewer evaluate the overall performance of the proposed method compared to prior works?

Review
This is an interesting paper that builds a parameterized network to select actions for a robot in a simulated environment, with the objective of quickly reaching an internal belief state that is predictive of the true state. This is an interesting idea and it works much better than I would have expected.
On more careful examination it is clear that the authors have done a good job of designing a network that is partly pre-specified and partly free, in a way that makes the learning effective. In particular
- the transition model is known and fixed (in the way it is used in the belief update process)
- the belief state representation is known and fixed (in the way it is used to decide whether the agent should be rewarded)
- the reward function is known and fixed (as above)
- the mechanics of belief update
But we learn
- the observation model
- the control policy
I'm not sure that global localization is still an open problem with known models. Or, at least, it's not one of our worst.
Early work by Cassandra, Kurien, et al used POMDP models and solvers for active localization with known transition and observation models. It was computationally slow but effective.
Similarly, although the online speed of your learned method is much better than for active Markov localization, the offline training cost is dramatically higher; it's important to remember to be clear on this point.
It is not obvious to me that it is sensible to take the cosine similarity between the feature representation of the observation and the feature representation of the state to get the entry in the likelihood map. It would be good to make it clear this is the right measure.
How is exploration done during the RL phase? These domains are still not huge.
Please explain in more detail what the memory images are doing.
In general, the experiments seem to be well designed and well carried out, with several interesting extensions.
I have one more major concern: it is not the job of a localizer to arrive at a belief state with high probability mass on the true state---it is the job of a localizer to have an accurate approximation of the true posterior under the prior and observations. There are situations (in which, for example, the robot has gotten an unusual string of observations) in which it is correct for the robot to have more probability mass on a "wrong" state. Or, it seems that this model may earn rewards for learning to make its beliefs overconfident. It would be very interesting to see if you could find an objective that would actually cause the model to learn to compute the appropriate posterior.
In the end, I have trouble making a recommendation:
Con: I'm not convinced that an end-to-end approach to this problem is the best one
Pro: It's actually a nice idea that seems to have worked out well
Con: I remain concerned that the objective is not the right one
My rating would really be something like 6.5 if that were possible.
ICLR
Title
Active Learning over Multiple Domains in Natural Language Tasks
Abstract
Studies of active learning traditionally assume the target and source data stem from a single domain. However, in realistic applications, practitioners often require active learning with multiple sources of out-of-distribution data, where it is unclear a priori which data sources will help or hurt the target domain. We survey a wide variety of techniques in active learning (AL), domain shift detection (DS), and multi-domain sampling to examine this challenging setting for question answering and sentiment analysis. We ask (1) what family of methods are effective for this task? And (2) what properties of selected examples and domains achieve strong results? Among 18 acquisition functions from 4 families of methods, we find H-Divergence methods, and particularly our proposed variant DAL-E, yield effective results, averaging 2-3% improvements over the random baseline. We also show the importance of a diverse allocation of domains, as well as room for improvement of existing methods on both domain and example selection. Our findings yield the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning for natural language tasks.
1 INTRODUCTION
New natural language problems, outside the watershed of core NLP, are often strictly limited by a dearth of labeled data. While unlabeled data is frequently available, it is not always from the same source as the target distribution. This is particularly prevalent for tasks characterized by (i) significant distribution shift over time, (ii) personalization for user subgroups, or (iii) different collection mediums (see examples in Section A).
A widely-used solution to this problem is to bootstrap a larger training set using active learning (AL): a method to decide which unlabeled training examples should be labeled on a fixed annotation budget (Cohn et al., 1996; Settles, 2012). However, most active learning literature in NLP assumes the unlabeled source data is drawn from the same distribution as the target data (Dor et al., 2020). This simplifying assumption avoids the frequent challenges faced by practitioners in multi-domain active learning. In this realistic setting, there are multiple sources of data (i.e. domains) to consider. In this case, it’s unclear whether to optimize for homogeneity or heterogeneity of selected examples. Secondly, is it more effective to allocate an example budget per domain, or treat examples as a single unlabeled pool? Where active learning baselines traditionally select examples the model is least confident on (Settles, 2009), in this setting it could lead to distracting examples from very dissimilar distributions.
In this work we empirically examine four separate families of methods (uncertainty-based, HDivergence, reverse classification accuracy, and semantic similarity detection) over several question answering and sentiment analysis datasets, following (Lowell et al., 2019; Elsahar & Gallé, 2019b), to provide actionable insights to practitioners facing this challenging variant of active learning for natural language. We address the following questions:
1. What family of methods are effective for multi-domain active learning?
2. What properties of the example and domain selection yield strong results?
While previous work has investigated similar settings (Saha et al., 2011; Liu et al., 2015; Zhao et al., 2021; Kirsch et al., 2021) we contribute, to our knowledge, the first rigorous formalization and broad survey of methods within NLP. We find that many families of techniques for active learning and
domain shift detection fail to reliably beat random baselines in this challenging variant of active learning, but certain H-Divergence methods are consistently strong. Our analysis identifies stark dissimilarities of these methods’ example selection, and suggests domain diversity is an important factor in achieving strong results. These results may serve as a guide to practitioners facing this problem, suggesting particular methods that are generally effective and properties of strategies that increase performance.
2 RELATED WORK
Active Learning in NLP Lowell et al. (2019) shows how inconsistent active learning methods are in NLP, even under regular conditions. However, Dor et al. (2020); Siddhant & Lipton (2018) survey active learning methods in NLP and find notable gains over random baselines. Kouw & Loog (2019) survey domain adaptation without target labels, similar to our setting, but for non-language tasks. We reference more active learning techniques in Section 4.
Domain Shift Detection Elsahar & Gallé (2019b) attempt to predict accuracy drops due to domain shifts and Rabanser et al. (2018) surveys different domain shift detection methods. Arora et al. (2021) examine calibration and density estimation for textual OOD detection.
Active Learning under Distribution Shift A few previous works investigated active learning under distribution shifts, though mainly in image classification, with single source and target domains. Kirsch et al. (2021) finds that BALD, which is often considered the state of the art for unshifted domain settings, can get stuck on irrelevant source domain or junk data. Zhao et al. (2021) investigates label shift, proposing a combination of predicted class balanced subsampling and importance weighting. Saha et al. (2011), whose approach corrects joint distribution shift, relies on the covariate shift assumption. However, in practical settings, there may be general distributional shifts where neither the covariate shift nor label shift assumptions hold.
Transfer Learning from Multiple Domains Attempts to better understand how to handle shifted domains for better generalization or target performance have motivated work in question answering (Talmor & Berant, 2019; Fisch et al., 2019; Longpre et al., 2019; Kamath et al., 2020) and classification tasks (Ruder & Plank, 2018; Sheoran et al., 2020). Ruder & Plank (2017) show the benefits of both data similarity and diversity in transfer learning. Rücklé et al. (2020) find that sampling from a wide variety of source domains (data scale) outperforms sampling similar domains in question answering. He et al. (2021) investigate a version of multi-domain active learning where models are trained and evaluated on examples from all domains, focusing on robustness across domains.
3 MULTI-DOMAIN ACTIVE LEARNING
Suppose we have multiple domains D1, D2, ..., Dk.1 Let one of the k domains be the target set DT, and let the other k − 1 domains comprise the source set DS = ⋃i≠T Di.
Given:
• Target: Small samples of labeled data points (x, y) from the target domain: DtrainT, DdevT, DtestT ∼ DT.2
• Source: A large sample of unlabeled points (x) from the source domains: DS = ⋃i≠T Di
Task:
1. Choose n samples from DS to label: DchosenS ⊂ DS, |DchosenS| = n, selected by argmaxx∈DS Af(x), where Af is an acquisition function: a policy to select unlabeled examples from DS for labeling.
2. Train a model M on Dfinal−train = DtrainT ∪ DchosenS, validating on DdevT.
3. Evaluate M on DtestT, giving score s.

1We define a domain as a dataset collected independently of the others.
2|DtrainT| = 2000 to simulate a small but reasonable quantity of labeled, in-domain training data for active learning scenarios.
For Step 1, the practitioner chooses n samples with the highest scores according to their acquisition function Af . M is fine-tuned on these n samples, then evaluated on DtestT to demonstrate Af ’s ability to choose relevant out-of-distribution training examples.
4 METHODS
We identify four families of methods relevant to active learning over multiple shifted domains. Uncertainty methods are common in standard active learning for measuring example uncertainty or familiarity to a model; H-Divergence techniques train classifiers for domain shift detection; Semantic Similarity Detection finds data points similar to points in the target domain; and Reverse Classification Accuracy approximates the benefit of training on a dataset. A limitation of our work is we do not cover all method families, such as domain adaptation, just those we consider most applicable. We derive ∼18 active learning variants, comprising the most prevalent and effective from prior work, and novel extensions/variants of existing paradigms for the multi-domain active learning setting (see KNN, R̃CA and DAL-E).
Furthermore, we split the families into two acquisition strategies: Single Pool Strategy and Domain Budget Allocation. Single Pool Strategy, comprising the first three families of methods, treats all examples as coming from one single unlabeled pool. Domain Budget Allocation, consisting of Reverse Classification Accuracy methods, simply allocate an example budget for each domain.
We enumerate acquisition methods Af below. Each method produces a full ranking of examples in the source set DS . To rank examples, most acquisition methods train an acquisition model, MA, using the same model architecture as M . MA is trained on all samples from DtrainT , except for DAL and KNN, which split DtrainT into two equal segments, one for training MA and one for an internal model. Some methods have both ascending and descending orders of these rankings (denoted by ↑ and ↓ respectively, in the method abbreviations), to test whether similar or distant examples are preferred in a multi-domain setting.
Certain methods use vector representations of candidate examples. We benchmark with both task-agnostic and task-specific encoders. The task-agnostic embeddings are taken from the last layer's CLS token in Reimers & Gurevych (2019)'s sentence encoder (see Appendix for details). The task-specific embeddings are taken from the last layer's CLS token in the trained model MA.
The motivation of the task-specific variant is that each example's representation will capture task-relevant differences between examples while ignoring irrelevant differences.3 The versions of DAL and KNN methods that use task-specific vectors are denoted with "∗" in their abbreviation. Otherwise, they use task-agnostic vectors.
4.1 UNCERTAINTY METHODS
These methods measure the uncertainty of a trained model on a new example. Uncertainty can reflect either aleatoric uncertainty, due to ambiguity inherent in the example, or epistemic uncertainty, due to limitations of the model (Kendall & Gal, 2017). For the following methods, let Y be the set of all possible labels produced from the model M(x) and ly be the logit value for y ∈ Y .
Confidence (CONF) A model’s confidence P (y|x) in its prediction y estimates the difficulty or unfamiliarity of an example (Guo et al., 2017; Elsahar & Gallé, 2019a).
Entropy (ENTR) Entropy applies Shannon entropy (Shannon, 1948) to the full distribution of class probabilities for each example, formalized as AENTR.
ACONF(x, MA) = −max(P(y|x))        AENTR(x, MA) = −∑_{i=1}^{|Y|} P(yi|x) · log P(yi|x)
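Both scores can be computed directly from the model's class logits; a minimal PyTorch sketch (the clamp guarding log(0) is our addition):

```python
import torch.nn.functional as F

def uncertainty_scores(logits):
    """A_CONF and A_ENTR from class logits; higher scores are acquired first."""
    probs = F.softmax(logits, dim=-1)                          # (B, |Y|)
    a_conf = -probs.max(dim=-1).values                         # -max_y P(y|x)
    a_entr = -(probs * probs.clamp_min(1e-12).log()).sum(-1)   # Shannon entropy
    return a_conf, a_entr
```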
Energy-based Out-of-Distribution Detection (ENG) Liu et al. (2020) use an energy-based score to distinguish between in- and out-distribution examples. They demonstrate this method is less susceptible to overconfidence issues of softmax approaches.
3For instance, consider in one domain every example is prefixed with “Text:” while the other is not — telling the difference is trivial, but the examples could be near-identical with respect to the task.
Bayesian Active Learning by Disagreement (BALD) Gal & Ghahramani (2016) introduces estimating uncertainty by measuring prediction disagreement over multiple inference passes, each with a distinct dropout mask. BALD isolates epistemic uncertainty, as the model would theoretically produce stable predictions over inference passes given sufficient capacity. We conduct T = 20 forward passes on x. ŷt = argmaxiP (yi|x)t, representing the predicted class on the t-th model pass on x. Following (Lowell et al., 2019), ties are broken by taking the mean label entropy over all T runs.
AENG(x, MA) = −log ∑_{y∈Y} e^{ly}        ABALD(x, MA) = 1 − count(mode_{t∈T}(ŷt)) / T
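A sketch of the BALD score via MC dropout; the tie-breaking by mean label entropy described above is omitted for brevity:

```python
import torch

@torch.no_grad()
def bald_scores(model, x, T=20):
    """Disagreement across T stochastic forward passes: 1 - count(mode)/T."""
    model.train()   # keep dropout active at inference time
    preds = torch.stack([model(x).argmax(dim=-1) for _ in range(T)])  # (T, B)
    mode, _ = preds.mode(dim=0)                    # most frequent prediction
    agree = (preds == mode).float().mean(dim=0)    # count(mode) / T per example
    return 1.0 - agree
```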
4.2 H-DIVERGENCE METHODS
Ben-David et al. (2006; 2010) formalize the divergence between two domains as the H-Divergence, which they approximate as the difficulty for a discriminator to differentiate between the two.4 Discriminative Active Learning (DAL) applies this concept to the active learning setting (Gissin & Shalev-Shwartz, 2019).
We explore variants of DAL, using an XGBoost decision tree (Chen & Guestrin, 2016) as the discriminator model g.5 For the following methods, let Dtrain−BT be the 1k examples from D train T that were not used to train MA. Let E be an encoder function, which can be task-specific or agnostic as described above. We use samples both fromDtrain−BT andDS to train the discriminator. We assign samples origin labels l, which depend on the DAL variant. Samples from DS with discriminator predictions closest to 1 are selected for labeling. The acquisition scoring function for each DAL method and training set definition, respectively, are:
ADAL(x, g, E) = g(E(x)),        with discriminator training set {(E(x), l) | x ∈ Dtrain−BT ∪ DS}
Discriminative Active Learning — Target (DAL-T) DAL-T trains a discriminator g to distinguish between target examples in Dtrain−BT and out-of-distribution examples from DS . For DAL-T, l = 1Dtrain−BT (x).
Discriminative Active Learning — Error (DAL-E) DAL-E is a novel variant of DAL. DAL-E's approach is to find examples that are similar to those in the target domain that MA misclassified. We partition Dtrain−BT further into erroneous samples DerrT and correct samples DcorrT, where Dtrain−BT = DerrT ∪ DcorrT. For DAL-E, l = 1DerrT(x).
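A sketch of DAL-E's scoring with an XGBoost discriminator; the array interface and hyperparameters are assumptions:

```python
import numpy as np
from xgboost import XGBClassifier

def dal_e_scores(target_embs, target_correct, source_embs):
    """Rank the source pool by a discriminator trained to spot D^err_T.

    target_embs:    (n_t, d) embeddings of held-out target examples.
    target_correct: (n_t,) bool, True where M_A classified the example correctly.
    source_embs:    (n_s, d) embeddings of the unlabeled source pool.
    """
    X = np.vstack([target_embs, source_embs])
    # l = 1 for misclassified target examples (D^err_T), 0 for everything else
    y = np.concatenate([(~target_correct).astype(int),
                        np.zeros(len(source_embs), dtype=int)])
    g = XGBClassifier(n_estimators=100).fit(X, y)
    return g.predict_proba(source_embs)[:, 1]   # select scores closest to 1
```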
4.3 REVERSE CLASSIFICATION ACCURACY
RCA Reverse Classification Accuracy (RCA) estimates how effective a source set Di, i∈S is as training data for the target test set DT (Fan & Davidson, 2006; Elsahar & Gallé, 2019b). Without gold labels for Di, we compute soft labels instead, using the BERT-Base MA trained on the small labeled set DtrainT. We then train a child model Mi on Di using these soft labels, and evaluate the child model on DdevT. RCA chooses examples randomly from whichever domain i produced the highest score si.
ARCA(x) = 1_{D_{argmax_{i∈S} si}}(x)        R̃CA: τi = si / (sT − si),    |Dchoseni| = τi / ∑j τj
RCA-Smoothed (R̃CA) Standard RCA only selects examples from one domain Di. We develop a novel variant which samples from multiple domains, proportional to their relative performance on the target domain DdevT. RCA-Smoothed (R̃CA) selects |Dchoseni| examples from source domain i, based on the relative difference between the performance si (of a child model Mi trained on domain i with pseudo-labels from MA) on the target domain, and the performance sT of a model trained directly on the target domain DdevT. Since these strategies directly estimate the model performance on the target domain resulting from training on each source domain, RCA and R̃CA are strong Domain Budget Allocation candidates.
4.4 NEAREST NEIGHBOUR / SEMANTIC SIMILARITY DETECTION (KNN)
Nearest neighbour methods (KNN) are used to find examples that are semantically similar. Using sentence encoders we can search the source set DS to select the top k nearest examples by cosine similarity to the target set. We represent the target set as the mean embedding of DtrainT . For question answering, where an example contains two sentences (the query and context), we refer to KNN-Q where we only encode the query text, KNN-C where we only encode the context text, or KNN-QC where we encode both concatenated together. The acquisition scoring function per example, uses either a task-specific or task-agnostic encoder E:
AKNN(x, E) = CosSim(E(x), Mean(E(DtrainT)))
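A numpy sketch of this acquisition score:

```python
import numpy as np

def knn_scores(source_embs, target_embs, eps=1e-8):
    """Cosine similarity of each source example to the mean target embedding."""
    centroid = target_embs.mean(axis=0)
    centroid = centroid / (np.linalg.norm(centroid) + eps)
    src = source_embs / (np.linalg.norm(source_embs, axis=1, keepdims=True) + eps)
    return src @ centroid   # (n_s,) scores; take the top n as D^chosen_S
```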
5 EXPERIMENTS
Experiments are conducted on two common NLP tasks: question answering (QA) and sentiment analysis (SA), each with several available domains.
Question Answering We employ 6 diverse QA datasets from the MRQA 2019 workshop (Fisch et al., 2019), shown in Table 1 (left).6 We sample 60k examples from each dataset for training, 5k for validation, and 5k for testing. Questions and contexts are collected with varying procedures and sources, representing a wide diversity of datasets.
Sentiment Analysis For the sentiment analysis classification task, we follow (Blitzer et al., 2007) and (Ruder & Plank, 2018) by randomly selecting 6 Amazon multi-domain review datasets, as well as Yelp reviews (Asghar, 2016) and IMDB movie reviews datasets (Maas et al., 2011). 7 Altogether, these datasets exhibit wide diversity based on review length and topic (see Table 1). We normalize all datasets to have 5 sentiment classes: very negative, negative, neutral, positive, and very positive. We sample 50k examples for training, 5k for validation, and 5k for testing.
Experimental Setup To evaluate methods for the multi-domain active learning task, we conduct the experiment described in Section 3 for each acquisition method, rotating each domain as the target set. Model M , a BERT-Base model (Devlin et al., 2019), is chosen via hyperparameter grid search over learning rate, number of epochs, and gradient accumulation. The large volume of experiments entailed by this search space limits our capacity to benchmark performance variability due to isolated factors (the acquisition method, the target domain, or fine-tuning final models). However, our hyperparameter search closely mimics the process of an ML practitioner looking to select a best method and model, so we believe our experiment design captures a fair comparison among methods. See Algorithm 1 in Appendix Section B for full details.
4The approximation is also referred to as Proxy A-Distance (PAD) by Elsahar & Gallé (2019b).
5Hyperparameter choices and training procedures are detailed in the Appendix.
6The workshop pre-processed all datasets into a similar format, for fully answerable, span-extraction QA: https://github.com/mrqa/MRQA-Shared-Task-2019
7https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/
6.1 COMPARING ACQUISITION METHODS
Results in Figure 1 show the experiments described in Section 5: benchmarking each acquisition method for multi-domain active learning. We observe for both question answering (QA) and sentiment analysis (SA), most methods manage to outperform the no-extra-labelled data baseline (0% at the y-axis) and very narrowly outperform the random selection baseline (red line). Consistent with prior work (Lowell et al., 2019), active learning strategies in NLP have brittle and inconsistent improvements over random selection. Our main empirical findings, described in this section, include:
• H-Divergence, and particularly DAL-E variants, consistently outperform baselines and other families of methods.
• The ordering of examples in Uncertainty methods depend significantly on the diversity in source domains. BALD variants perform best among available options.
• Task-agnostic representations, used in KNN or DAL variants, provide consistently strong results on average, but task-specific representations significantly benefit certain target sets.
• Different families of methods rely on orthogonal notions of relevance in producing their example rankings.
H-Divergence methods categorically achieved the highest and most reliable scores, both as a family and individual methods, represented in the top 3 individual methods 11 / 18 times for QA, and 20 / 24 times for SA. For QA, BALD↑ and DAL-E∗ had the best mean and median scores respectively, and for SA DAL-E achieved both the best mean and median scores. Among these methods, our proposed DAL-E variants routinely outperform DAL-T variants by a small margin on average, with equivalent training and tuning procedures. We believe this is because DAL-E captures both notions of domain similarity and uncertainty. By design it prioritizes examples that are similar to in-domain samples, but also avoids those which are uninformative, because the model already performs well on them.
Among Uncertainty methods, for SA methods which select for higher uncertainty vastly outperformed those which selected for low uncertainty. The opposite is true for QA. This suggests the
diversity of QA datasets contain more extreme (harmful) domain shift than the (mostly Amazon-based) SA datasets.8 In both settings, the right ordering of examples with BALD (epistemic uncertainty) achieves the best results in this family of methods, over the others, which rely on total uncertainty.
Among Reverse Classification Accuracy methods, our R̃CA variant also noticeably outperforms standard RCA and most other methods, aside from DAL and BALD. Combining R̃CA with an example ranking method is a promising direction for future work, given the performance it achieves selecting examples randomly as a Domain Budget Allocation strategy.
Lastly, the Semantic Similarity Detection set of methods only rarely or narrowly exceed random selection. Intuitively, task-agnostic representations (KNN) outperform KNN∗, given the task-agnostic sentence encoder was optimized for cosine similarity.
Embedding Ablations To see the effects of embedding space on KNN and DAL, we used both a task-specific and task-agnostic embedding space. While a task-specific embedding space reduces the examples to features relevant for the task, a task-agnostic embedding space produces generic notions of similarity, unbiased by the task model.
According to Figure 1, KNN outperforms KNN∗. In the QA setting, KNN∗’s median is below the random baseline’s. In both plots, KNN∗’s whiskers extend below 0, indicating that in some cases the method actually chooses source examples that are harmful to target domain performance.
For DAL methods, task-agnostic and task-specific embeddings demonstrated mostly similar median performances. Notably, the boxes and whiskers are typically longer for task-specific methods than task-agnostic methods. This variability indicates certain target datasets may benefit significantly from task-specific embeddings, though task-agnostic embeddings achieve more consistent results.
Comparing Example Rankings For each setting, we quantify how similarly acquisition methods rank examples from DS. In Figure 2, for each pair of methods, we calculate the Kendall's Tau coefficient between the source example rankings chosen for a target domain, then average this coefficient over the target domains. Kendall's Tau gives a score in [−1, 1], with −1 meaning perfect anti-correlation, 0 meaning no correlation, and 1 meaning perfect correlation between the rankings. Methods from different families show close to no relationship, even if they achieve similar performances, suggesting each family relies on orthogonal notions of similarity to rank example relevance. This suggests there is potential for combining methods from different families for this task in future work.
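The pairwise comparison reduces to scipy's kendalltau over the two methods' scores on a shared source pool; a minimal sketch:

```python
import numpy as np
from scipy.stats import kendalltau

def ranking_agreement(scores_a, scores_b):
    """Kendall's Tau between two acquisition methods' example rankings."""
    tau, _ = kendalltau(scores_a, scores_b)
    return tau   # in [-1, 1]; averaged over target domains as in Figure 2

# Example: two unrelated score vectors give tau near 0
rng = np.random.default_rng(0)
print(ranking_agreement(rng.random(1000), rng.random(1000)))
```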
In the sentiment tasks, all uncertainty methods produced highly correlated example rankings. In QA, ENTR had little correlation with any method. This is likely due to the significantly larger output space for QA models. Compared to only 5 label classes in SA, question answering models distribute their start and end confidences over sequences of up to 512 tokens, where there can be multiple valid answer candidates. Embedding space also largely influences the examples that methods chose. DAL methods had higher correlations with each other when they share the same embedding space; i.e. DAL-E's ranking has a higher correlation with DAL-T's than with DAL-E∗'s.
6.2 PROPERTIES OF OPTIMAL EXAMPLE SELECTION
We examine three properties of optimally selected examples: (i) whether selecting from many diverse or one single domain leads to better performance, (ii) whether the selection of a domain or the individual examples matters more to performance, and (iii) whether selection strategies can benefit from source domain information rather than treating samples as drawn from a single pool? Our findings regarding properties of optimal selection, as described in this section, include:
• Selecting a diversity of domains usually outperforms selecting examples from a single domain.
• Acquisition functions such as DAL-E∗ do rely on example selection, mainly to avoid the possibility of large negative outcomes.
• Domain Budget Allocation during selection may improve performance. Surprisingly, even random selection from an “optimal” balance of domains beats our best performing acquisition methods most of the time.
8Accordingly, we attempt to derive a relationship between domain distance and method performance in Appendix F, but find intuitive calculations of domain distance uninterpretable.
Are Many Diverse or One Single Domain Preferable? To answer this question we conduct a full search over all combinations of source datasets. For each target set, we fix 2k in-domain data points and sample all combinations of other source sets in 2k increments, such that altogether there are 10k training data points. For each combination of source sets, we conduct a simple grid search, randomly sampling the source set examples each time, and select the best model, mimicking standard practice among practitioners.
The result is a comprehensive search of all combinations of source sets (in 2k increments) up to 10k training points, so we can rank all combinations of domains per target by performance. Tables 2a and 2b show that the optimal selections, even at increments as coarse as 2k, typically include at least two or more domains to achieve the best performance. However, 1 of 6 targets for QA, and 2 of 8 for the SA tasks, achieve better results selecting all examples from a single domain, suggesting this is a strong baseline if the right source domain is isolated. We also report the mean score of all permutations to demonstrate the importance of selecting the right set of domains over a random combination.
Domains or Examples? Which is more important: selecting the right domains, or the right examples within some domain? From the above optimal search experiment we see that selecting the right combination of domains regularly leads to strong improvements over a random combination of domains. Whether example selection is more important than domain selection may vary depending on the example variety within each domain. We narrow our focus to how much example selection plays a role for one of the stronger acquisition functions: DAL-E∗. We fix the effect of domain selection (the number of examples from each domain) but vary which examples are specifically selected. Using DAL-E∗'s distribution of domains, we compare the mean performance of models trained on its highest-ranked examples against a random set of examples sampled from those same domains. We find a +0.46 ± 0.25% improvement for QA, and +0.12 ± 0.19% for SA. We also compare model performances trained on random selection against the lowest-ranked examples by DAL-E∗. Interestingly, we see a −1.64 ± 0.37% performance decrease for QA, and a −1.46 ± 0.56% decrease for sentiment tasks. These results suggest that example selection is an important factor beyond domain selection, especially for avoiding bad example selections.
Single Pool or Domain Budget Allocation Does using information about examples’ domains during selection lead to better results than treating all examples as coming from a single unlabeled pool? Originally, we hypothesized Single Pool Strategy methods would perform better on smaller budget sizes as they add the most informative data points regardless of domain. On the other hand, we thought that if the budget size is large, Domain Budget Allocation would perform best, as they choose source domains closest to the target domain. Based on Tables 1b and 1a, we were not able to draw conclusions about this hypothesis, as each sample size n = {8000, 18000, 28000} produced roughly similar winning methods. Future work should include a wider range of budget sizes with larger changes in method performance between sizes.
In our main set of experiments, the RCA acquisition functions follow the Domain Budget Allocation strategy, while all other acquisition functions follow the Single Pool strategies. Based on median performance, R̃CA outperformed all other methods (we include BALD here due to its inconsistent performance between QA and SA) except for those in the H-Divergence family. This suggests that using domain information during selection can lead to performance gains.
The Optimal Domain Search experiments, shown in Tables 2a and 2b, further suggest that allocating a budget per domain can improve performance. For 8 out of our 14 experiments, selecting random samples according to the optimal domain distribution outperforms any active learning strategy. While the optimal domain distributions were not computed a priori in our experiments, this result shows the potential of Domain Budget Allocation strategies. Future work could reasonably improve our results by developing an acquisition function that better predicts the optimal domain distributions than R̃CA, or achieve even greater performance gains by budgeting each domain and then applying an active learning strategy (e.g. DAL-E) within each budget, as in the sketch below.
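To make this budget-then-rank idea concrete, the following is a minimal Python sketch (not part of our experiments); the budgets and scores inputs are hypothetical placeholders for the output of a budget allocator (e.g. R̃CA) and of an example-level acquisition function (e.g. DAL-E):

from collections import defaultdict

def budget_then_rank(source_pool, budgets, scores):
    """Pick the top-scoring examples within each domain's labeling budget.

    source_pool: list of (domain, example_id) pairs from the unlabeled source set
    budgets:     dict mapping domain -> number of examples to label from it
    scores:      dict mapping example_id -> acquisition score (higher is better)
    """
    by_domain = defaultdict(list)
    for domain, example_id in source_pool:
        by_domain[domain].append(example_id)

    chosen = []
    for domain, budget in budgets.items():
        ranked = sorted(by_domain[domain], key=scores.get, reverse=True)
        chosen.extend(ranked[:budget])
    return chosen

# toy usage with hypothetical domains and scores
pool = [("news", "n1"), ("news", "n2"), ("web", "w1"), ("web", "w2")]
print(budget_then_rank(pool, {"news": 1, "web": 1},
                       {"n1": 0.9, "n2": 0.4, "w1": 0.2, "w2": 0.7}))
# -> ['n1', 'w2']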
7 CONCLUSION
We examine a challenging variant of active learning where target data is scarce and multiple shifted domains operate as the source set of unlabeled data. For practitioners facing multi-domain active learning, we benchmark 18 acquisition functions, demonstrating that the H-Divergence family of methods, and our proposed variant DAL-E, achieve the best results. Our analysis shows the importance of example selection in existing methods, as well as the surprising potential of domain budget allocation strategies. Combining families of methods, or applying domain adaptation techniques on top of selected example sets, offer promising directions for future work.
A MULTI-DOMAIN ACTIVE LEARNING TASK
In this section, we enumerate real-world settings in which a practitioner would be interested in multi-domain active learning methods. We expect this active learning variant to be applicable to cold starts, rare classes, personalization, and settings where the modelers are constrained by privacy considerations or a lack of labelers with domain expertise.
• In the cold start scenario, for a new NLP problem, there is often little to no target data available yet (labeled or unlabelled), but there are related sources of unlabelled data to try. Perhaps an engineer has collected small amounts of training data from an internal population. Because the data size is small, the engineer is considering out-of-domain samples collected from user studies, repurposed from other projects, scraped from the web, etc.
• In the rare class scenario, take an example of a new platform/forum/social media company classifying hate speech against a certain minority group. Perhaps the prevalence of positive, in-domain samples on the social media platform is small, so an engineer uses out-domain samples from books, other social media platforms, or from combing the internet.
• In a personalization setting, like spam filtering or auto-completion on a keyboard, each user may only have a couple hundred of their own samples, but out-domain samples from other users may be available in greater quantities.
• In the privacy constrained setting, a company may collect data from internal users, user studies, and beta testers; however, a commitment to user privacy may incentivize the company to keep the amount of labeled data from the target user population low.
• Lastly, labeling in-domain data may require certain domain knowledge, which would lead to increased expenses and difficulty in finding annotators. As an example, take a text classification problem in a rare language. It may be easy to produce out-domain samples by labeling English text and machine translating it to the rare language, whereas generating in-domain labeled data would require annotators who are fluent in the rare language.
In each of these settings, target distribution data may not be amply available, but semi-similar unlabelled domains often are. This rules out many domain adaptation methods that rely heavily on unlabelled target data.
We were able to simulate the base conditions of this problem with sentiment analysis and question answering datasets, since they are rich in domain diversity. We believe these datasets are reasonable proxies to represent the base problem, and yield general-enough insights for a practitioner starting on this problem.
B REPRODUCIBILITY
B.1 DATASETS AND MODEL TRAINING
We choose question answering and sentiment analysis tasks as they are core NLP tasks, somewhat representative of many classification and information-seeking problems. Multi-domain active learning is not limited to any subset of NLP tasks, so we believe these datasets are a reasonable proxy for the problem.
For question answering, the MRQA shared task (Fisch et al., 2019) includes SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).
For the sentiment analysis classification task, we use Amazon datasets following (Blitzer et al., 2007) and (Ruder & Plank, 2018), as well as Yelp reviews (Asghar, 2016) and IMDB movie reviews datasets (Maas et al., 2011).9 Both question answering and sentiment analysis datasets are described in Table 1.
For reproducibility, we share our hyper-parameter selection in Table 3. Hyper-parameters are taken from Longpre et al. (2019) for training all Question Answering (QA) models, since their parameters are tuned for the same datasets in the MRQA Shared Task. We found these choices to provide stable and strong results across all datasets. For sentiment analysis, we initially experimented on a small portion of the datasets to arrive at a strong set of base hyper-parameters to tune from.
9 https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/
Our BERT question answering modules build upon the standard PyTorch (Paszke et al., 2019) implementations from HuggingFace,10 and are trained on one NVIDIA Tesla V100 GPU.
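As a minimal illustration of this setup (the checkpoint name and the question/context strings below are placeholders, not our tuned configuration), a span-extraction QA forward pass with the transformers library looks roughly as follows:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
model.eval()

question = "Who wrote Hamlet?"  # placeholder example
context = "Hamlet is a tragedy written by William Shakespeare."

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the highest-scoring start and end token positions
start = outputs.start_logits.argmax(dim=-1).item()
end = outputs.end_logits.argmax(dim=-1).item()
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])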
B.2 EXPERIMENTAL DESIGN
For more detail regarding the experimental design, we include Algorithm 1, using the notation described in the multi-domain active learning task definition.
Algorithm 1 EXPERIMENTAL DESIGN
 1: for each acquisition function A_f do
 2:   for each target set D_T ∼ D do
 3:     D_T^train, D_T^dev, D_T^test ∼ D_T
 4:     D_S := {x ∈ D | x ∉ D_T}
 5:     M_A ← TRAIN(D_T^train, D_T^dev)
 6:     D_chosen ← [Rank_{x∈D_S} A_f(x, M_A)][:n]
 7:     D_final-train := D_T^train ∪ D_chosen
 8:     M ← GRIDSEARCH(D_final-train, D_T^dev)
 9:     s_T^{A_f} ← M(D_T^test)
10:   end for
11: end for
12: return scores dictionary (A_f, D_T) → s_T^{A_f}
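For readers who prefer code, the following is a schematic Python rendering of Algorithm 1; the train, grid_search, and evaluate callables are hypothetical stand-ins for the corresponding steps above, not our actual implementation:

def run_experiment(domains, acquisition_fns, n, train, grid_search, evaluate):
    """Schematic version of Algorithm 1. Returns {(method, target): test score}.

    domains:         dict mapping domain name -> (train, dev, test) example lists
    acquisition_fns: dict mapping method name -> callable A_f(x, M_A) -> score
    """
    scores = {}
    for method, a_f in acquisition_fns.items():
        for target, (d_train, d_dev, d_test) in domains.items():
            # D_S: the unlabeled pool drawn from every other domain
            d_s = [x for name, (tr, _, _) in domains.items()
                   if name != target for x in tr]
            m_a = train(d_train, d_dev)                     # acquisition model M_A
            ranked = sorted(d_s, key=lambda x: a_f(x, m_a), reverse=True)
            final_train = list(d_train) + ranked[:n]        # D_T^train ∪ D_chosen
            m = grid_search(final_train, d_dev)             # tuned final model M
            scores[(method, target)] = evaluate(m, d_test)  # s_T^{A_f}
    return scores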
C ACQUISITION FUNCTIONS
C.1 TASK AGNOSTIC EMBEDDINGS
To compute the semantic similarity between two examples, we computed the example embeddings using the pre-trained model from a sentence-transformer (Reimers & Gurevych, 2019). We used the RoBERTa-large model, which has 24 layers, a hidden size of 1024, 16 attention heads, and 355M parameters, fine-tuned on the SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and STS-Benchmark (Cer et al., 2017) datasets. Its training procedure is documented at https://www.sbert.net/examples/training/sts/README.html.
10 https://github.com/huggingface/transformers
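A rough sketch of how these embeddings could be computed and used for the KNN acquisition score is given below; the checkpoint name is one plausible sentence-transformers model matching the above description, and the example texts are placeholders:

import numpy as np
from sentence_transformers import SentenceTransformer

# one plausible RoBERTa-large checkpoint fine-tuned on NLI + STS-Benchmark
model = SentenceTransformer("roberta-large-nli-stsb-mean-tokens")

target_texts = ["a small labeled target example", "another target example"]
source_texts = ["candidate source example 1", "candidate source example 2"]

target_mean = model.encode(target_texts).mean(axis=0)  # mean embedding of D_T^train
source_embs = model.encode(source_texts)               # shape: (num_source, dim)

# cosine similarity of each source example to the mean target embedding (KNN score)
sims = source_embs @ target_mean / (
    np.linalg.norm(source_embs, axis=1) * np.linalg.norm(target_mean))
ranking = np.argsort(-sims)  # most similar source examples first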
C.2 BAYESIAN ACTIVE LEARNING BY DISAGREEMENT (BALD)
We note that Siddhant and Lipton’s presentation of BALD is more closely related to the Variation Ratios acquisition function described in Gal et al. (2017) than to the description of dropout as a Bayesian approximation given in Gal & Ghahramani (2016). In particular, Gal et al. (2017) found that Variation Ratios performed on par with or better than Houlsby’s BALD on MNIST, but was less suitable for ISIC2016.
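For concreteness, a minimal sketch of this disagreement-based BALD score follows; it assumes a classifier whose output exposes a .logits field (e.g. a HuggingFace sequence classification model), and keeps the model in train mode so that dropout remains active at inference time:

import torch

def bald_disagreement(model, inputs, T=20):
    """1 minus the fraction of T stochastic passes agreeing with the modal class."""
    model.train()  # dropout stays active during the forward passes
    preds = []
    with torch.no_grad():
        for _ in range(T):
            preds.append(model(**inputs).logits.argmax(dim=-1))
    preds = torch.stack(preds)        # shape: (T, batch)
    modal, _ = preds.mode(dim=0)      # most frequent prediction per example
    agreement = (preds == modal).float().mean(dim=0)
    return 1.0 - agreement            # higher = more disagreement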
C.3 DISCRIMINATIVE ACTIVE LEARNING MODEL (DAL) TRAINING
DAL’s training set is created using the methods detailed in Section 4.2. The training set is then partitioned into five equally sized folds. In order to predict on data that is not used to train the discriminator, we use 5-fold cross-validation. The model is trained on four folds, balancing the positive and negative classes using sample weights. The classifier then predicts on the single held-out fold. This process is repeated five times so that each example is in the held-out fold exactly once. Custom model parameters are shown in Table 4; model parameters not shown in the table are the default XGBClassifier parameters in xgboost 1.0.2. We chose this model and architecture because the small amount of target domain examples calls for a simple model to prevent overfitting, and because decision trees can capture collective interactions between features.
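A minimal sketch of this out-of-fold scoring procedure follows; since the Table 4 overrides are not reproduced here, the XGBClassifier defaults are used, and X and y are assumed to be NumPy arrays of example features and binary origin labels as described in Section 4.2:

import numpy as np
from sklearn.model_selection import StratifiedKFold
from xgboost import XGBClassifier

def discriminator_scores(X, y, n_splits=5, seed=0):
    """Out-of-fold P(origin = 1) for every example, via 5-fold cross-validation."""
    scores = np.zeros(len(y), dtype=float)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, held_idx in skf.split(X, y):
        y_tr = y[train_idx]
        # per-example weights that balance the positive and negative classes
        pos_weight = (y_tr == 0).sum() / max((y_tr == 1).sum(), 1)
        weights = np.where(y_tr == 1, pos_weight, 1.0)
        clf = XGBClassifier()  # Table 4 overrides omitted; defaults otherwise
        clf.fit(X[train_idx], y_tr, sample_weight=weights)
        scores[held_idx] = clf.predict_proba(X[held_idx])[:, 1]
    return scores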
D FULL METHOD PERFORMANCES
We provide a full breakdown of final method performances in Tables 5 and 6.
E KENDALL’S TAU
E.1 DEFINITION
Kendall’s Tau is a statistic that measures the rank correlation between two quantities. Let X and Y be random variables with (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) as observations drawn from the joint distribution. Given a pair (x_i, y_i) and (x_j, y_j), where i ≠ j, we have:

(y_j − y_i) / (x_j − x_i) > 0 : the pair is concordant
(y_j − y_i) / (x_j − x_i) < 0 : the pair is discordant
(y_j − y_i) / (x_j − x_i) = 0 : the pair is a tie

Let n_c be the number of concordant pairs and n_d the number of discordant pairs, and let each tie add 0.5 to both the concordant and discordant pair counts. Then, Kendall’s Tau is computed as:11

τ = (n_c − n_d) / (n_c + n_d)
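In practice, this statistic can be computed directly with SciPy (note that scipy.stats.kendalltau defaults to the tau-b variant, whose tie handling differs slightly from the definition above). The two score lists below are hypothetical acquisition scores over the same five source examples:

from scipy.stats import kendalltau

# hypothetical acquisition scores from two methods over the same source examples
scores_a = [0.91, 0.42, 0.77, 0.10, 0.55]
scores_b = [0.88, 0.35, 0.60, 0.22, 0.49]

tau, p_value = kendalltau(scores_a, scores_b)
print(f"Kendall's tau = {tau:.3f} (p = {p_value:.3f})")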
E.2 INTER-FAMILY COMPARISON
Here, we extend our comparison of example rankings by presenting plots of Kendall Tau scores normalized by intra-family scores in Figure 3. For the sentiment setting, the ranges of intra-family Kendall Tau coefficients are smaller than in the MRQA setting. Methods in the uncertainty family have especially strong correlations with each other and much weaker correlations with methods outside the family. For H-divergence-based methods, intra-family correlations are not as strong as for the uncertainty family; in fact, the Kendall Taus between DAL-E/KNN and DAL-T/KNN appear to be slightly within the H-divergence intra-family range.
11 https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/kendell.htm
Furthermore, intra-family ranges are quite large for all families in the MRQA setting. For each method, there is at least one other method from a different family with which it had a higher Kendall Tau coefficient than the least similar methods of its own family.
F RELATING DOMAIN DISTANCES TO PERFORMANCE
We investigated why certain methods work better than others. One hypothesis is that there exists a relationship between target-source domain distances and method performance. We estimated the distance between two domains by computing the Wasserstein distance between random samples of 3k example embeddings from each domain (see the sketch below). We experimented with two kinds of example embeddings: 1. a task-agnostic embedding computed by the sentence transformer used in the KNN method, and 2. a task-specific embedding computed by a model trained with the source domain, as used in the DAL∗ method. Given that there are k − 1 source domains for each target domain, we tried aggregating domain distances over their mean, minimum, maximum, and variance to see if Wasserstein domain distances could be indicative of relative performance across all methods.
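Since the exact implementation is not pinned down here, the following is one plausible sketch using the POT (Python Optimal Transport) library to compute an exact Wasserstein distance between two samples of embeddings:

import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def domain_wasserstein(emb_a, emb_b):
    """Exact Wasserstein distance between two sets of example embeddings."""
    # uniform probability mass over the sampled embeddings of each domain
    a = np.full(len(emb_a), 1.0 / len(emb_a))
    b = np.full(len(emb_b), 1.0 / len(emb_b))
    cost = ot.dist(emb_a, emb_b, metric="euclidean")  # pairwise ground cost
    return ot.emd2(a, b, cost)  # optimal transport cost = Wasserstein distance

# e.g. with two hypothetical (3000, 1024) embedding matrices:
# dist = domain_wasserstein(target_sample, source_sample)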
Figure 4, Figure 5, Figure 6, and Figure 7 each show, for a subset of methods, the relationship between a domain distance aggregation and the final performance gap relative to the best-performing method. Unfortunately, we found no consistent relationship for either MRQA or the sentiment classification tasks. We believe this result arose either because our estimated domain distances were not reliable measures of domain relevance, or because the aggregated domain distances are not independently sufficient to discern relative performance differences across methods.
Figure 5: Minimum Wasserstein domain distance vs method performance.
Figure 6: Maximum Wasserstein domain distance vs method performance. | 1. What is the focus of the paper regarding combining data from multiple sources?
2. What are the strengths and weaknesses of the proposed approach compared to other techniques in active learning, domain shift detection, and multi-domain sampling?
3. Do you have any concerns regarding the experimental design and its applications in question answering and sentiment analysis?
4. How does the reviewer assess the clarity and organization of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper compares techniques used in active learning (AL), domain shift detection (DS), and multi-domain sampling to combine data from multiple sources. The experiments are conducted on datasets from question answering and sentiment analysis.
The paper is well organized and easy to follow. However, the contribution of this paper is not clear. Specifically, I would expect the authors to provide more detailed recommendations for AL, DS, and multi-domain sampling in terms of sampling techniques and the population of different sources for a given application.
My other concern is that the motivation of the experimental design is not clear. Why do the authors consider question answering and sentiment analysis as the applications? The paper requires more analysis of the experimental results, such as Figure 1 and the tables in Section D.
1. What is the focus of the paper regarding active learning?
2. What are the strengths of the paper, particularly in its thorough study and experimental analysis?
3. What are the weaknesses of the paper, especially concerning technical novelty and connection to prior works?
4. Do you have any concerns about the paper's contribution to the field of active learning?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies active learning with multiple domains to select examples for two natural language tasks, sentiment analysis and question answering. By benchmarking 18 acquisition strategies from 4 families, the paper shows that the H-Divergence methods achieve the best improvements overall (including a proposed variant, DAL-E, which considers both domain selection and example selection).
Review
Unlike most previous work on active learning (AL), which focuses on a single domain, this paper targets the multi-domain AL setting, a more realistic setting. The paper presents an extremely thorough study of different active learning methods (18 acquisition functions from 4 different families of methods) and rigorous experiments & analysis (correlation analysis among rankings of examples, searching for the best combination of source domains) that provide insights into several key questions regarding multi-domain AL.
Overall, the paper is very well written and I quite enjoyed reading it. However, while it is a great experimental report, my main concern is the limited technical novelty of the work beyond the two proposed simple variants (DAL-E derived from DAL-T, and RCA-smoothed derived from RCA; all others are existing AL methods). Plus, part of the analysis on AL is also covered by previous work (e.g., Lowell et al., 2019).
Also, in terms of providing an empirical study of multi-domain AL, it appears [1] studies a similar problem and offers similar analysis. How does the proposed work connect to and differ from it? This is not discussed in the paper.
[1] Multi-Domain Active Learning: A Comparative Study. He et al. https://arxiv.org/pdf/2106.13516v1.pdf (June 2021) |
ICLR | Title
Active Learning over Multiple Domains in Natural Language Tasks
Abstract
Studies of active learning traditionally assume the target and source data stem from a single domain. However, in realistic applications, practitioners often require active learning with multiple sources of out-of-distribution data, where it is unclear a priori which data sources will help or hurt the target domain. We survey a wide variety of techniques in active learning (AL), domain shift detection (DS), and multi-domain sampling to examine this challenging setting for question answering and sentiment analysis. We ask (1) what family of methods are effective for this task? And, (2) what properties of selected examples and domains achieve strong results? Among 18 acquisition functions from 4 families of methods, we findHDivergence methods, and particularly our proposed variant DAL-E, yield effective results, averaging 2-3% improvements over the random baseline. We also show the importance of a diverse allocation of domains, as well as room-for-improvement of existing methods on both domain and example selection. Our findings yield the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning for natural language tasks.
1 INTRODUCTION
New natural language problems, outside the watershed of core NLP, are often strictly limited by a dearth of labeled data. While unlabeled data is frequently available, it is not always from the same source as the target distribution. This is particularly prevalent for tasks characterized by (i) significant distribution shift over time, (ii) personalization for user subgroups, or (iii) different collection mediums (see examples in Section A).
A widely-used solution to this problem is to bootstrap a larger training set using active learning (AL): a method to decide which unlabeled training examples should be labeled on a fixed annotation budget (Cohn et al., 1996; Settles, 2012). However, most active learning literature in NLP assumes the unlabeled source data is drawn from the same distribution as the target data (Dor et al., 2020). This simplifying assumption avoids the frequent challenges faced by practitioners in multi-domain active learning. In this realistic setting, there are multiple sources of data (i.e. domains) to consider. In this case, it’s unclear whether to optimize for homogeneity or heterogeneity of selected examples. Secondly, is it more effective to allocate an example budget per domain, or treat examples as a single unlabeled pool? Where active learning baselines traditionally select examples the model is least confident on (Settles, 2009), in this setting it could lead to distracting examples from very dissimilar distributions.
In this work we empirically examine four separate families of methods (uncertainty-based, H-Divergence, reverse classification accuracy, and semantic similarity detection) over several question answering and sentiment analysis datasets, following (Lowell et al., 2019; Elsahar & Gallé, 2019b), to provide actionable insights to practitioners facing this challenging variant of active learning for natural language. We address the following questions:
1. What family of methods are effective for multi-domain active learning?
2. What properties of the example and domain selection yield strong results?
While previous work has investigated similar settings (Saha et al., 2011; Liu et al., 2015; Zhao et al., 2021; Kirsch et al., 2021) we contribute, to our knowledge, the first rigorous formalization and broad survey of methods within NLP. We find that many families of techniques for active learning and
domain shift detection fail to reliably beat random baselines in this challenging variant of active learning, but certain H-Divergence methods are consistently strong. Our analysis identifies stark dissimilarities of these methods’ example selection, and suggests domain diversity is an important factor in achieving strong results. These results may serve as a guide to practitioners facing this problem, suggesting particular methods that are generally effective and properties of strategies that increase performance.
2 RELATED WORK
Active Learning in NLP Lowell et al. (2019) shows how inconsistent active learning methods are in NLP, even under regular conditions. However, Dor et al. (2020); Siddhant & Lipton (2018) survey active learning methods in NLP and find notable gains over random baselines. Kouw & Loog (2019) survey domain adaptation without target labels, similar to our setting, but for non-language tasks. We reference more active learning techniques in Section 4.
Domain Shift Detection Elsahar & Gallé (2019b) attempt to predict accuracy drops due to domain shifts and Rabanser et al. (2018) surveys different domain shift detection methods. Arora et al. (2021) examine calibration and density estimation for textual OOD detection.
Active Learning under Distribution Shift A few previous works investigated active learning under distribution shifts, though mainly in image classification, with single source and target domains. Kirsch et al. (2021) finds that BALD, which is often considered the state of the art for unshifted domain settings, can get stuck on irrelevant source domain or junk data. Zhao et al. (2021) investigates label shift, proposing a combination of predicted class balanced subsampling and importance weighting. Saha et al. (2011), whose approach corrects joint distribution shift, relies on the covariate shift assumption. However, in practical settings, there may be general distributional shifts where neither the covariate shift nor label shift assumptions hold.
Transfer Learning from Multiple Domains Attempts to better understand how to handle shifted domains for better generalization or target performance has motivated work in question answering (Talmor & Berant, 2019; Fisch et al., 2019; Longpre et al., 2019; Kamath et al., 2020) and classification tasks (Ruder & Plank, 2018; Sheoran et al., 2020). Ruder & Plank (2017) show the benefits of both data similarity and diversity in transfer learning. Rücklé et al. (2020) find that sampling from a wide variety of source domains (data scale) outperforms sampling similar domains in question answering. He et al. (2021) investigate a version of multi-domain active learning where models are trained and evaluated on examples from all domains, focusing on robustness across domains.
3 MULTI-DOMAIN ACTIVE LEARNING
Suppose we have multiple domains D_1, D_2, ..., D_k.¹ Let one of the k domains be the target set D_T, and let the other k − 1 domains comprise the source set D_S = ⋃_{i ≠ T} D_i.

Given:

• Target: Small samples of labeled data points (x, y) from the target domain: D_T^{train}, D_T^{dev}, D_T^{test} ∼ D_T.²
• Source: A large sample of unlabeled points (x) from the source domains: D_S = ⋃_{i ≠ T} D_i.

Task:

1. Choose n samples from D_S to label: D_S^{chosen} ⊂ D_S, |D_S^{chosen}| = n, selected by argmax_{x ∈ D_S} A_f(x), where A_f is an acquisition function: a policy to select unlabeled examples from D_S for labeling.
2. Train a model M on D^{final−train} = D_T^{train} ∪ D_S^{chosen}, validating on D_T^{dev}.
3. Evaluate M on D_T^{test}, giving score s.

¹ We define a domain as a dataset collected independently of the others.
² |D_T^{train}| = 2000 to simulate a small but reasonable quantity of labeled, in-domain training data for active learning scenarios.
For Step 1, the practitioner chooses the n samples with the highest scores according to their acquisition function A_f. M is fine-tuned on these n samples, then evaluated on D_T^{test} to demonstrate A_f's ability to choose relevant out-of-distribution training examples.
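As a concrete illustration of this protocol, the following is a minimal Python sketch of one experiment run. All helpers here are hypothetical stand-ins, injected as callables, for the BERT fine-tuning pipeline described in Section 5; only the select-label-train-evaluate control flow is taken from the task definition above.

from typing import Callable, List, Sequence

def run_experiment(
    d_train_t: List,        # labeled target training data
    d_dev_t: List,          # labeled target dev data
    d_test_t: List,         # labeled target test data
    d_source: Sequence,     # unlabeled pool drawn from all source domains
    a_f: Callable,          # acquisition function: example -> score
    annotate: Callable,     # hypothetical labeling oracle: examples -> labeled examples
    train_model: Callable,  # hypothetical trainer: (train, dev) -> model
    evaluate: Callable,     # hypothetical scorer: (model, test) -> float
    n: int,                 # labeling budget
) -> float:
    # Step 1: rank the unlabeled source pool by acquisition score, keep the top n.
    ranked = sorted(d_source, key=a_f, reverse=True)
    d_chosen = annotate(ranked[:n])
    # Step 2: train on the target training set plus the newly labeled examples.
    model = train_model(d_train_t + d_chosen, d_dev_t)
    # Step 3: report the target-domain test score s.
    return evaluate(model, d_test_t)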
4 METHODS
We identify four families of methods relevant to active learning over multiple shifted domains. Uncertainty methods are common in standard active learning for measuring example uncertainty or familiarity to a model; H-Divergence techniques train classifiers for domain shift detection; Semantic Similarity Detection finds data points similar to points in the target domain; and Reverse Classification Accuracy approximates the benefit of training on a dataset. A limitation of our work is we do not cover all method families, such as domain adaptation, just those we consider most applicable. We derive ∼18 active learning variants, comprising the most prevalent and effective from prior work, and novel extensions/variants of existing paradigms for the multi-domain active learning setting (see KNN, R̃CA and DAL-E).
Furthermore, we split the families into two acquisition strategies: Single Pool Strategy and Domain Budget Allocation. Single Pool Strategy, comprising the first three families of methods, treats all examples as coming from one single unlabeled pool. Domain Budget Allocation, consisting of the Reverse Classification Accuracy methods, simply allocates an example budget for each domain.
We enumerate the acquisition methods A_f below. Each method produces a full ranking of examples in the source set D_S. To rank examples, most acquisition methods train an acquisition model, M_A, using the same model architecture as M. M_A is trained on all samples from D_T^{train}, except for DAL and KNN, which split D_T^{train} into two equal segments, one for training M_A and one for an internal model. Some methods have both ascending and descending orders of these rankings (denoted by ↑ and ↓ respectively in the method abbreviations), to test whether similar or distant examples are preferred in a multi-domain setting.
Certain methods use vector representations of candidate examples. We benchmark with both task-agnostic and task-specific encoders. The task-agnostic embeddings are taken from the last layer's CLS token in Reimers & Gurevych (2019)'s sentence encoder (see Appendix for details). The task-specific embeddings are taken from the last layer's CLS token in the trained model M_A.
The motivation of the task-specific variant is that each example's representation will capture task-relevant differences between examples while ignoring irrelevant differences.³ The versions of DAL and KNN methods that use task-specific vectors are denoted with "∗" in their abbreviation. Otherwise, they use task-agnostic vectors.
4.1 UNCERTAINTY METHODS
These methods measure the uncertainty of a trained model on a new example. Uncertainty can reflect either aleatoric uncertainty, due to ambiguity inherent in the example, or epistemic uncertainty, due to limitations of the model (Kendall & Gal, 2017). For the following methods, let Y be the set of all possible labels produced by the model M(x) and l_y be the logit value for y ∈ Y.

Confidence (CONF) A model's confidence P(y|x) in its prediction y estimates the difficulty or unfamiliarity of an example (Guo et al., 2017; Elsahar & Gallé, 2019a).

Entropy (ENTR) Entropy applies Shannon entropy (Shannon, 1948) to the full distribution of class probabilities for each example, formalized as A_ENTR.
A_CONF(x, M_A) = −max_y P(y|x)

A_ENTR(x, M_A) = − Σ_{i=1}^{|Y|} P(y_i|x) · log P(y_i|x)
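For concreteness, a small NumPy sketch of these two scores, computed from a batch of model logits (assuming a standard softmax output layer; the function names are ours):

import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def a_conf(logits: np.ndarray) -> np.ndarray:
    # Negative maximum class probability: a higher score means a less confident model.
    return -softmax(logits).max(axis=-1)

def a_entr(logits: np.ndarray) -> np.ndarray:
    # Shannon entropy of the predictive distribution P(y|x).
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)  # epsilon guards against log(0)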
Energy-based Out-of-Distribution Detection (ENG) Liu et al. (2020) use an energy-based score to distinguish between in- and out-distribution examples. They demonstrate this method is less susceptible to overconfidence issues of softmax approaches.
³ For instance, consider that in one domain every example is prefixed with "Text:" while in the other it is not — telling the two apart is trivial, but the examples could be near-identical with respect to the task.
Bayesian Active Learning by Disagreement (BALD) Gal & Ghahramani (2016) introduce estimating uncertainty by measuring prediction disagreement over multiple inference passes, each with a distinct dropout mask. BALD isolates epistemic uncertainty, as the model would theoretically produce stable predictions over inference passes given sufficient capacity. We conduct T = 20 forward passes on x; ŷ_t = argmax_i P(y_i|x)_t is the predicted class on the t-th model pass on x. Following Lowell et al. (2019), ties are broken by taking the mean label entropy over all T runs.
A_ENG(x, M_A) = − log Σ_{y ∈ Y} e^{l_y}

A_BALD(x, M_A) = 1 − count(mode_{t ∈ T}(ŷ_t)) / T
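A sketch of the corresponding computations, with the BALD score derived from the stacked predictions of T stochastic forward passes (dropout left active at inference):

import numpy as np

def a_eng(logits: np.ndarray) -> np.ndarray:
    # Energy score: negative log-sum-exp over the logits l_y (stable form).
    m = logits.max(axis=-1)
    return -(m + np.log(np.exp(logits - m[..., None]).sum(axis=-1)))

def a_bald(dropout_preds: np.ndarray) -> np.ndarray:
    # dropout_preds: integer array of shape (T, num_examples) holding the argmax
    # class from each of T stochastic passes. Score = 1 - count(modal class) / T.
    T = dropout_preds.shape[0]
    scores = []
    for col in dropout_preds.T:  # per-example predictions across the T passes
        _, counts = np.unique(col, return_counts=True)
        scores.append(1.0 - counts.max() / T)
    return np.asarray(scores)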
4.2 H-DIVERGENCE METHODS
Ben-David et al. (2006; 2010) formalize the divergence between two domains as the H-Divergence, which they approximate as the difficulty for a discriminator to differentiate between the two.⁴ Discriminative Active Learning (DAL) applies this concept to the active learning setting (Gissin & Shalev-Shwartz, 2019).
We explore variants of DAL, using an XGBoost decision tree (Chen & Guestrin, 2016) as the discriminator model g.⁵ For the following methods, let D_T^{train−B} be the 1k examples from D_T^{train} that were not used to train M_A. Let E be an encoder function, which can be task-specific or task-agnostic as described above. We use samples from both D_T^{train−B} and D_S to train the discriminator. We assign samples origin labels l, which depend on the DAL variant. Samples from D_S with discriminator predictions closest to 1 are selected for labeling. The acquisition scoring function for each DAL method and the training set definition, respectively, are:
A_DAL(x, g, E) = g(E(x)),   with discriminator training set {(E(x), l) | x ∈ D_T^{train−B} ∪ D_S}
Discriminative Active Learning — Target (DAL-T) DAL-T trains a discriminator g to distinguish between target examples in D_T^{train−B} and out-of-distribution examples from D_S. For DAL-T, l = 1_{D_T^{train−B}}(x).
Discriminative Active Learning — Error (DAL-E) DAL-E is a novel variant of DAL. DAL-E's approach is to find examples that are similar to those in the target domain that M_A misclassified. We partition D_T^{train−B} further into erroneous samples D_T^{err} and correct samples D_T^{corr}, where D_T^{train−B} = D_T^{err} ∪ D_T^{corr}. For DAL-E, l = 1_{D_T^{err}}(x).
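A minimal sketch of the shared DAL scoring step. For DAL-T the positive class holds all held-out target examples; for DAL-E it holds only the target examples M_A misclassified. The 5-fold cross-validation the paper uses to score examples the discriminator never trained on (Appendix C.3) is omitted here for brevity, and the hyperparameters are illustrative, not the paper's.

import numpy as np
from xgboost import XGBClassifier

def dal_scores(positive_emb: np.ndarray, source_emb: np.ndarray) -> np.ndarray:
    # Train a discriminator g to separate the positive class (l = 1: target
    # examples for DAL-T, misclassified target examples for DAL-E) from the
    # unlabeled source pool (l = 0), then score the source examples.
    X = np.vstack([positive_emb, source_emb])
    y = np.concatenate([np.ones(len(positive_emb)), np.zeros(len(source_emb))])
    g = XGBClassifier(n_estimators=100)
    g.fit(X, y)
    # Source examples with predictions closest to 1 are selected for labeling.
    return g.predict_proba(source_emb)[:, 1]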
4.3 REVERSE CLASSIFICATION ACCURACY
RCA Reverse Classification Accuracy (RCA) estimates how effective a source set D_i, i ∈ S, is as training data for the target test set D_T (Fan & Davidson, 2006; Elsahar & Gallé, 2019b). Without gold labels for D_i, we compute soft labels instead, using the BERT-Base M_A trained on the small labeled set D_T^{train}. We then train a child model M_i on D_i using these soft labels, and evaluate the child model on D_T^{dev}. RCA chooses examples randomly from whichever domain i produced the highest score s_i.
A_RCA(x) = 1_{D_{argmax_{i ∈ S} s_i}}(x)

R̃CA: τ_i = s_i / (s_T − s_i),   |D_i^{chosen}| = τ_i / Σ_j s_j
RCA-Smoothed (R̃CA) Standard RCA only selects examples from one domain D_i. We develop a novel variant which samples from multiple domains, proportionally to their relative performance on the target domain D_T^{dev}. RCA-Smoothed (R̃CA) selects |D_i^{chosen}| examples from source domain i, based on the relative difference between the performance s_i (of child model M_i trained on domain i with pseudo-labels from M_A) on the target domain and the performance s_T of a model trained directly on the target domain D_T^{dev}. Since these strategies directly estimate model performance on the target domain resulting from training on each source domain, RCA and R̃CA are strong Domain Budget Allocation candidates.
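A sketch of the resulting budget allocation. Two simplifying assumptions of ours, not fixed by the text above, are flagged in the comments: we assume s_T > s_i for every source domain, and we normalize the τ_i so the per-domain budgets sum to the total budget n.

from typing import List

def rca_smoothed_budgets(source_scores: List[float], s_t: float, n: int) -> List[int]:
    # tau_i = s_i / (s_T - s_i): source domains whose child models approach
    # target-domain performance receive proportionally larger budgets.
    taus = [s_i / (s_t - s_i) for s_i in source_scores]  # assumes s_t > s_i for all i
    total = sum(taus)
    # Normalizing by the sum of taus (our assumption) so the budgets sum to ~n.
    return [round(n * t / total) for t in taus]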
4.4 NEAREST NEIGHBOUR / SEMANTIC SIMILARITY DETECTION (KNN)
Nearest neighbour methods (KNN) are used to find examples that are semantically similar. Using sentence encoders, we can search the source set D_S to select the top k nearest examples by cosine similarity to the target set. We represent the target set as the mean embedding of D_T^{train}. For question answering, where an example contains two sentences (the query and context), we refer to KNN-Q where we only encode the query text, KNN-C where we only encode the context text, or KNN-QC where we encode both concatenated together. The acquisition scoring function per example uses either a task-specific or task-agnostic encoder E:
A_KNN(x, E) = CosSim(E(x), Mean(E(D_T^{train})))
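A sketch of this scoring, given precomputed embedding matrices (e.g. from the sentence encoder of Reimers & Gurevych (2019) in the task-agnostic case):

import numpy as np

def knn_scores(source_emb: np.ndarray, target_emb: np.ndarray) -> np.ndarray:
    # Cosine similarity of each source embedding to the mean target embedding.
    centroid = target_emb.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    normed = source_emb / np.linalg.norm(source_emb, axis=1, keepdims=True)
    return normed @ centroid  # one score per source example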
5 EXPERIMENTS
Experiments are conducted on two common NLP tasks: question answering (QA) and sentiment analysis (SA), each with several available domains.
Question Answering We employ 6 diverse QA datasets from the MRQA 2019 workshop (Fisch et al., 2019), shown in Table 1 (left).⁶ We sample 60k examples from each dataset for training, 5k for validation, and 5k for testing. Questions and contexts are collected with varying procedures and sources, representing a wide diversity of datasets.
Sentiment Analysis For the sentiment analysis classification task, we follow (Blitzer et al., 2007) and (Ruder & Plank, 2018) by randomly selecting 6 Amazon multi-domain review datasets, as well as Yelp reviews (Asghar, 2016) and IMDB movie reviews datasets (Maas et al., 2011).⁷ Altogether, these datasets exhibit wide diversity based on review length and topic (see Table 1). We normalize all datasets to have 5 sentiment classes: very negative, negative, neutral, positive, and very positive. We sample 50k examples for training, 5k for validation, and 5k for testing.
Experimental Setup To evaluate methods for the multi-domain active learning task, we conduct the experiment described in Section 3 for each acquisition method, rotating each domain as the target set. Model M , a BERT-Base model (Devlin et al., 2019), is chosen via hyperparameter grid search over learning rate, number of epochs, and gradient accumulation. The large volume of experiments entailed by this search space limits our capacity to benchmark performance variability due to isolated factors (the acquisition method, the target domain, or fine-tuning final models). However, our hyperparameter search closely mimics the process of an ML practitioner looking to select a best method and model, so we believe our experiment design captures a fair comparison among methods. See Algorithm 1 in Appendix Section B for full details.
⁴ The approximation is also referred to as Proxy A-Distance (PAD) by Elsahar & Gallé (2019b).
⁵ Hyperparameter choices and training procedures are detailed in the Appendix.
⁶ The workshop pre-processed all datasets into a similar format, for fully answerable, span-extraction QA: https://github.com/mrqa/MRQA-Shared-Task-2019.
⁷ https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/.
6.1 COMPARING ACQUISITION METHODS
Results in Figure 1 show the experiments described in Section 5: benchmarking each acquisition method for multi-domain active learning. We observe for both question answering (QA) and sentiment analysis (SA), most methods manage to outperform the no-extra-labelled data baseline (0% at the y-axis) and very narrowly outperform the random selection baseline (red line). Consistent with prior work (Lowell et al., 2019), active learning strategies in NLP have brittle and inconsistent improvements over random selection. Our main empirical findings, described in this section, include:
• H-Divergence, and particularly DAL-E variants, consistently outperform baselines and other families of methods.
• The ordering of examples in Uncertainty methods depends significantly on the diversity of the source domains. BALD variants perform best among the available options.
• Task-agnostic representations, used in KNN or DAL variants, provide consistently strong results on average, but task-specific representations significantly benefit certain target sets.
• Different families of methods rely on orthogonal notions of relevance in producing their example rankings.
H-Divergence methods categorically achieved the highest and most reliable scores, both as a family and individual methods, represented in the top 3 individual methods 11 / 18 times for QA, and 20 / 24 times for SA. For QA, BALD↑ and DAL-E∗ had the best mean and median scores respectively, and for SA DAL-E achieved both the best mean and median scores. Among these methods, our proposed DAL-E variants routinely outperform DAL-T variants by a small margin on average, with equivalent training and tuning procedures. We believe this is because DAL-E captures both notions of domain similarity and uncertainty. By design it prioritizes examples that are similar to in-domain samples, but also avoids those which are uninformative, because the model already performs well on them.
Among Uncertainty methods, for SA methods which select for higher uncertainty vastly outperformed those which selected for low uncertainty. The opposite is true for QA. This suggests the
diversity of QA datasets contains more extreme (harmful) domain shift than the (mostly Amazon-based) SA datasets.⁸ In both settings, the right ordering of examples with BALD (epistemic uncertainty) achieves the best results in this family of methods, over the others, which rely on total uncertainty.
Among Reverse Classification Accuracy methods, our R̃CA variant also noticeably outperforms standard RCA and most other methods, aside from DAL and BALD. Combining R̃CA with an example ranking method is a promising direction for future work, given the performance it achieves selecting examples randomly as a Domain Budget Allocation strategy.
Lastly, the Semantic Similarity Detection set of methods only rarely or narrowly exceed random selection. Intuitively, task-agnostic representations (KNN) outperform KNN∗, given the task-agnostic sentence encoder was optimized for cosine similarity.
Embedding Ablations To see the effects of embedding space on KNN and DAL, we used both a task-specific and task-agnostic embedding space. While a task-specific embedding space reduces the examples to features relevant for the task, a task-agnostic embedding space produces generic notions of similarity, unbiased by the task model.
According to Figure 1, KNN outperforms KNN∗. In the QA setting, KNN∗’s median is below the random baseline’s. In both plots, KNN∗’s whiskers extend below 0, indicating that in some cases the method actually chooses source examples that are harmful to target domain performance.
For DAL methods, task-agnostic and task-specific embeddings demonstrated mostly similar median performances. Notably, the boxes and whiskers are typically longer for task-specific methods than task-agnostic methods. This variability indicates certain target datasets may benefit significantly from task-specific embeddings, though task-agnostic embeddings achieve more consistent results.
Comparing Example Rankings For each setting, we quantify how similarly acquisition methods rank examples from D_S. In Figure 2, for each pair of methods, we calculate the Kendall's Tau coefficient between the source example rankings chosen for a target domain, then average this coefficient over the target domains. Kendall's Tau gives a score in [−1, 1], with −1 meaning perfect anti-correlation, 0 meaning no correlation, and 1 meaning perfect correlation between the rankings. Methods from different families show close to no relationship, even if they achieve similar performances, suggesting each family relies on orthogonal notions of similarity to rank example relevance. This suggests there is potential for combining methods from different families for this task in future work.
In Sentiment tasks, all uncertainty methods had highly correlated examples. In QA, ENTR had little correlation with any method. This is likely due to the significantly larger output space for QA models. Compared to only 5 label classes in SA, question answering models distribute their start and end confidences over sequences of up to 512, where there can be multiple valid answer candidates. Embedding space also largely influences the examples that methods chose. DAL methods had higher correlations with each other when they share the same embedding space; i.e. DAL-E’s ranking has a higher correlation with DAL-T than with DAL-E∗.
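A sketch of this pairwise comparison using SciPy. Note that scipy.stats.kendalltau computes the tau-b variant, whose tie handling differs slightly from the convention defined in Appendix E; the function name and data layout here are our assumptions.

import numpy as np
from scipy.stats import kendalltau

def mean_rank_correlation(scores_a, scores_b):
    # scores_a, scores_b: per-target-domain lists of acquisition scores assigned
    # to the same source examples by two methods. Returns the Kendall's Tau
    # coefficient averaged over the target domains.
    taus = []
    for a, b in zip(scores_a, scores_b):
        tau, _pvalue = kendalltau(a, b)
        taus.append(tau)
    return float(np.mean(taus))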
6.2 PROPERTIES OF OPTIMAL EXAMPLE SELECTION
We examine three properties of optimally selected examples: (i) whether selecting from many diverse domains or one single domain leads to better performance, (ii) whether the selection of a domain or the individual examples matters more to performance, and (iii) whether selection strategies can benefit from source-domain information rather than treating samples as drawn from a single pool. Our findings regarding properties of optimal selection, as described in this section, include:
• Selecting a diversity of domains usually outperforms selecting examples from a single domain.
• Acquisition functions such as DAL-E∗ do rely on example selection, mainly to avoid the possibility of large negative outcomes.
• Domain Budget Allocation during selection may improve performance. Surprisingly, even random selection from an “optimal” balance of domains beats our best performing acquisition methods most of the time.
⁸ Accordingly, we attempt to derive a relationship between domain distance and method performance in Appendix F, but find intuitive calculations of domain distance uninterpretable.
Are Many Diverse or One Single Domain Preferable? To answer this question we conduct a full search over all combinations of source datasets. For each target set, we fix 2k in-domain data points and sample all combinations of other source sets in 2k increments, such that altogether there are 10k training data points. For each combination of source sets, we conduct a simple grid search, randomly sampling the source set examples each time, and select the best model, mimicking standard practice among practitioners.
The result is a comprehensive search of all combinations of source sets (in 2k increments) up to 10k training points, so we can rank all combinations of domains per target by performance. Tables 2a and 2b show that the optimal selections, even at increments as coarse as 2k, typically include at least two or more domains to achieve the best performance. However, 1 of 6 targets for QA, and 2 of 8 for the SA tasks, achieve better results selecting all examples from a single domain, suggesting this is a strong baseline if the right source domain is isolated. We also report the mean score of all permutations to demonstrate the importance of selecting the right set of domains over a random combination.
Domains or Examples? Which is more important, to select the right domains or the right examples within some domain? From the above optimal search experiment we see selecting the right combination of domains regularly leads to strong improvements over a random combination of domains. Whether example selection is more important than domain selection may vary depending on the example variety within each domain. We narrow our focus to how much example selection plays a role for one of the stronger acquisition functions: DAL-E∗. We fix the effect of domain selection (the number of examples from each domain) but vary which examples are specifically selected. Using DAL-E∗'s distribution of domains, we compare the mean performance of models trained on its highest-ranked examples against a random set of examples sampled from those same domains. We find a +0.46 ± 0.25% improvement for QA, and +0.12 ± 0.19% for SA. We also compare model performances trained on random selection against the lowest-ranked examples by DAL-E∗. Interestingly, we see a -1.64 ± 0.37% performance decrease for QA, and -1.46 ± 0.56% decrease for sentiment tasks. These results suggest that example selection is an important factor beyond domain selection, especially for avoiding bad example selections.
Single Pool or Domain Budget Allocation Does using information about examples’ domains during selection lead to better results than treating all examples as coming from a single unlabeled pool? Originally, we hypothesized Single Pool Strategy methods would perform better on smaller budget sizes as they add the most informative data points regardless of domain. On the other hand, we thought that if the budget size is large, Domain Budget Allocation would perform best, as they choose source domains closest to the target domain. Based on Tables 1b and 1a, we were not able to draw conclusions about this hypothesis, as each sample size n = {8000, 18000, 28000} produced roughly similar winning methods. Future work should include a wider range of budget sizes with larger changes in method performance between sizes.
In our main set of experiments, the RCA acquisition functions follow the Domain Budget Allocation strategy, while all other acquisition functions follow the Single Pool strategies. Based on median performance, R̃CA outperformed all other methods (we're including BALD here due to inconsistency in performance between QA and SA) except for those in the H-Divergence family. This suggests that using domain information during selection can lead to performance gains.
The Optimal Domain Search experiments, shown in Tables 2a and 2b, further suggest that allocating a budget from each domain can improve performance. For 8 out of our 14 experiments, selecting random samples according to the optimal domain distribution outperforms any active learning strategy. While the optimal domain distributions were not computed a priori in our experiments, this result shows the potential for Domain Budget Allocation strategies. Future work could reasonably improve our results by developing an acquisition function that better predicts the optimal domain distributions than R̃CA, or achieve even greater performance gains by budgeting each domain and then applying an active learning strategy (e.g. DAL-E) within each budget.
7 CONCLUSION
We examine a challenging variant of active learning where target data is scarce, and multiple shifted domains operate as the source set of unlabeled data. For practitioners facing multi-domain active learning, we benchmark 18 acquisition functions, demonstrating the H-Divergence family of methods and our proposed variant DAL-E achieve the best results. Our analysis shows the importance of example selection in existing methods, and also the surprising potential of domain budget allocation strategies. Combining families of methods, or trying domain adaptation techniques on top of selected example sets, offer promising directions for future work.
A MULTI-DOMAIN ACTIVE LEARNING TASK
In this section, we enumerate real-world settings in which a practitioner would be interested in multi-domain active learning methods. We expect this active learning variant to be applicable to cold starts, rare classes, personalization, and settings where the modelers are constrained by privacy considerations or a lack of labelers with domain expertise.
• In the cold start scenario, for a new NLP problem, there is often little to no target data available yet (labeled or unlabelled), but there are related sources of unlabelled data to try. Perhaps an engineer has collected small amounts of training data from an internal population. Because the data size is small, the engineer considers out-of-domain samples, collected from user studies, repurposed from other projects, scraped from the web, etc.
• In the rare class scenario, take an example of a new platform/forum/social media company classifying hate speech against a certain minority group. Perhaps the prevalence of positive, in-domain samples on the social media platform is small, so an engineer uses out-domain samples from books, other social media platforms, or from combing the internet.
• In a personalization setting, like spam filtering or auto-completion on a keyboard, each user may only have a couple hundred of their own samples, but out-domain samples from other users may be available in greater quantities.
• In the privacy constrained setting, a company may collect data from internal users, user studies, and beta testers; however, a commitment to user privacy may incentivize the company to keep the amount of labeled data from the target user population low.
• Lastly, labeling in-domain data may require certain domain knowledge, which would lead to increased expenses and difficulty in finding annotators. As an example, take a text classification problem in a rare language. It may be easy to produce out-domain samples by labeling English text and machine translating it to the rare language, whereas generating in-domain labeled data would require annotators who are fluent in the rare language.
In each of these settings, target distribution data may not be amply available, but semi-similar unlabelled domains often are. This rules out many domain adaptation methods that rely heavily on unlabelled target data.
We were able to simulate the base conditions of this problem with sentiment analysis and question answering datasets, since they are rich in domain diversity. We believe these datasets are reasonable proxies to represent the base problem, and yield general-enough insights for a practitioner starting on this problem.
B REPRODUCIBILITY
B.1 DATASETS AND MODEL TRAINING
We choose question answering and sentiment analysis tasks as they are core NLP tasks, somewhat representative of many classification and information-seeking problems. Multi-domain active learning is not limited to any subset of NLP tasks, so we believe these datasets are a reasonable proxy for the problem.
For question answering, the MRQA shared task (Fisch et al., 2019) includes SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).
For the sentiment analysis classification task, we use Amazon datasets following (Blitzer et al., 2007) and (Ruder & Plank, 2018), as well as Yelp reviews (Asghar, 2016) and IMDB movie reviews datasets (Maas et al., 2011).⁹ Both question answering and sentiment analysis datasets are described in Table 1.

For reproducibility, we share our hyper-parameter selection in Table 3. Hyper-parameters are taken from Longpre et al. (2019) for training all Question Answering (QA) models, since their parameters are tuned for the same datasets in the MRQA Shared Task. We found these choices to provide stable and strong results across all datasets. For sentiment analysis, we initially experimented on a small portion of the datasets to arrive at a strong set of base hyper-parameters to tune from.

⁹ https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/.
Our BERT question answering modules build upon the standard PyTorch (Paszke et al., 2019) implementations from HuggingFace, and are trained on one NVIDIA Tesla V100 GPU.¹⁰
B.2 EXPERIMENTAL DESIGN
For more detail regarding the experimental design, we include Algorithm 1, using the notation described in the multi-domain active learning task definition.
Algorithm 1 EXPERIMENTAL DESIGN
 1: for each acquisition function A_f do
 2:   for each target set D_T ∼ D do
 3:     D_T^{train}, D_T^{dev}, D_T^{test} ∼ D_T
 4:     D_S := {x ∈ D | x ∉ D_T}
 5:     M_A ← TRAIN(D_T^{train}, D_T^{dev})
 6:     D^{chosen} ← [Rank_{x ∈ D_S} A_f(x, M_A)][: n]
 7:     D^{final−train} = D_T^{train} ∪ D^{chosen}
 8:     M ← GRIDSEARCH(D^{final−train}, D_T^{dev})
 9:     s_T^{A_f} ← M(D_T^{test})
10:   end for
11: end for
12: return scores dictionary (A_f, D_T) → s_T^{A_f}
C ACQUISITION FUNCTIONS
C.1 TASK AGNOSTIC EMBEDDINGS
To compute the semantic similarity between two examples, we computed the example embeddings using the pre-trained model from a sentence-transformer (Reimers & Gurevych, 2019). We used the RoBERTa large model, which has 24 layers, a hidden size of 1024, 16 attention heads, and 355M parameters, and is fine-tuned on the SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and STSBenchmark (Cer et al., 2017) datasets. Its training procedure is documented at https://www.sbert.net/examples/training/sts/README.html.

¹⁰ https://github.com/huggingface/transformers
C.2 BAYESIAN ACTIVE LEARNING BY DISAGREEMENT (BALD)
We note that Siddhant & Lipton (2018)'s presentation of BALD is more closely related to the Variation Ratios acquisition function described in Gal et al. (2017) than to the description of dropout as a Bayesian approximation given in Gal & Ghahramani (2016). In particular, Gal et al. (2017) found that Variation Ratios performed on par with or better than Houlsby's BALD on MNIST but was less suitable for ISIC2016.
C.3 DISCRIMINATIVE ACTIVE LEARNING MODEL (DAL) TRAINING
DAL's training set is created using the methods detailed in Section 4.2. The training set is then partitioned into five equally sized folds. In order to predict on data that is not used to train the discriminator, we use 5-fold cross-validation. The model is trained on four folds, balancing the positive and negative classes using sample weights. The classifier then predicts on the single held-out fold. This process is repeated five times so that each example is in the held-out fold exactly once. Custom model parameters are shown in Table 4; model parameters not shown in the table are the default XGBClassifier parameters in xgboost 1.0.2. The motivations for this choice of model and architecture are the small number of target-domain examples, which calls for a simple model to prevent overfitting, and the ability of decision trees to capture collective interactions between features.
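A sketch of this cross-validated scoring step (the class-balancing sample weights and the Table 4 hyperparameters are omitted for brevity):

import numpy as np
from sklearn.model_selection import KFold
from xgboost import XGBClassifier

def dal_cv_scores(X: np.ndarray, y: np.ndarray, n_splits: int = 5) -> np.ndarray:
    # Score every example with a discriminator that never saw it during training:
    # train on four folds, predict on the held-out fold, and repeat five times.
    scores = np.zeros(len(X))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in kf.split(X):
        g = XGBClassifier(n_estimators=100)
        g.fit(X[train_idx], y[train_idx])
        scores[test_idx] = g.predict_proba(X[test_idx])[:, 1]
    return scores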
D FULL METHOD PERFORMANCES
We provide a full breakdown of final method performances in Tables 5 and 6.
E KENDALL’S TAU
E.1 DEFINITION
Kendall's Tau is a statistic that measures the rank correlation between two quantities. Let X and Y be random variables with (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) as observations drawn from the joint distribution. Given a pair (x_i, y_i) and (x_j, y_j), where i ≠ j, we have:

(y_j − y_i) / (x_j − x_i) > 0 : pair is concordant
(y_j − y_i) / (x_j − x_i) < 0 : pair is discordant
(y_j − y_i) / (x_j − x_i) = 0 : pair is a tie

Let n_c be the number of concordant pairs and n_d the number of discordant pairs. Let ties add 0.5 to the concordant and discordant pair counts each. Then, Kendall's Tau is computed as:¹¹

τ = (n_c − n_d) / (n_c + n_d)
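A direct O(n²) implementation of this definition, including the 0.5 tie convention above (library routines such as SciPy's tau-b treat ties differently):

def kendall_tau(xs, ys):
    # n_c and n_d accumulate concordant and discordant pairs; a tied pair
    # adds 0.5 to both counts. Using the product of the differences rather
    # than the quotient avoids dividing by zero and treats x_i == x_j as a tie.
    nc = nd = 0.0
    n = len(xs)
    for i in range(n):
        for j in range(i + 1, n):
            sign = (ys[j] - ys[i]) * (xs[j] - xs[i])
            if sign > 0:
                nc += 1
            elif sign < 0:
                nd += 1
            else:
                nc += 0.5
                nd += 0.5
    return (nc - nd) / (nc + nd)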
E.2 INTER-FAMILY COMPARISON
Here, we extend our comparison of example rankings by presenting plots of Kendall's Tau scores normalized by intra-family scores in Figure 3. In the sentiment setting, the ranges of intra-family Kendall's Tau coefficients are smaller than in the MRQA setting. Methods in the uncertainty family have especially strong correlations with each other and much weaker correlations with methods outside of the family. For H-Divergence based methods, intra-family correlations are not as strong as for the uncertainty family;
¹¹ https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/kendell.htm
in fact, the Kendall Taus between DAL-E/KNN and DAL-T/KNN appear to be slightly within the H-divergence intra-family range.
Furthermore, intra-family ranges are quite large for all families in the MRQA setting. For each method, there is at least one other method from a different family with which it had a higher Kendall Tau coefficient than the least similar methods of its own family.
F RELATING DOMAIN DISTANCES TO PERFORMANCE
We investigated why certain methods work better than others. One hypothesis is that there exists a relationship between target-source domain distances and method performance. We estimated the distance between two domains by computing the Wasserstein distance between random samples of 3k example embeddings from each domain. We experimented with two kinds of example embeddings: 1. a task-agnostic embedding computed by the sentence transformer used in the KNN method, and 2. a task-specific embedding computed by a model trained with the source domain, as used in the DAL∗ method. Given that there are k − 1 source domains for each target domain, we tried aggregating domain distances over their mean, minimum, maximum, and variance to see if Wasserstein domain distances could be indicative of relative performance across all methods.
Figure 4, Figure 5, Figure 6, and Figure 7 each show, for a subset of methods, the relationship between each domain distance aggregation and the final performance gap relative to the best-performing method. Unfortunately, we found no consistent relationship for either MRQA or the sentiment classification tasks. We believe this result arose either because our estimated domain distances were not reliable measures of domain relevance, or because the aggregated domain distances are not independently sufficient to discern relative performance differences across methods.
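For reference, a sketch of one way to estimate these distances between sampled embedding sets. The paper does not specify an implementation; the Python Optimal Transport (POT) library used here is our assumption.

import numpy as np
import ot  # Python Optimal Transport (POT); an assumed implementation choice

def wasserstein_domain_distance(emb_a: np.ndarray, emb_b: np.ndarray,
                                sample: int = 3000, seed: int = 0) -> float:
    # Exact Wasserstein distance between uniform empirical distributions over
    # random samples of two domains' example embeddings.
    rng = np.random.default_rng(seed)
    a = emb_a[rng.choice(len(emb_a), min(sample, len(emb_a)), replace=False)]
    b = emb_b[rng.choice(len(emb_b), min(sample, len(emb_b)), replace=False)]
    M = ot.dist(a, b, metric="euclidean")  # pairwise ground-cost matrix
    return float(ot.emd2(ot.unif(len(a)), ot.unif(len(b)), M))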
Figure 5: Minimum Wasserstein domain distance vs method performance.
Figure 6: Maximum Wasserstein domain distance vs method performance.

1. What is the focus of the paper regarding active learning in multi-domain text classification?
2. What are the strengths of the proposed approach, particularly in its comparison to other active learning methods?
3. What are the weaknesses of the paper, especially in terms of analysis and claims?
4. How does the reviewer assess the significance and impact of the work in the context of NLP domain shift settings?
5. Are there any suggestions or recommendations for improving the analysis and addressing the challenges mentioned by the reviewer? | Summary Of The Paper
Review | Summary Of The Paper
This paper surveys a broad range of techniques for active learning in the multi-domain setting applied to text - specifically, given a small labeled dataset from a target domain and a large amount of unlabeled data from a collection of source domains, how should examples be picked from the source domains for labeling to maximise performance on the target domain. The main findings of the work are that selecting examples based on H-Divergence measures performs much better than most active learning approaches. In particular, they propose a method termed DAL-E, where examples are selected from the source domain based on similarity to misclassified target-domain examples, and show that it outperforms standard "discriminative active learning" on most of their experiments. From the analysis, it is revealed that selecting from diverse domains helps, and their proposed DAL-E can help with avoiding bad examples being selected.
Review
Overall, I like this work since it serves as a guide for practitioners faced with similar problems, and it gives good guidelines on what works in these challenging domain-shift settings. The experiments are extensive and cover many of the domain-shift settings in NLP. However, I think some of the analysis could be improved.
Weaknesses:
I like the analysis on measuring how different methods rank examples. However, I would highly recommend performing some kind of normalization - e.g., to compare method a1 from family A and method b1 from family B, the Kendall's Tau score should be normalized w.r.t. the score between methods from the same family.
I would like to challenge some of the claims made in Section 6.2 of the paper. First, while it does seem like a diversity of examples helps, the boost is very minimal and it is certainly not true in general. Indeed, one can construct a domain distribution where getting data from several domains could hurt performance, if the mapping function (or the underlying P(y | x)) changes between domains.
Based on these weaknesses, I think the paper is currently borderline. However, if these are addressed, I'm happy to adjust my scores.
ICLR | Title
Active Learning over Multiple Domains in Natural Language Tasks
Abstract
Studies of active learning traditionally assume the target and source data stem from a single domain. However, in realistic applications, practitioners often require active learning with multiple sources of out-of-distribution data, where it is unclear a priori which data sources will help or hurt the target domain. We survey a wide variety of techniques in active learning (AL), domain shift detection (DS), and multi-domain sampling to examine this challenging setting for question answering and sentiment analysis. We ask (1) what family of methods are effective for this task? And, (2) what properties of selected examples and domains achieve strong results? Among 18 acquisition functions from 4 families of methods, we findHDivergence methods, and particularly our proposed variant DAL-E, yield effective results, averaging 2-3% improvements over the random baseline. We also show the importance of a diverse allocation of domains, as well as room-for-improvement of existing methods on both domain and example selection. Our findings yield the first comprehensive analysis of both existing and novel methods for practitioners faced with multi-domain active learning for natural language tasks.
1 INTRODUCTION
New natural language problems, outside the watershed of core NLP, are often strictly limited by a dearth of labeled data. While unlabeled data is frequently available, it is not always from the same source as the target distribution. This is particularly prevalent for tasks characterized by (i) significant distribution shift over time, (ii) personalization for user subgroups, or (iii) different collection mediums (see examples in Section A).
A widely-used solution to this problem is to bootstrap a larger training set using active learning (AL): a method to decide which unlabeled training examples should be labeled on a fixed annotation budget (Cohn et al., 1996; Settles, 2012). However, most active learning literature in NLP assumes the unlabeled source data is drawn from the same distribution as the target data (Dor et al., 2020). This simplifying assumption avoids the frequent challenges faced by practitioners in multi-domain active learning. In this realistic setting, there are multiple sources of data (i.e. domains) to consider. In this case, it’s unclear whether to optimize for homogeneity or heterogeneity of selected examples. Secondly, is it more effective to allocate an example budget per domain, or treat examples as a single unlabeled pool? Where active learning baselines traditionally select examples the model is least confident on (Settles, 2009), in this setting it could lead to distracting examples from very dissimilar distributions.
In this work we empirically examine four separate families of methods (uncertainty-based, HDivergence, reverse classification accuracy, and semantic similarity detection) over several question answering and sentiment analysis datasets, following (Lowell et al., 2019; Elsahar & Gallé, 2019b), to provide actionable insights to practitioners facing this challenging variant of active learning for natural language. We address the following questions:
1. What family of methods are effective for multi-domain active learning? 2. What properties of the example and domain selection yield strong results?
While previous work has investigated similar settings (Saha et al., 2011; Liu et al., 2015; Zhao et al., 2021; Kirsch et al., 2021) we contribute, to our knowledge, the first rigorous formalization and broad survey of methods within NLP. We find that many families of techniques for active learning and
domain shift detection fail to reliably beat random baselines in this challenging variant of active learning, but certain H-Divergence methods are consistently strong. Our analysis identifies stark dissimilarities of these methods’ example selection, and suggests domain diversity is an important factor in achieving strong results. These results may serve as a guide to practitioners facing this problem, suggesting particular methods that are generally effective and properties of strategies that increase performance.
2 RELATED WORK
Active Learning in NLP Lowell et al. (2019) shows how inconsistent active learning methods are in NLP, even under regular conditions. However, Dor et al. (2020); Siddhant & Lipton (2018) survey active learning methods in NLP and find notable gains over random baselines. Kouw & Loog (2019) survey domain adaptation without target labels, similar to our setting, but for non-language tasks. We reference more active learning techniques in Section 4.
Domain Shift Detection Elsahar & Gallé (2019b) attempt to predict accuracy drops due to domain shifts and Rabanser et al. (2018) surveys different domain shift detection methods. Arora et al. (2021) examine calibration and density estimation for textual OOD detection.
Active Learning under Distribution Shift A few previous works investigated active learning under distribution shifts, though mainly in image classification, with single source and target domains. Kirsch et al. (2021) finds that BALD, which is often considered the state of the art for unshifted domain settings, can get stuck on irrelevant source domain or junk data. Zhao et al. (2021) investigates label shift, proposing a combination of predicted class balanced subsampling and importance weighting. Saha et al. (2011), whose approach corrects joint distribution shift, relies on the covariate shift assumption. However, in practical settings, there may be general distributional shifts where neither the covariate shift nor label shift assumptions hold.
Transfer Learning from Multiple Domains Attempts to better understand how to handle shifted domains for better generalization or target performance has motivated work in question answering (Talmor & Berant, 2019; Fisch et al., 2019; Longpre et al., 2019; Kamath et al., 2020) and classification tasks (Ruder & Plank, 2018; Sheoran et al., 2020). Ruder & Plank (2017) show the benefits of both data similarity and diversity in transfer learning. Rücklé et al. (2020) find that sampling from a widevariety of source domains (data scale) outperforms sampling similar domains in question answering. He et al. (2021) investigate a version of multi-domain active learning where models are trained and evaluated on examples from all domains, focusing on robustness across domains.
3 MULTI-DOMAIN ACTIVE LEARNING
Suppose we have multiple domains D1, D2, ..., Dk.1 Let one of the k domains be the target set DT , and let the other k − 1 domains comprise the source set DS =
⋃ i 6=T Di.
Given:
• Target: Small samples of labeled data points (x, y) from the target domain. DtrainT , D dev T , D test T ∼ DT .2 • Source: A large sample of unlabeled points (x) from the source domains. DS =
⋃ i6=T Di
Task:
1. Choose n samples from DS to label. DchosenS ⊂ DS , |DchosenS | = n, selected by argmaxx∈DS Af (x) where Af is an acquisition function: a policy to select unlabeled examples from DS for labeling. 2. Train a model M on Dfinal−train, validating on DdevT . Dfinal−train = DtrainT ∪DchosenS
3. Evaluate M on DtestT , giving score s. 1We define a domain as a dataset collected independently of the others. 2|DtrainT | = 2000 to simulate a small but reasonable quantity of labeled, in-domain training data for active
learning scenarios.
For Step 1, the practitioner chooses n samples with the highest scores according to their acquisition function Af . M is fine-tuned on these n samples, then evaluated on DtestT to demonstrate Af ’s ability to choose relevant out-of-distribution training examples.
4 METHODS
We identify four families of methods relevant to active learning over multiple shifted domains. Uncertainty methods are common in standard active learning for measuring example uncertainty or familiarity to a model; H-Divergence techniques train classifiers for domain shift detection; Semantic Similarity Detection finds data points similar to points in the target domain; and Reverse Classification Accuracy approximates the benefit of training on a dataset. A limitation of our work is we do not cover all method families, such as domain adaptation, just those we consider most applicable. We derive ∼18 active learning variants, comprising the most prevalent and effective from prior work, and novel extensions/variants of existing paradigms for the multi-domain active learning setting (see KNN, R̃CA and DAL-E).
Furthermore, we split the families into two acquisition strategies: Single Pool Strategy and Domain Budget Allocation. Single Pool Strategy, comprising the first three families of methods, treats all examples as coming from one single unlabeled pool. Domain Budget Allocation, consisting of Reverse Classification Accuracy methods, simply allocate an example budget for each domain.
We enumerate acquisition methods Af below. Each method produces a full ranking of examples in the source set DS . To rank examples, most acquisition methods train an acquisition model, MA, using the same model architecture as M . MA is trained on all samples from DtrainT , except for DAL and KNN, which split DtrainT into two equal segments, one for training MA and one for an internal model. Some methods have both ascending and descending orders of these rankings (denoted by ↑ and ↓ respectively, in the method abbreviations), to test whether similar or distant examples are preferred in a multi-domain setting.
Certain methods use vector representations of candidate examples. We benchmark with both taskagnostic and task-specific encoders. The task-agnostic embeddings are taken from the last layer’s CLS token in Reimers & Gurevych (2019)’s sentence encoder (Appendix for details). The task-specific embeddings are taken from the last layer’s CLS token in the trained model MA.
The motivation of the task-specific variant is that each example’s representation will capture taskrelevant differences between examples while ignoring irrelevant differences.3 The versions of DAL and KNN methods that use task-specific vectors are denoted with “∗” in their abbreviation. Otherwise, they use task-agnostic vectors.
4.1 UNCERTAINTY METHODS
These methods measure the uncertainty of a trained model on a new example. Uncertainty can reflect either aleatoric uncertainty, due to ambiguity inherent in the example, or epistemic uncertainty, due to limitations of the model (Kendall & Gal, 2017). For the following methods, let Y be the set of all possible labels produced from the model M(x) and ly be the logit value for y ∈ Y .
Confidence (CONF) A model’s confidence P (y|x) in its prediction y estimates the difficulty or unfamiliarity of an example (Guo et al., 2017; Elsahar & Gallé, 2019a).
Entropy (ENTR) Entropy applies Shannon entropy (Shannon, 1948) to the full distribution of class probabilities for each example, formalized as AENTR.
ACONF(x,MA) = −max(P (y|x)) AENTR(x,MA) = − |Y |∑ i=1 P (yi|x) · logP (yi|x)
Energy-based Out-of-Distribution Detection (ENG) Liu et al. (2020) use an energy-based score to distinguish between in- and out-distribution examples. They demonstrate this method is less susceptible to overconfidence issues of softmax approaches.
3For instance, consider in one domain every example is prefixed with “Text:” while the other is not — telling the difference is trivial, but the examples could be near-identical with respect to the task.
Bayesian Active Learning by Disagreement (BALD) Gal & Ghahramani (2016) introduces estimating uncertainty by measuring prediction disagreement over multiple inference passes, each with a distinct dropout mask. BALD isolates epistemic uncertainty, as the model would theoretically produce stable predictions over inference passes given sufficient capacity. We conduct T = 20 forward passes on x. ŷt = argmaxiP (yi|x)t, representing the predicted class on the t-th model pass on x. Following (Lowell et al., 2019), ties are broken by taking the mean label entropy over all T runs.
AENG(x,MA) = − log ∑ y∈Y ely ABALD(x,MA) = 1− count(modet∈T (ŷt)) T
4.2 H-DIVERGENCE METHODS
Ben-David et al. (2006; 2010) formalize the divergence between two domains as theH-Divergence, which they approximate as the difficulty for a discriminator to differentiate between the two.4 Discriminative Active Learning (DAL) applies this concept to the active learning setting (Gissin & Shalev-Shwartz, 2019).
We explore variants of DAL, using an XGBoost decision tree (Chen & Guestrin, 2016) as the discriminator model g.5 For the following methods, let Dtrain−BT be the 1k examples from D train T that were not used to train MA. Let E be an encoder function, which can be task-specific or agnostic as described above. We use samples both fromDtrain−BT andDS to train the discriminator. We assign samples origin labels l, which depend on the DAL variant. Samples from DS with discriminator predictions closest to 1 are selected for labeling. The acquisition scoring function for each DAL method and training set definition, respectively, are:
ADAL(x, g, E) = g(E(x)) {(E(x), l) | x ∈ Dtrain−BT ∪DS}
Discriminative Active Learning — Target (DAL-T) DAL-T trains a discriminator g to distinguish between target examples in Dtrain−BT and out-of-distribution examples from DS . For DAL-T, l = 1Dtrain−BT (x).
Discriminative Active Learning — Error (DAL-E) DAL-E is a novel variant of DAL. DALE’s approach is to find examples that are similar to those in the target domain that MA misclassified. We partition Dtrain−BT further into erroneous samples D err T and correct samples D corr T , where Dtrain−BT = D err T ∪DcorrT . For DAL-E, l = 1DerrT (x).
4.3 REVERSE CLASSIFICATION ACCURACY
RCA Reverse Classification Accuracy (RCA) estimates how effective source set Di,i∈S is as a training data for target test set DT (Fan & Davidson, 2006; Elsahar & Gallé, 2019b). Without gold labels for Di we compute soft labels instead, using the BERT-Base MA trained on the small labeled set DtrainT . We then train a child model Mi on Di using these soft labels, and evaluate the child model on DdevT . RCA chooses examples randomly from whichever domain i produced the highest score si.
ARCA = 1D(arg maxi∈S si)(x) R̃CA : τi = si
sT − si , |Dchoseni | = τi∑ j sj
RCA-Smoothed (R̃CA) Standard RCA only selects examples from one domain Di. We develop a novel variant which samples from multiple domains, proportional to their relative performance on the target domain DdevT . RCA-smoothed (R̃CA) selects |Dchoseni | examples from source domain i, based on the relative difference between the performance si (of child model Mi trained on domain i with pseudo-labels from MA) on the target domain, and the performance sT of a model trained directly on the target domain DdevT . Since these strategies directly estimates model performance on the target domain resulting from training on each source domain, RCA and R̃CA are strong Domain Budget Allocation candidates.
4.4 NEAREST NEIGHBOUR / SEMANTIC SIMILARITY DETECTION (KNN)
Nearest neighbour methods (KNN) are used to find examples that are semantically similar. Using sentence encoders we can search the source set DS to select the top k nearest examples by cosine similarity to the target set. We represent the target set as the mean embedding of DtrainT . For question answering, where an example contains two sentences (the query and context), we refer to KNN-Q where we only encode the query text, KNN-C where we only encode the context text, or KNN-QC where we encode both concatenated together. The acquisition scoring function per example, uses either a task-specific or task-agnostic encoder E:
$$\mathcal{A}_{\mathrm{KNN}}(x, E) = \mathrm{CosSim}\!\left(E(x),\ \mathrm{Mean}\!\left(E(D_T^{train})\right)\right)$$
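A minimal sketch of this scoring function, assuming a task-agnostic sentence-transformers encoder; the specific model name here is an illustrative choice, not necessarily the one used in the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-roberta-large-v1")  # illustrative choice

def knn_scores(source_texts, target_texts):
    src = encoder.encode(source_texts)               # (num_source, d)
    tgt = encoder.encode(target_texts).mean(axis=0)  # mean target embedding
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt)
    return src @ tgt                                 # cosine similarity per example
```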
5 EXPERIMENTS
Experiments are conducted on two common NLP tasks: question answering (QA) and sentiment analysis (SA), each with several available domains.
Question Answering We employ 6 diverse QA datasets from the MRQA 2019 workshop (Fisch et al., 2019), shown in Table 1 (left).6 We sample 60k examples from each dataset for training, 5k for validation, and 5k for testing. Questions and contexts are collected with varying procedures and sources, representing a wide diversity of datasets.
Sentiment Analysis For the sentiment analysis classification task, we follow Blitzer et al. (2007) and Ruder & Plank (2018) by randomly selecting 6 Amazon multi-domain review datasets, as well as the Yelp reviews (Asghar, 2016) and IMDB movie reviews (Maas et al., 2011) datasets.7 Altogether, these datasets exhibit wide diversity based on review length and topic (see Table 1). We normalize all datasets to have 5 sentiment classes: very negative, negative, neutral, positive, and very positive. We sample 50k examples for training, 5k for validation, and 5k for testing.
Experimental Setup To evaluate methods for the multi-domain active learning task, we conduct the experiment described in Section 3 for each acquisition method, rotating each domain as the target set. Model M, a BERT-Base model (Devlin et al., 2019), is chosen via hyperparameter grid search over learning rate, number of epochs, and gradient accumulation. The large volume of experiments entailed by this search space limits our capacity to benchmark performance variability due to isolated factors (the acquisition method, the target domain, or fine-tuning final models). However, our hyperparameter search closely mimics the process of an ML practitioner looking to select a best method and model, so we believe our experiment design captures a fair comparison among methods. See Algorithm 1 in Appendix Section B for full details.
4 The approximation is also referred to as Proxy A-Distance (PAD) from Elsahar & Gallé (2019b).
5 Hyperparameter choices and training procedures are detailed in the Appendix.
6 The workshop pre-processed all datasets into a similar format, for fully answerable, span-extraction QA: https://github.com/mrqa/MRQA-Shared-Task-2019.
7 https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/.
6.1 COMPARING ACQUISITION METHODS
Results in Figure 1 show the experiments described in Section 5: benchmarking each acquisition method for multi-domain active learning. We observe that for both question answering (QA) and sentiment analysis (SA), most methods manage to outperform the no-extra-labelled-data baseline (0% on the y-axis) and very narrowly outperform the random selection baseline (red line). Consistent with prior work (Lowell et al., 2019), active learning strategies in NLP have brittle and inconsistent improvements over random selection. Our main empirical findings, described in this section, include:
• H-Divergence, and particularly DAL-E variants, consistently outperform baselines and other families of methods.
• The ordering of examples in Uncertainty methods depends significantly on the diversity in source domains. BALD variants perform best among available options.
• Task-agnostic representations, used in KNN or DAL variants, provide consistently strong results on average, but task-specific representations significantly benefit certain target sets.
• Different families of methods rely on orthogonal notions of relevance in producing their example rankings.
H-Divergence methods categorically achieved the highest and most reliable scores, both as a family and individual methods, represented in the top 3 individual methods 11 / 18 times for QA, and 20 / 24 times for SA. For QA, BALD↑ and DAL-E∗ had the best mean and median scores respectively, and for SA DAL-E achieved both the best mean and median scores. Among these methods, our proposed DAL-E variants routinely outperform DAL-T variants by a small margin on average, with equivalent training and tuning procedures. We believe this is because DAL-E captures both notions of domain similarity and uncertainty. By design it prioritizes examples that are similar to in-domain samples, but also avoids those which are uninformative, because the model already performs well on them.
Among Uncertainty methods, for SA, methods which select for higher uncertainty vastly outperformed those which selected for low uncertainty. The opposite is true for QA. This suggests the diversity of QA datasets contains more extreme (harmful) domain shift than the (mostly Amazon-based) SA datasets.8 In both settings, the right ordering of examples with BALD (epistemic uncertainty) achieves the best results in this family of methods, over the others, which rely on total uncertainty.
Among Reverse Classification Accuracy methods, our R̃CA variant also noticeably outperforms standard RCA and most other methods, aside from DAL and BALD. Combining R̃CA with an example ranking method is a promising direction for future work, given the performance it achieves selecting examples randomly as a Domain Budget Allocation strategy.
Lastly, the Semantic Similarity Detection set of methods only rarely or narrowly exceeds random selection. Intuitively, task-agnostic representations (KNN) outperform KNN∗, given that the task-agnostic sentence encoder was optimized for cosine similarity.
Embedding Ablations To see the effects of embedding space on KNN and DAL, we used both a task-specific and task-agnostic embedding space. While a task-specific embedding space reduces the examples to features relevant for the task, a task-agnostic embedding space produces generic notions of similarity, unbiased by the task model.
According to Figure 1, KNN outperforms KNN∗. In the QA setting, KNN∗’s median is below the random baseline’s. In both plots, KNN∗’s whiskers extend below 0, indicating that in some cases the method actually chooses source examples that are harmful to target domain performance.
For DAL methods, task-agnostic and task-specific embeddings demonstrated mostly similar median performances. Notably, the boxes and whiskers are typically longer for task-specific methods than task-agnostic methods. This variability indicates certain target datasets may benefit significantly from task-specific embeddings, though task-agnostic embeddings achieve more consistent results.
Comparing Example Rankings For each setting, we quantify how similarly acquisition methods rank examples from $D_S$. In Figure 2, for each pair of methods, we calculate the Kendall's Tau coefficient between the source example rankings chosen for a target domain, then average this coefficient over the target domains. Kendall's Tau gives a score in [−1, 1], with -1 meaning perfect anti-correlation, 0 meaning no correlation, and 1 meaning perfect correlation between the rankings. Methods from different families show close to no relationship, even if they achieve similar performances, suggesting each family relies on orthogonal notions of similarity to rank example relevance. This suggests there is potential for combining methods from different families for this task in future work.
In the sentiment tasks, all uncertainty methods produced highly correlated example rankings. In QA, ENTR had little correlation with any method. This is likely due to the significantly larger output space for QA models. Compared to only 5 label classes in SA, question answering models distribute their start and end confidences over sequences of up to 512 tokens, where there can be multiple valid answer candidates. Embedding space also largely influences the examples that methods chose. DAL methods had higher correlations with each other when they shared the same embedding space; e.g., DAL-E's ranking has a higher correlation with DAL-T than with DAL-E∗.
6.2 PROPERTIES OF OPTIMAL EXAMPLE SELECTION
We examine three properties of optimally selected examples: (i) whether selecting from many diverse domains or one single domain leads to better performance, (ii) whether the selection of a domain or of the individual examples matters more to performance, and (iii) whether selection strategies can benefit from source domain information rather than treating samples as drawn from a single pool. Our findings regarding properties of optimal selection, as described in this section, include:
• Selecting a diversity of domains usually outperforms selecting examples from a single domain.
• Acquisition functions such as DAL-E∗ do rely on example selection, mainly to avoid the possibility of large negative outcomes.
• Domain Budget Allocation during selection may improve performance. Surprisingly, even random selection from an “optimal” balance of domains beats our best performing acquisition methods most of the time.
8 Accordingly, we attempt to derive a relationship between domain distance and method performance in Appendix F, but find intuitive calculations of domain distance uninterpretable.
Are Many Diverse or One Single Domain Preferable? To answer this question we conduct a full search over all combinations of source datasets. For each target set, we fix 2k in-domain data points and sample all combinations of other source sets in 2k increments, such that altogether there are 10k training data points. For each combination of source sets, we conduct a simple grid search, randomly sampling the source set examples each time, and select the best model, mimicking standard practice among practitioners.
The result is a comprehensive search of all combinations of source sets (in 2k increments) up to 10k training points, so we can rank all combinations of domains per target, by performance. Tables 2a and 2b show that the optimal selections, even at increments as coarse as 2k, typically include two or more domains to achieve the best performance. However, 1 of 6 targets for QA, and 2 of 8 for the SA tasks, achieve better results selecting all examples from a single domain, suggesting this is a strong baseline if the right source domain is isolated. We also report the mean score of all permutations to demonstrate the importance of selecting the right set of domains over a random combination.
Domains or Examples? Which is more important: to select the right domains, or the right examples within some domain? From the above optimal-search experiment we see that selecting the right combination of domains regularly leads to strong improvements over a random combination of domains. Whether example selection is more important than domain selection may vary depending on the example variety within each domain. We narrow our focus to how much example selection plays a role for one of the stronger acquisition functions: DAL-E∗. We fix the effect of domain selection (the number of examples from each domain) but vary which examples are specifically selected. Using DAL-E∗'s distribution of domains, we compare the mean performance of models trained on its highest-ranked examples against a random set of examples sampled from those same domains. We find a +0.46 ± 0.25% improvement for QA, and +0.12 ± 0.19% for SA. We also compare model performances trained on random selection against the lowest-ranked examples by DAL-E∗. Interestingly, we see a -1.64 ± 0.37% performance decrease for QA, and a -1.46 ± 0.56% decrease for sentiment tasks. These results suggest that example selection is an important factor beyond domain selection, especially for avoiding bad example selections.
Single Pool or Domain Budget Allocation Does using information about examples’ domains during selection lead to better results than treating all examples as coming from a single unlabeled pool? Originally, we hypothesized Single Pool Strategy methods would perform better on smaller budget sizes as they add the most informative data points regardless of domain. On the other hand, we thought that if the budget size is large, Domain Budget Allocation would perform best, as they choose source domains closest to the target domain. Based on Tables 1b and 1a, we were not able to draw conclusions about this hypothesis, as each sample size n = {8000, 18000, 28000} produced roughly similar winning methods. Future work should include a wider range of budget sizes with larger changes in method performance between sizes.
In our main set of experiments, the RCA acquisition functions follow the Domain Budget Allocation strategy, while all other acquisition functions follow the Single Pool strategies. Based on median performance, R̃CA outperformed all other methods (we're including BALD here due to inconsistency in performance between QA and SA) except for those in the H-Divergence family. This suggests that using domain information during selection can lead to performance gains.
The Optimal Domain Search experiments, shown in Tables 2a and 2b, further suggest that allocating a budget from each domain can improve performance. For 8 out of our 14 experiments, selecting random samples according to the optimal domain distribution outperforms any active learning strategy. While the optimal domain distributions were not computed a priori in our experiments, this result shows the potential for Domain Budget Allocation strategies. Future work could reasonably improve our results by developing an acquisition function that better predicts the optimal domain distributions than R̃CA, or even achieve greater performance gains by budgeting each domain, then applying an active learning strategy (e.g. DAL-E) within each budget.
7 CONCLUSION
We examine a challenging variant of active learning where target data is scarce, and multiple shifted domains operate as the source set of unlabeled data. For practitioners facing multi-domain active learning, we benchmark 18 acquisition functions, demonstrating the H-Divergence family of methods and our proposed variant DAL-E achieve the best results. Our analysis shows the importance of example selection in existing methods, and also the surprising potential of domain budget allocation strategies. Combining families of methods, or trying domain adaptation techniques on top of selected example sets, offer promising directions for future work.
A MULTI-DOMAIN ACTIVE LEARNING TASK
In this section, we enumerate real-world settings in which a practitioner would be interested in multi-domain active learning methods. We expect this active learning variant to be applicable to cold starts, rare classes, personalization, and settings where modelers are constrained by privacy considerations or a lack of labelers with domain expertise.
• In the cold start scenario, for a new NLP problem, there is often little to no target data available yet (labeled or unlabelled), but there are related sources of unlabelled data to try. Perhaps an engineer has collected small amounts of training data from an internal population. Because the data size is small, the engineer is considering out-of-domain samples, collected from user studies, repurposed from other projects, scraped from the web, etc.
• In the rare class scenario, take an example of a new platform/forum/social media company classifying hate speech against a certain minority group. Perhaps the prevalence of positive, in-domain samples on the social media platform is small, so an engineer uses out-domain samples from books, other social media platforms, or from combing the internet.
• In a personalization setting, like spam filtering or auto-completion on a keyboard, each user may only have a couple hundred of their own samples, but out-domain samples from other users may be available in greater quantities.
• In the privacy constrained setting, a company may collect data from internal users, user studies, and beta testers; however, a commitment to user privacy may incentivize the company to keep the amount of labeled data from the target user population low.
• Lastly, labeling in-domain data may require certain domain knowledge, which would lead to increased expenses and difficulty in finding annotators. As an example, take a text classification problem in a rare language. It may be easy to produce out-domain samples by labeling English text and machine translating it to the rare language, whereas generating in-domain labeled data would require annotators who are fluent in the rare language.
In each of these settings, target distribution data may not be amply available, but semi-similar unlabelled domains often are. This rules out many domain adaptation methods that rely heavily on unlabelled target data.
We were able to simulate the base conditions of this problem with sentiment analysis and question answering datasets, since they are rich in domain diversity. We believe these datasets are reasonable proxies to represent the base problem, and yield general-enough insights for a practitioner starting on this problem.
B REPRODUCIBILITY
B.1 DATASETS AND MODEL TRAINING
We choose question answering and sentiment analysis tasks as they are core NLP tasks, somewhat representative of many classification and information-seeking problems. Multi-domain active learning is not limited to any subset of NLP tasks, so we believe these datasets are reasonable proxies for the problem.
For question answering, the MRQA shared task (Fisch et al., 2019) includes SQuAD (Rajpurkar et al., 2016), NewsQA (Trischler et al., 2016), TriviaQA (Joshi et al., 2017), SearchQA (Dunn et al., 2017), HotpotQA (Yang et al., 2018), and Natural Questions (Kwiatkowski et al., 2019).
For the sentiment analysis classification task, we use Amazon datasets following Blitzer et al. (2007) and Ruder & Plank (2018), as well as the Yelp reviews (Asghar, 2016) and IMDB movie reviews (Maas et al., 2011) datasets.9 Both question answering and sentiment analysis datasets are described in Table 1.
For reproducibility, we share our hyper-parameter selection in Table 3. Hyper-parameters are taken from Longpre et al. (2019) for training all Question Answering (QA) models since their parameters
9 https://jmcauley.ucsd.edu/data/amazon/, https://www.yelp.com/dataset, https://ai.stanford.edu/~amaas/data/sentiment/.
are tuned for the same datasets in the MRQA Shared Task. We found these choices to provide stable and strong results across all datasets. For sentiment analysis, we initially experimented on a small portion of the datasets to arrive at a strong set of base hyper-parameters to tune from.
Our BERT question answering modules build upon the standard PyTorch (Paszke et al., 2019) implementations from HuggingFace, and are trained on one NVIDIA Tesla V100 GPU.10
B.2 EXPERIMENTAL DESIGN
For more detail regarding the experimental design, we include Algorithm 1, using the notation described in the multi-domain active learning task definition.
Algorithm 1 EXPERIMENTAL DESIGN
1: for each Acquisition Function $A_f$ do
2:   for each Target set $D_T \sim D$ do
3:     $D_T^{train}, D_T^{dev}, D_T^{test} \sim D_T$
4:     $D_S := \{x \in D \mid x \notin D_T\}$
5:     $M_A \leftarrow$ TRAIN($D_T^{train}, D_T^{dev}$)
6:     $D^{chosen} \leftarrow [\mathrm{Rank}_{x \in D_S} A_f(x, M_A)][:n]$
7:     $D^{final-train} = D_T^{train} \cup D^{chosen}$
8:     $M \leftarrow$ GRIDSEARCH($D^{final-train}, D_T^{dev}$)
9:     Scores($A_f, D_T$) $= s_T^{A_f} \leftarrow M(D_T^{test})$
10:  end for
11: end for
12: return Scores Dictionary $(A_f, D_T) \rightarrow s_T^{A_f}$
C ACQUISITION FUNCTIONS
C.1 TASK AGNOSTIC EMBEDDINGS
To compute the semantic similarity between two examples, we computed the example embeddings using the pre-trained model from a sentence-transformer (Reimers & Gurevych, 2019). We used the
10 https://github.com/huggingface/transformers
RoBERTa large model, which has 24 layers, a hidden size of 1024, 16 attention heads, and 355M parameters, and is fine-tuned on the SNLI (Bowman et al., 2015), MultiNLI (Williams et al., 2018), and STSBenchmark (Cer et al., 2017) datasets. Its training procedure is documented at https://www.sbert.net/examples/training/sts/README.html.
C.2 BAYESIAN ACTIVE LEARNING BY DISAGREEMENT (BALD)
We note that Siddhant and Lipton's presentation of BALD is more closely related to the Variation Ratios acquisition function described in Gal et al. (2017) than to the description of dropout as a Bayesian approximation given in Gal & Ghahramani (2016). In particular, Gal et al. (2017) found that Variation Ratios performed on par with or better than Houlsby's BALD on MNIST but was less suitable for ISIC2016.
C.3 DISCRIMINATIVE ACTIVE LEARNING MODEL (DAL) TRAINING
DAL's training set is created using the methods detailed in Section 4.2. The training set is then partitioned into five equally sized folds. In order to predict on data that is not used to train the discriminator, we use 5-fold cross-validation: the model is trained on four folds, balancing the positive and negative classes using sample weights, and the classifier then predicts on the single held-out fold. This process is repeated five times so that each example is in the held-out fold exactly once. Custom model parameters are shown in Table 4; model parameters not shown in the table are the default XGBClassifier parameters in xgboost 1.0.2. The motivations for this choice of model and architecture are the small number of target-domain examples, which calls for a simple model to prevent overfitting, and the ability of decision trees to capture interactions between features.
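A short sketch of this 5-fold cross-prediction scheme, assuming features X and origin labels y as NumPy arrays; the XGBoost parameters here are defaults rather than the Table 4 values.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.utils.class_weight import compute_sample_weight
from xgboost import XGBClassifier

def cross_fold_scores(X, y, n_splits=5, seed=0):
    scores = np.zeros(len(X))
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, heldout_idx in skf.split(X, y):
        model = XGBClassifier()  # paper-specific parameters listed in Table 4
        # Balance positive and negative origin labels via sample weights.
        weights = compute_sample_weight("balanced", y[train_idx])
        model.fit(X[train_idx], y[train_idx], sample_weight=weights)
        # Each example is scored exactly once, by a model that never saw it.
        scores[heldout_idx] = model.predict_proba(X[heldout_idx])[:, 1]
    return scores
```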
D FULL METHOD PERFORMANCES
We provide a full breakdown of final method performances in Tables 5 and 6.
E KENDALL’S TAU
E.1 DEFINITION
Kendall's Tau is a statistic that measures the rank correlation between two quantities. Let X and Y be random variables with $(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)$ as observations drawn from the joint distribution. Given a pair $(x_i, y_i)$ and $(x_j, y_j)$, where $i \neq j$, we have:

$$\frac{y_j - y_i}{x_j - x_i} > 0: \text{ pair is concordant} \qquad \frac{y_j - y_i}{x_j - x_i} < 0: \text{ pair is discordant} \qquad \frac{y_j - y_i}{x_j - x_i} = 0: \text{ pair is a tie}$$

Let $n_c$ be the number of concordant pairs and $n_d$ the number of discordant pairs. Let ties add 0.5 to the concordant and discordant pair counts each. Then, Kendall's Tau is computed as:11

$$\tau = \frac{n_c - n_d}{n_c + n_d}$$
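For reference, a short example of computing this statistic between two methods' score lists with SciPy; note that SciPy's implementation uses the tau-b tie correction, which differs slightly from the 0.5-per-tie convention above, and the method names and scores here are placeholders.

```python
from scipy.stats import kendalltau

# Scores assigned by two acquisition methods to the same five source examples.
scores_dal_e = [0.91, 0.40, 0.77, 0.12, 0.55]
scores_knn   = [0.80, 0.35, 0.90, 0.20, 0.50]

tau, p_value = kendalltau(scores_dal_e, scores_knn)
print(f"Kendall's tau = {tau:.2f}")  # close to 1 => similar rankings
```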
E.2 INTER-FAMILY COMPARISON
Here, we extend our comparison of example rankings by presenting plots of Kendall Tau scores normalized by intra-family scores in Figure 3. For the sentiment setting, the ranges of intra-family Kendall Tau coefficients are smaller than in the MRQA setting. Methods in the uncertainty family have especially strong correlations with each other and much weaker ones with methods outside of the family. For H-divergence-based methods, intra-family correlations are not as strong as for the uncertainty family;
11 https://www.itl.nist.gov/div898/software/dataplot/refman2/auxillar/kendell.htm
in fact, the Kendall Taus between DAL-E/KNN and DAL-T/KNN appear to be slightly within the H-divergence intra-family range.
Furthermore, intra-family ranges are quite large for all families in the MRQA setting. For each method, there is at least one method from a different family with which it has a higher Kendall Tau coefficient than with the least similar method of its own family.
F RELATING DOMAIN DISTANCES TO PERFORMANCE
We investigated why certain methods work better than others. One hypothesis is that there exists a relationship between target-source domain distances and method performance. We estimated the distance between two domains by computing the Wasserstein distance between random samples of 3k example embeddings from each domain. We experimented with two kinds of example embeddings: (1) a task-agnostic embedding computed by the sentence transformer used in the KNN method, and (2) a task-specific embedding computed by a model trained with the source domain, as used in the DAL∗ method. Given that there are k − 1 source domains for each target domain, we tried aggregating domain distances over their mean, minimum, maximum, and variance to see if Wasserstein domain distances could be indicative of relative performance across all methods.
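Since the exact Wasserstein solver is not specified here, the following sketch uses a simple sliced-Wasserstein approximation (1-D Wasserstein averaged over random projections) between two equal-size embedding samples; it is an assumption-laden stand-in rather than the authors' implementation.

```python
import numpy as np

def sliced_wasserstein(a, b, n_proj=128, seed=0):
    """a, b: (n, d) arrays of example embeddings from two domains (equal n)."""
    rng = np.random.default_rng(seed)
    d = a.shape[1]
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=d)
        v /= np.linalg.norm(v)
        pa, pb = np.sort(a @ v), np.sort(b @ v)  # 1-D projections, sorted
        total += np.abs(pa - pb).mean()          # 1-D W1 between equal-size samples
    return total / n_proj
```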
Figure 4, Figure 5, Figure 6, and Figure 7 each show, for a subset of methods, the relationship between each domain-distance aggregation and each method's final performance gap from the best-performing method. Unfortunately, we found no consistent relationship for either MRQA or the sentiment classification tasks. We believe that this result arose either because our estimated domain distances were not reliable measures of domain relevance, or because the aggregated domain distances are not independently sufficient to discern relative performance differences across methods.
Figure 5: Minimum Wasserstein domain distance vs method performance.
Figure 6: Maximum Wasserstein domain distance vs method performance. | 1. What is the main contribution of the paper regarding active learning and domain shift?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. What are the limitations and potential improvements of the experimental setting and methods used in the study?
5. Are there any questions or concerns regarding the problem definition, goal, and applications of the paper? | Summary Of The Paper
Review | Summary Of The Paper
The authors investigate the efficacy of several active learning-related techniques in classification under domain shift with multiple source domains. They include: uncertainty methods, H-divergence methods, reverse-classification methods, and nearest neighbor methods.
They construct 18 methods with various combinations and settings and carry out experiments on two NLP tasks: question answering and sentiment analysis.
They report several findings, among them, the most interesting one to me is that uncertainty in various models manifests itself in various ways and candidate data points are not necessarily the same across models.
Review
Reasons to accept:
The study has a very ambitious goal. Within the framework in which the problem is defined, the authors, in my opinion, do a good job of defining sub-problems and validating their claims.
The paper is framed and written well. The study inherently has many parts, which makes following the content a bit difficult. But it appears that the authors have recognized this challenge and structured the paper to minimize this issue for the reader, e.g., by summarizing the findings at the beginning of each section.
The paper is very detailed. Namely, the experiments are thorough, the discussions are extensive, and the references are comprehensive.
Reasons to reject:
My first concern is about the topic itself. As a person who has experience in text classification and has done research in both active learning and domain adaptation, I am not clear in what real-world situations one might need to do active learning with multiple-source domain adaptation. The authors cite some papers as studies in this area; some I already knew, and some I specifically checked, but to my knowledge none of them actually did research on this exact topic. Please note that the problems of label shift or dataset shift are different from what the authors are proposing here. At the beginning of the introduction section the authors provide a very short description, but I am still not clear about this. On the other hand, if this problem applies only to some very specific settings, then why not specifically write the paper for those settings? Currently the paper claims that the proposed problem is a general and common challenge.
Another issue that further adds to the confusion is the data split used in the experiments. The authors assume each target domain already has several thousand labeled documents. So I can again ask: under what text classification scenarios are there several thousand labeled documents available but no unlabeled documents in that domain, such that we would be forced to use unlabeled documents from several other sources?
My second concern is regarding some of the methods used in the experiments. I understand that preparing such a paper is difficult, but it was still not initially easy for me to relate the KNN methods to active learning, or the H-divergence methods to active learning. Of course, KNN relates to a representativeness metric, and H-divergence relates to data sampling. But the way that the authors frame these methods implies that they are commonly used in active learning; I disagree with the authors, as the connection is not initially obvious at all. Another method used in the paper, which is still unclear to me, is ranking data points in ascending order of uncertainty. It is not clear to me why one should select the points with the most confident label predictions; this contradicts the purpose of active learning.
My third concern is regarding the experimental setting: 1) The correct way of comparing active learning methods is to compare learning curves; the authors only compare the final F1 measure. 2) None of the methods used in the paper are actual domain adaptation models; the authors simply aggregate the source and target data points to create a training set, which is not domain adaptation. 3) In the H-divergence methods, the authors use a decision-tree-based classifier as the discriminator. On the other hand, the vectors are distributed representations, and the features are not independent; tree-based classifiers assume feature independence.
Other less severe issues:
Several questions that the authors pose to empirically answer remain open and without answers, e.g., the questions in Section 6.2. There is nothing wrong with reporting failed expectations and failed methods; these are still informative. But when the number of these goes up, it indicates that some of the choices were initially incorrect. This further confirms my concern about the lack of relation between some of the methods used in the paper and active learning.
When the authors describe the method BALD, they cite a paper and state that the paper used dropout to quantify uncertainty. The paper that actually used dropout for this purpose is a different one [1].
The two tasks used to evaluate the AL methods are very similar in nature.
Not sure why the authors have used boldface fonts so frequently. They have also used red font a few times, which is not necessary.
[1] Gal and Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. ICML 2016. |
ICLR | Title
Real Data Distributions Prefer Simplicity and So Do Our Models: Why Machine Learning and Model Selection Are Possible
Abstract
No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While all but exponentially few uniformly sampled datasets have high complexity, we argue that neural network models share the same preference for low-complexity data that we observe on real-world problems. Notably, we show that architectures designed for a particular domain, such as computer vision, are compressors for labeling functions on a variety of seemingly unrelated domains. From our experiments, we see that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences and can therefore be used for inference. In principle, the use of expert knowledge and bias for simplicity of human practitioners could be folded into the learning algorithm, automating design and selection of models. We explain how typical areas requiring human intervention such as picking the appropriately sized model when labeled data is sparse or plentiful can be automated into a single learning algorithm. These observations help justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
1 INTRODUCTION
The problem of justifying inductive reasoning has plagued epistemologists since at least the 1700s (Hume, 1748). More recently, in the late 1990s, no free lunch theorems emerged from the computer science community as rigorous versions of arguments showing the impossibility of induction in contexts seemingly relevant to real machine learning problems (Wolpert, 1996; Wolpert & Macready, 1997). One such no free lunch theorem for supervised learning states that no single learner can solve every problem (Shalev-Shwartz & Ben-David, 2014). Another states that, assuming a world where the labeling functions of learning problems are drawn from a uniform distribution, no learner can reliably perform inference at all (Wolpert, 1996). Such a world would be altogether hostile to inductive reasoning. The assumption that labelings are drawn uniformly ensures that training set labels are entirely uninformative about unseen samples.
In contrast to this dismal outlook on machine learning, naturally occurring inference problems involve highly structured data. If we can design learning algorithms with inductive biases that are aligned with this structure, then we may hope to perform inference on a wide range of problems. In this work, we explore structure in both real-world data and modern machine learning models, primarily through the lens of Kolmogorov complexity.
The Kolmogorov complexity of some output is defined as the length of the shortest program under a fixed language that produces that output. In Section 3, we explain the connection between Kolmogorov complexity and compressibility. We note that virtually all random data cannot be significantly compressed, yet relevant datasets are highly compressible and hence substantially less complex. In particular, neural networks themselves can be used to compress data labelings, upper bounding their Kolmogorov complexity.
Complementing the low complexity of actual data, we then demonstrate in Section 4 that modern neural networks prefer low Kolmogorov complexity too. While models implemented on a computer provably cannot generate data with complexity exceeding the length of their associated program, we find they actually prefer data that is far simpler. We formulate simple languages for generating numerical sequences, under which we can directly measure the Kolmogorov complexity of a sequence. We use these languages to inspect the simplicity bias of both pre-trained and randomly initialized language models. GPT-3 (Brown et al., 2020) reliably favors less complex sequences, and bigger and better GPT-3 variants even more so. Notably, randomly initialized GPT models share this simplicity bias. As such, we can use them to predict the next element in a sequence as long as the target sequence is low-complexity. To further emphasize the universality of this simplicity bias, we cram tabular data from diverse domains, including click prediction and airline delay prediction, into convolutional computer vision architectures and show that these vision architectures prefer correct labelings to random ones, even on data which do not remotely resemble natural images. We use this property to compute cross-domain non-vacuous PAC-Bayes generalization bounds.
A common intuition associated with no free lunch theorems dictates that since a single learner cannot solve all problems, practitioners must inspect data and manually select an appropriate learner for the specific problem at hand. For example, a practitioner might select a more constrained model to avoid overfitting on small datasets, or they may select convolutional architectures to accommodate natural image data. To the contrary, we explain in Section 5 why model selection can be automated from the standpoint of PAC-Bayes theory, namely selecting between learning algorithms bears negligible complexity cost, and this cost is quickly overcome by gains in validation accuracy under reasonably sized validation splits. Moreover, a single learner, which on the one hand supports a variety of functions but on the other hand prefers simple ones, can solve a wide range of problems. We show that flexible models accompanied by a penalty which encourages simple solutions can solve problems with a variety of sample sizes. For example, a single learner which combines a small convolutional neural network and a transformer, while encouraging use of the small model insofar as it fits the training data, achieves strong performance both on small datasets like CIFAR-10 (Krizhevsky, 2009) and also large datasets like ImageNet (Deng et al., 2009). We highlight in Figure 1 that the historic evolution of machine learning systems supports the ability of a single learner to perform diverse tasks as highly task-specific pre-neural algorithms, such as Latent Dirichlet Allocation (Blei et al., 2003) and HOG (Dalal & Triggs, 2005), were replaced by neural architectures such as convolutional or recurrent models, and transformers can now handily perform all tasks listed.
Our contributions are summarized as follows:
• We demonstrate the direct connection between compressibility and learnability that is implicit in no free lunch theorems by deriving a new NFL theorem using Kolmogorov complexity.
• We show that the low Kolmogorov complexity of real datasets can be directly derived from the machine learning models used to fit them.
• We provide strong empirical evidence that neural networks have low complexity biases that are relevant even on data far from what they were designed for. To this end, we compute generalization bounds using convolutional vision models on tabular data, we showcase GPT-3’s strong preference for sequences generated by shorter expression trees, and we find that even randomly initialized language models have a simplicity bias.
• We argue why in principle model selection can be automated into a larger algorithm, and we provide explicit examples demonstrating how multiple models can be subsumed into a single powerful learner.
2 BACKGROUND
No free lunch theorems. No free lunch theorems (NFL) state that without making strong assumptions, a single algorithm cannot simultaneously solve all problems well. No free lunch theorems for search and optimization indicate that all optimizers and search algorithms satisfying certain conditions perform exactly the same on average over all such search or optimization problems (Wolpert et al., 1995; Wolpert & Macready, 1997). In this work, we narrow our focus to NFL for supervised learning. Wolpert (1996), and similarly Schaffer (1994), proves an analogous theorem for supervised learning under which every learner—a function that takes in labeled data and outputs a
labeling function for the associated domain—achieves exactly the same accuracy of 50% on average over all binary classification problems where accuracy is only evaluated on unseen samples.
In order to prove no free lunch theorems, one needs to place very strong assumptions on the lack of structure in the labeling function such as a uniform distribution, so that conditioning on the training labels does not modify the probability over labelings on unseen points (Rao et al., 1995). To illustrate the severity of this condition, imagine that a person is presented a sequence of one million 1s and asked whether they predict that the next element will be 1 or 0. If labelings of sequence elements were distributed uniformly, then we should assign equal probability to both options, despite the fact that intuition and Bayesian probabilistic models tell us overwhelmingly to favor 1.
Shalev-Shwartz & Ben-David (2014) instead do not assume a particular distribution over learning problems and prove that for every learner, there is a task on which the learner achieves poor accuracy with high probability over training splits, whereas another learner solves the same problem with perfect accuracy. Notably, the latter NFL computes accuracy over all data, not just “off-training” samples. While this statement of NFL does not require uniformly distributed data, if the existence of catastrophic failure modes for our learners is to be damning, our learners must encounter them in practice. After all, we do not care if our learners cannot solve problems we do not want to solve. Thus, the practical relevance of this theorem again hinges on the distribution over real-world learning problems and how well it aligns with the inductive bias of a learner. In this paper, we argue that the real-world learning problems we care about are highly structured, and the inductive biases of neural networks are well-aligned with such problems.
Kolmogorov complexity and compression. Kolmogorov complexity is a quantitative measure of complexity. For any fixed programming language L, the Kolmogorov complexity of data x, K(x), is the length of the shortest program in that language that outputs x (Kolmogorov, 1963). We can similarly define K(y|x) as the length of the shortest program which inputs x and outputs y. For definiteness, we assume that all programs and outputs are finite bitstrings.
Despite the fact that one cannot prove that any particular string has high complexity (Chaitin, 1974), all but exponentially few sequences of a given length have near-maximal Kolmogorov complexity. Since there can only be fewer than $2^{n+1}$ programs of length less than or equal to n, by the pigeonhole principle there are at most $2^{n+1}$ distinct bitstrings x with $K(x) \le n$. Equivalently, drawing a random bitstring x of length n, there is at most a $2^{1-k}$ chance that $K(x) \le n - k$. As this probability decays exponentially in k, this bound shows that the vast majority of bitstrings of length n have complexity close to n. One way of upper bounding K(x) is to compress x, but then we must include both the size of the compressed file and the size of the program required to decompress it. Alternatively, if we can construct a short program which directly outputs x, this program also forms a compression of x. The above discussion then demonstrates that the vast majority of bitstrings do not admit a non-trivial compression.
Notions of Kolmogorov complexity and simplicity bias have also been discussed in the context of universal induction algorithms and machine learning more broadly. Notably, one can use Kolmogorov complexity to construct the Solomonoff prior (Solomonoff, 1964), $P(x) = 2^{-[K(x)+2\log K(x)]}/Z$ (with $Z < 1$), a formalization of the principle of Occam's razor favoring simplicity. One can perform inference using just this prior via universal induction algorithms, and we discuss these matters in more detail along with complexity in deep learning in Appendix A.
PAC-Bayes compression bounds. Generalization bounds limit how a model’s expected risk R(h) will differ from its train risk R̂(h). PAC-Bayes generalization bounds (McAllester, 1998; 2003; 2013; Alquier, 2021) can be considered a generalization and improvement of the finite hypothesis bounds (Mohri et al., 2018) which provide a generalization guarantee even when the number of hypotheses is large (or even infinite) so long as the given hypothesis is likely under a prior P (h). The simplest version of the bound (McAllester, 1999) applied with a point mass posterior Q(h) = 1[h=h∗] states that
R(h) ≤ R̂(h) + √
− logP (h) + log(n/δ) + 2 2n− 1 (1)
with probability at least $1 - \delta$, where n is the size of the training set. Choosing the Solomonoff prior $P(h) = 2^{-[K(h)+2\log K(h)]}/Z$ as done in Lotfi et al. (2022b) and noting that the normalization constant $Z \le 1$, the prior likelihood of a given hypothesis can be bounded via
$$-\log_2 P(h) \;\le\; K(h) + 2\log_2 K(h) \;\le\; C(h) + 2\log_2 C(h)$$ for any compression scheme C. Combining these two bounds provides a theoretical basis for the PAC-Bayes compression approach explored in other work (Zhou et al., 2018; Lotfi et al., 2022b) and can be used to derive non-vacuous generalization bounds even for large neural networks when using a strong compression scheme.
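As a concrete illustration, here is a small sketch of evaluating the bound in Equation (1) with the prior term bounded by a model's compressed size in bits; the conversion to nats assumes natural logarithms in Equation (1), and the numbers in the usage line are hypothetical.

```python
import math

def pac_bayes_bound(train_risk, compressed_bits, n, delta=0.05):
    # -log2 P(h) <= C(h) + 2 log2 C(h); convert bits to nats for the bound.
    neg_log_prior = (compressed_bits + 2 * math.log2(compressed_bits)) * math.log(2)
    slack = math.sqrt((neg_log_prior + math.log(n / delta) + 2) / (2 * n - 1))
    return train_risk + slack

# e.g. a model compressed to 10 kbits, trained on 50k points with 5% train error:
print(pac_bayes_bound(0.05, 10_000, 50_000))  # roughly 0.31, a non-vacuous bound
```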
3 STRUCTURE IN REAL-WORLD DATA
The often-cited no free lunch theorem of Wolpert (1996) states that all learners perform the same when averaged over a uniform distribution on all possible datasets. However, the assumption of uniform samples is very strong and extremely unnatural. This assumption subtly selects high-complexity incompressible data, a set of problems on which learning is fundamentally impossible. By means of a hypothesis test, we are able to conclusively rule out the possibility that real datasets are drawn from the uniform distribution, and therefore rule out assertions that the NFL applies to problems of interest. We bring to the surface the centrality of incompressibility in NFL theorems by deriving a new NFL theorem directly from ideas of data compression and Kolmogorov complexity.
3.1 AN EXERCISE IN BOUNDING THE COMPLEXITY OF DATASETS
We first consider the hypothesis that unlabeled machine learning datasets are drawn uniformly at random and use a bound on the Kolmogorov complexity as a test statistic. Using bzip2, and including the size of the compression program, we compress the text dataset Amazon Review Full (McAuley & Leskovec, 2013) and the audio dataset LibriSpeech (Panayotov et al., 2015) to 393.2 MB and 8.36 GB respectively, providing upper bounds on the Kolmogorov complexity with respect to the Python programming language. Supposing the datasets were in fact uniformly randomly sampled, the probability of observing complexities this size or smaller is less than $2^{-2^{32}}$ and $2^{-2^{36}}$ respectively, astronomically low p-values, conclusively ruling out the possibility that they were sampled in this way. If we randomly shuffle the datasets, we instead obtain bounds of only 836.7 MB and 9.69 GB, considerably larger, showing that the compressibility results not just from an inefficient encoding but from structure in the dataset. Other works have also examined Kolmogorov complexity in data, for example EEG patterns (Petrosian, 1995) or animal behavior (Zenil et al., 2015), and likewise confirm that such naturally occurring data is simple.
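A minimal sketch of this style of test, assuming a placeholder corpus path; here the control shuffles bytes rather than dataset examples, which is a simplification of the procedure described above.

```python
import bz2
import random

def compressed_bits(data: bytes) -> int:
    # Compressed length upper-bounds K(x) up to the decompressor's size.
    return 8 * len(bz2.compress(data, compresslevel=9))

raw = open("corpus.txt", "rb").read()   # placeholder path
shuffled = bytearray(raw)
random.shuffle(shuffled)                 # destroys cross-byte structure

print(compressed_bits(raw), compressed_bits(bytes(shuffled)))
# Structured data compresses far below the shuffled control.
```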
3.2 NEURAL NETWORKS AS COMPRESSORS OF THE LABELING FUNCTION
More relevant to the context of supervised learning, we show that not only are the unlabeled datasets compressible; the labeling functions are too. Further, this can be demonstrated using trained models as compressors. Given a labeled dataset $D = (X, Y) = \{(x_i, y_i)\}_{i=1}^n$, any likelihood model p(y|x), regardless of whether the top predictions are correct, can be used to generate a lossless compression scheme to encode the dataset labels Y given the inputs X. Using a stream code such as arithmetic coding (Witten et al., 1987) in combination with the probability model p(y|x), the labels can be encoded in $K(Y|X, p) \le -\sum_{i=1}^n \log_2 p(y_i|x_i) + 3$ bits. Models which maximize the log likelihood of the data also implicitly minimize the length of this encoding.
One can show by delimiting, as is done in Fortnow (2000), that $K(Y|X) \le K(Y|X, p) + K(p) + 2\log_2 K(p) + c$, where c is some small constant depending on the language. Writing the negative log likelihood in terms of the empirical cross entropy, combining our two inequalities, and dividing by the size of the dataset n yields
$$\frac{1}{n} K(Y|X) \;\le\; \mathrm{CE}/\ln 2 \;+\; n^{-1}\left(K(p) + 2\log_2 K(p) + c\right), \qquad (2)$$
where CE is the cross entropy of the classifier p averaged over dataset D. This implies that, regardless of how large the model is, so long as CE is better than random guessing and the size n of the dataset is large enough, the model provides a non-trivial compression of the dataset. To demonstrate this fact, we employ the compression scheme from Lotfi et al. (2022b) in order to find a compressed representation of MLPs on several class-balanced tabular classification datasets.1 As shown in Figure 2 (middle), we are able to compress the labels on most of the datasets to well below the naive $n \log_2 C$ encoding length, where C is the number of classes. We also apply the method with convolutional architectures to compress labels on CIFAR-10 and CIFAR-100 in Figure 2 (right), allowing us to reject the hypothesis that the labeling functions are drawn uniformly at random with extremely high confidence.
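As a concrete illustration of Equation (2), here is a small sketch of the encoded length of labels under a classifier, with hypothetical numbers showing that a model only slightly better than chance still yields a non-trivial compression once n is large enough to amortize the model description.

```python
import numpy as np

def label_code_length_bits(probs, model_bits, overhead=3):
    nll_bits = float(-np.log2(probs).sum()) + overhead      # bounds K(Y|X, p)
    # Add the (compressed) model description length to bound K(Y|X).
    return nll_bits + model_bits + 2 * np.log2(model_bits)

# Hypothetical: p(y_i|x_i) = 0.12 on every example, slightly above 1/C for C = 10.
n, C = 100_000, 10
probs = np.full(n, 0.12)
print(label_code_length_bits(probs, model_bits=1e4), n * np.log2(C))
# ~316k bits versus the ~332k-bit naive n*log2(C) encoding: a real compression.
```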
3.3 A KOLMOGOROV-STYLE NO FREE LUNCH THEOREM
A corollary of Equation 2 is that if the dataset is incompressible, then no model can do better than random chance in the limit of a large dataset. Since compressible datasets in uniformly sampled data are exponentially unlikely, we can prove our own version of the no free lunch theorem directly from this constraint on complexity. With very high probability, on any given uniformly sampled dataset, learning is impossible. Theorem 1. Let (X,Y ) be a labeled dataset with n data points and uniformly sampled random labels from C classes. Then, with probability at least 1− δ, for every classifier p(y|x),
$$\mathrm{CE}(p) \;\ge\; \ln C \;-\; \frac{\ln 2}{n}\left(K(p) + 2\log_2 K(p) + \log_2 \frac{1}{\delta} + c\right) \qquad (3)$$
where CE(p) is the empirical cross entropy of the classifier p(y|x) on the data. Thus for any model of bounded size, if the size of the dataset is large enough, the model cannot represent any classifier with cross entropy appreciably smaller than that attained by random guessing.
Proof: See Appendix B
Like any of the no free lunch theorems, this initially seems limiting, but here we see that the incompressibility of the datasets from the uniform distribution is the fundamental issue. On compressible datasets (ones with less than maximal complexity), learning is possible.
Takeaway: Real datasets are highly unlike the high complexity samples from the uniform distribution where learning is impossible.
4 LOW-COMPLEXITY BIAS IN MACHINE LEARNING MODELS
In the previous section, we saw that real-world data distributions across domains are highly structured and therefore have low Kolmogorov complexity. If we can construct models which prefer the same low-complexity data, then we can hope to perform inference with a single model across many such domains. While early machine learning systems incorporated highly domain-specific design elements, such as handcrafted image features (Dalal & Triggs, 2005) or graphical models for language modeling (Mnih & Hinton, 2007), modern neural network architectures across domains are converging on transformers (Vaswani et al., 2017; Dosovitskiy et al., 2020; Gulati et al., 2020; Somepalli et al., 2021), some of which can simultaneously achieve impressive performance on a variety of data types with a single architecture (Jaegle et al., 2021).
In this section, we argue that neural networks have a generic simplicity bias that extends well beyond the datasets for which they are designed. To this end, we: (1) feed tabular datasets from diverse
1https://www.openml.org/
domains such as click prediction and airline delay prediction into neural networks designed specifically for computer vision and find that they prefer naturally occurring labelings to random ones, (2) formulate a language with respect to which we can measure the Kolmogorov complexity of numerical sequences and observe that GPT-3 generates low-complexity sequences with exponentially higher probability than complex ones, (3) predict the next term in a sequence with randomly initialized language models. Whereas the no free lunch theorem of Wolpert (1996) ensures that such an inference procedure cannot outperform a random guess on average, we find that randomly initialized neural networks prefer sequence completions which generate low-complexity completed sequences, demonstrating that they can make accurate guesses as long as the ground truth sequence distribution also favors low complexity.
4.1 NEURAL NETWORKS PREFER NATURALLY OCCURRING LABELINGS ACROSS DOMAINS
The inductive biases of even specialized architectures like convolutional neural networks facilitate broad learning abilities. To demonstrate this fact, we take tabular classification datasets and encode the tabular features as an image by simply forming images with one channel so that each pixel corresponds to a different feature, padding the edges with zeros as needed. We train a small 9-layer convolutional network using this input data to predict the classification labels. Since the data has no local or translation-equivariant structure, learning with the convolutional network requires breaking these structures. Even in spite of this mismatch, the convolutional networks perform much better than random chance. Furthermore, using the PAC-Bayes compression methodology from Lotfi et al. (2022b), we are able to find a compressible set of parameters providing non-vacuous generalization bounds on the model's performance, as shown in Figure 2. Despite violating the locality and translation-equivariance inductive biases of the architecture, the model is still highly compressible when fitting the tabular data and provably generalizes, suggesting that a core part of these inductive biases is general and shared with this tabular data.
Takeaway: While architectures such as CNNs have inductive biases tailored for specific problems, much of their inductive bias is general, extending across domains.
4.2 GPT-3 ASSIGNS EXPONENTIALLY HIGHER PROBABILITY TO SIMPLER SEQUENCES
We now study the preference that GPT-3, a line of high-performance autoregressive text generators, has for simpler sequences. The ability of language models to solve reasoning problems has recently been studied by Zelikman et al. (2022), who develop an effective prompting framework, and d'Ascoli et al. (2022), who develop transformers for predicting symbolic expressions directly from the sequence. To perform our own study, we need a well-defined, computable notion of complexity. We thus define a simple, non-Turing-complete language and measure Kolmogorov complexity with respect to this simple language. Namely, we generate integer sequences with binary expression trees. We then define the complexity of a sequence as the size of the smallest expression tree, measured as the number of internal nodes, or equivalently the number of operators in the expression represented by the tree, that generates that sequence. By using a small set of terms for the leaves and binary operators for the nodes, we can enumerate all possible expression trees of size at most L and compute all sequences with complexities 0 through L.
In our experiments, we use the operations +, ×, and //, where // denotes integer division. For leaves, we use 2 and i, where i is the index within the sequence. For example, (2 + i) × i could be implemented with a tree of size 2 and would generate the sequence $a_i = 0, 3, 8, 15, ...$ Using this setup, we are able to generate sequences of varying complexity, according to a well-defined metric, and quantify the preference of GPT-3 models for simpler sequences over more complex ones. We provide details on how we tokenize sequences and extract their probabilities in Appendix D.
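A hedged sketch of measuring complexity under this language by brute-force enumeration; the recursive generator below is an illustrative reconstruction, not the paper's code.

```python
from itertools import product

def trees(n_ops):
    """Yield functions f(i) computed by trees with exactly n_ops operators."""
    if n_ops == 0:
        yield lambda i: 2  # constant leaf
        yield lambda i: i  # index leaf
        return
    for left_ops in range(n_ops):
        for L, R in product(trees(left_ops), trees(n_ops - 1 - left_ops)):
            # Bind L, R immediately to avoid Python's late-binding pitfall.
            yield (lambda L, R: lambda i: L(i) + R(i))(L, R)
            yield (lambda L, R: lambda i: L(i) * R(i))(L, R)
            yield (lambda L, R: lambda i: L(i) // R(i))(L, R)

def complexity(seq, max_ops=4):
    """Fewest operators needed to reproduce seq (indexed from i = 0)."""
    for n_ops in range(max_ops + 1):
        for f in trees(n_ops):
            try:
                if all(f(i) == x for i, x in enumerate(seq)):
                    return n_ops
            except ZeroDivisionError:
                continue
    return None  # more complex than max_ops

print(complexity([0, 3, 8, 15, 24]))  # (2 + i) * i -> complexity 2
```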
In Figure 3, we measure the average log-probability GPT-3 models assign to sequences of a given Kolmogorov complexity, where we fix the number of numerical tokens input into the model to be 30, and we observe that the probabilities assigned by these language models decrease exponentially with sequence complexity, similar to the Solomonoff prior discussed in Section 2. In contrast, a uniform prior would be described by a flat line. Note that for very complex sequences, we cannot easily measure their minimum description length, so we limit our experiments to expression trees with at most 7 operators. In this low-complexity regime, we observe that bigger GPT-3 models which are better at language modeling, such as Davinci which contains 175 billion parameters, assign higher probability to these simple sequences than much smaller GPT-3 models such as Ada. The legend lists GPT-3 variants in order of increasing size.
We can also examine the decay of such log-probabilities as we feed more tokens, corresponding to digits of sequence elements, into the model. As the sequences get longer, we see in Figure 3 that the probabilities assigned to sequences decay sub-exponentially, indicating that these models, especially bigger variants such as Davinci, become more and more confident about later sequence elements.
4.3 EVEN RANDOMLY INITIALIZED LANGUAGE MODELS PREFER LOW COMPLEXITY
The previous section solely examined pre-trained language models, but these models have been trained on massive corpora of data. Do they prefer low complexity before they have even seen any data at all? While initializers of such models follow highly diffuse distributions over parameters, such random parameters can induce highly structured functions.
Trained language models are known to repeat themselves when generating text (Holtzman et al., 2020; Fu et al., 2021). One might think that this behavior is learned from training data
which often contains repeated text, but we show in this section that randomly initialized GPT models repeat themselves too. Interestingly, we can formalize the preference for repetition as a preference for low Kolmogorov complexity. In order to disentangle the impact of initialization from training,
we adopt a simple language for generating binary sequences under which we can quickly measure Kolmogorov complexity. We consider a program to be a bitstring, and then the program upon execution simply repeats the bitstring until output reaches length 10. Under this language, the sequence 0, 0, 0, ... has Kolmogorov complexity 1, and 0, 1, 0, 1, ... has complexity 2, yet randomly generated sequences are exponentially more likely to have high complexity. We conduct our evaluations exhaustively on all such sequences of length 10.
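Under this repetition language, complexity is simply the length of the shortest prefix that regenerates the string, so it can be computed exactly. A minimal sketch:

```python
def repetition_complexity(bits: str) -> int:
    """Complexity under the repetition language: the length of the shortest
    bitstring that, repeated and truncated, reproduces `bits`."""
    n = len(bits)
    for k in range(1, n + 1):
        if (bits[:k] * n)[:n] == bits:  # repeat the length-k prefix, truncate
            return k
    return n  # unreachable: k = n always matches

assert repetition_complexity("0000000000") == 1
assert repetition_complexity("0101010101") == 2
```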
We now generate sequences of length 10 with randomly initialized GPT-2 language models (Radford et al., 2019), such that each initialization is used to generate one sequence, and we measure the frequency with which each such sequence is generated. The estimated generation probabilities by Kolmogorov complexity are found in Figure 4, where we see again that low-complexity sequences are assigned exponentially higher probabilities. Here, we see that pre-trained checkpoints exhibit an even stronger preference for low complexity, as they are trained on structured text. We can also use the randomly initialized language models to perform next element prediction by estimating the probabilities they assign to the next element in a sequence, conditioned on having correctly generated the previous terms. While Wolpert's no free lunch theorem (Wolpert, 1996) ensures that the average completion accuracy over all possible length 10 bitstrings is exactly 0.5, we verify in Figure 4 that randomly initialized networks can be used for sequence completion when the sequence is of low complexity. Additional details and experiments with other GPT-2 architecture variants are found in Appendix E.
We can further generate very long length-100 sequences with randomly initialized and also pre-trained GPT-2 models and run a simple hypothesis test, demonstrating that both randomly initialized and pre-trained models generate lower Kolmogorov complexity sequences on average than a uniform distribution. Over 100,000 samples from each of these three generative distributions and with a one-tailed t-test on the null hypothesis that $\mu(K(S_{\mathrm{GPT}})) \geq \mu(K(S_U))$, where $S_{\mathrm{GPT}}$ and $S_U$ respectively denote random sequences generated by the language model or a uniform distribution, we reject this null hypothesis in both randomly initialized and pre-trained models with an extremely low p-value, indicating that language models are indeed more likely to generate simple sequences, and pre-trained models even more so. Further details can be found in Appendix E. We conclude that neural networks for language generation, both trained and randomly initialized, express a bias towards low Kolmogorov complexity which mirrors that of data as demonstrated in Section 3.
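The test itself is standard; a sketch of how it might be run, where the arrays below are random placeholders standing in for the actual complexity measurements (for instance, ones computed with a function like `repetition_complexity` above):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder data: in the real experiment these are the complexities of
# 100,000 length-100 sequences sampled from each source.
k_gpt = rng.integers(80, 101, size=100_000)      # stand-in for model samples
k_uniform = rng.integers(95, 101, size=100_000)  # stand-in for uniform samples

# One-sided test of H0: mean(K(S_GPT)) >= mean(K(S_U)).
t_stat, p_value = stats.ttest_ind(k_gpt, k_uniform, alternative="less")
print(t_stat, p_value)
```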
Takeaway: Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity.
5 MODEL SELECTION WITH A PREFERENCE FOR SIMPLICITY
In typical industrial workflows, practitioners examine their data and select an appropriate learner. For example, if data are linearly separable, then a practitioner might use logistic regression, whereas if the data is more complex, the same practitioner may instead choose a neural network. We can then consider the human model selector and the model they select as a single meta-learner. While early works conjectured that automated meta-learners are ruled out by no free lunch theorems (Giraud-Carrier & Provost, 2005), subsequent works show that model selection can in fact be automated in practice (Vilalta & Drissi, 2002). Giraud-Carrier & Provost (2005) show that with minimal assumptions, the defeating conclusion of the no free lunch theorem can be escaped by meta-learners. Subsequent works prove the existence of meta-induction algorithms, for selecting amongst all induction methods, which are near-optimal among learners (Schurz, 2008; Schurz & Thorn, 2017). In this section, we argue why in principle, model selection can be automated from the view of complexity.
5.1 MODEL SELECTION AND GENERALIZATION BOUNDS
When developing a machine learning approach for a given application, it is often helpful to leverage domain knowledge in constructing or choosing the right model for the task. One might start by choosing from architecture families like MLPs, CNNs, GNNs, PointNets, or Transformers and then decide on the appropriate way of featurizing the inputs, possibly incorporating knowledge of data symmetries via hard-coded equivariances or data augmentations. Even just enumerating the possible models a practitioner would consider and choosing based on their expertise and prior knowledge, the number is not tremendously large. Even if we are extremely generous and suppose that the practitioner is choosing from 100 million models, we can consider the impractical algorithm of selecting between them via cross validation. While one might expect that such a procedure would overfit, in fact even finite hypothesis bounds show this is not the case. Using cross validation on a validation set of size n = 20000 for a classification problem, plugging a uniform prior $P(h) = 10^{-8}$ into Equation 1, we get that the gap between validation and test error will be less than 2.9% with probability greater than 99%. Ultimately, this is the case because we only need a number of data points proportional to the log of the size of the hypothesis space for finite hypothesis bounds.
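The arithmetic behind the 2.9% figure can be checked directly; a quick sketch, plugging $-\log P(h) = \log 10^{8}$ into Equation 1:

```python
import math

n, delta = 20_000, 0.01    # validation set size and failure probability
num_models = 10**8         # uniform prior P(h) = 1e-8 over 100 million models

# Equation 1 with a point-mass posterior and -log P(h) = log(num_models):
gap = math.sqrt((math.log(num_models) + math.log(n / delta) + 2) / (2 * n - 1))
print(f"{gap:.4f}")        # ~0.0296, i.e. roughly 2.9%
```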
Considering even a more general class of models, one may consider the number of bits of prior knowledge needed to specify the model architectures like MLPs, CNNs, or GNNs as well as symmetry groups and any other required information. In each of these cases, the model architectures can in fact be expressed with a small number of bits. A near state-of-the-art computer vision model can be expressed in only 280 characters (Trockman & Kolter, 2022) of PyTorch. Similarly, important symmetry groups like translations, rotations, reflections, and other matrix groups can be expressed in only a few lines of code (Finzi et al., 2021) and can be used to encode equivariances or used for augmentation. Therefore, even in selecting from all possible models that can be expressed in that short amount of code, we can expect to generalize with only tens of thousands of data points.
Takeaway: In principle, automating model selection directly via cross validation provably generalizes well across millions of models with only thousands of data points.
5.2 ONE MODEL FOR BIG AND SMALL TRAINING SETS
It is commonly believed that small training datasets demand compact neural network architectures, whereas large training datasets can accommodate flexible architectures. Accordingly, practitioners hand select appropriate models for their datasets. In this section, we argue that a single learner can be effective for all data sizes as long as we encode our a priori preference for simplicity. Our prior should prefer simple functions we believe are more likely yet also support a wide variety of functions in case training data rules out our preferences. We begin with a simple illustration.
Polynomial regression. Common intuition dictates that high degree polynomials easily overfit their training data and should be avoided on small training sets. In contrast, low degree polynomials cannot fit complicated functions so they should be avoided when training data is plentiful. However, we demonstrate here that we can rely on lower order terms insofar as they can explain the data while still allowing higher order terms if they are necessary.
To this end, we adopt Tikhonov regularization with a Tikhonov matrix given by $\mathrm{diag}(\{\alpha k^2\}_{k=0}^{d})$, that is, we impose an $\ell_2$ penalty which increases quadratically with the order of the corresponding monomial. We consider three models: non-regularized degree 2 and degree 10 polynomials as well as a degree 10 polynomial regularized as mentioned above. In Figure 5, we perform regression on the cosine target function (left) and degree 2 (center) and degree 10 (right) polynomial target functions. We confirm that even though a high degree polynomial model performs poorly with few training samples, it eventually outperforms its lower degree counterpart. Meanwhile, a high degree polynomial model with the above Tikhonov regularization outperforms both non-regularized models across most sample sizes on cosine and degree 10 polynomial regression problems, and it notably matches the performance of a degree 2 model even on data generated by a degree 2 polynomial. We include details regarding the example target functions and the data sampling process in Appendix F.
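For illustration, a minimal sketch of the regularized fit via the closed-form Tikhonov solution; the data generation mirrors Appendix F, but treat this as a sketch rather than the exact experimental code.

```python
import numpy as np

def fit_tikhonov_poly(x, y, degree=10, alpha=0.01):
    """Fit a degree-`degree` polynomial with Tikhonov matrix diag(alpha * k^2)."""
    Phi = np.vander(x, degree + 1, increasing=True)       # columns x^0 ... x^degree
    Gamma = np.diag([alpha * k**2 for k in range(degree + 1)])
    # Closed-form solution of min ||Phi w - y||^2 + ||Gamma w||^2:
    return np.linalg.solve(Phi.T @ Phi + Gamma.T @ Gamma, Phi.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)                                 # uniform on the unit interval
y = np.cos(1.5 * np.pi * x) + rng.normal(0, 0.1, x.shape) # cosine target plus noise
w = fit_tikhonov_poly(x, y)
```

Because the penalty on the coefficient of $x^k$ grows with $k^2$, the fit uses low-order terms where they suffice but can still recruit high-order terms when the data demands them.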
Neural networks. We illustrate a similar concept with neural networks. We consider a small network, GoogLeNet (Szegedy et al., 2015), which performs well on small datasets such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) but poorly on larger datasets like ImageNet (Deng et al., 2009). We also consider a large network, ViT-B/16 (Dosovitskiy et al., 2020), which performs significantly worse on CIFAR variants but much better on ImageNet. As in the polynomial regression example above, we can combine these two architectures, specifying our preference for the simpler GoogLeNet to the extent that it fits the training data. To learn on a dataset, we train both models and then take a convex combination of their logits, $c \cdot \mathrm{logits}_{\mathrm{ViT}} + (1 - c) \cdot \mathrm{logits}_{\mathrm{GoogLeNet}}$, controlled by a single parameter $c$ with $\ell_2$ regularization in favor of GoogLeNet. In Figure 6, we observe that while GoogLeNet and ViT each have strengths and weaknesses, combining them with a preference for simplicity achieves the best of both worlds. While aggressively restricting our architectures can decrease computational cost, it is unnecessary for generalization. Details and additional experiments with Swin Transformer (Liu et al., 2021) are found in Appendix F.
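A schematic of the combination, assuming two already-trained, frozen backbones; the sigmoid parameterization of c and the exact way the ℓ2 penalty enters the loss are our assumptions about details the text leaves open.

```python
import torch

class ConvexCombination(torch.nn.Module):
    """Blend frozen GoogLeNet and ViT logits with one learned weight c."""

    def __init__(self, googlenet, vit):
        super().__init__()
        self.googlenet, self.vit = googlenet, vit
        self.raw_c = torch.nn.Parameter(torch.zeros(()))  # sigmoid(raw_c) = c in (0, 1)

    def forward(self, x):
        c = torch.sigmoid(self.raw_c)
        with torch.no_grad():                 # both backbones stay frozen
            logits_g = self.googlenet(x)
            logits_vit = self.vit(x)
        return c * logits_vit + (1.0 - c) * logits_g

    def penalty(self, coeff=1e-5):
        # Decay on c itself pulls c toward 0, i.e. toward the simpler GoogLeNet.
        return coeff * torch.sigmoid(self.raw_c) ** 2
```

In the 10-epoch phase described in Appendix F, only `raw_c` would receive gradients, with `penalty()` added to the cross-entropy loss.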
6 DISCUSSION
While large ML models are highly flexible, we saw in this work that they reliably prefer low Kolmogorov complexity solutions, aligning well with relevant learning problems, despite not being designed with complexity in mind. This observation raises the question: why exactly do neural networks encode such a strong preference for low complexity, and how can we tune this preference? Complementing the above observation, we also saw that a single expressive model which simultaneously supports a wide variety of solutions but prefers simple ones can solve both simple and hard problems across sample sizes. Such learners present clear advantages over the current paradigm in deep learning, in which we manually select small constrained architectures or large ones with mild inductive biases, depending on the problem. Keeping this possibility in mind, how can we design expressive yet simplicity-biased models with affordable computational costs?
A UNIVERSAL INDUCTION AND COMPLEXITY IN DEEP LEARNING
Universal induction. Inspired by Kolmogorov complexity, a line of work introduces universal induction methods, which prefer low complexity answers (Solomonoff, 1964; Hutter, 2000; Nakkiran, 2021). Notably, Solomonoff induction (Solomonoff, 1964; Rathmanner & Hutter, 2011) places a probability distribution over bitstrings which vanishes exponentially with their Kolmogorov complexity with respect to a universal Turing machine. Then, presented with an observed sequence (e.g. training data) called a prefix and a candidate completion, we can perform inference via p([prefix, completion] | prefix). In other words, we assign probability to each completion proportional to its probability among sequences which begin with the prefix. Since low-complexity bitstrings are assigned higher probability, we will prefer completions which result in low-complexity completed sequences. Subsequent literature argues that Solomonoff induction explains why no free lunch theorems are practically irrelevant, namely weak assumptions on the structure of learning problems allow for near-optimal universal learners (Lattimore & Hutter, 2013).
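As a toy illustration of this inference rule (our construction, not an experiment from the paper), one can carry it out exactly over the small repetition language of Section 4.3, where a program is a bitstring of length k with prior weight proportional to $2^{-k}$:

```python
from itertools import product

def next_bit_posterior(prefix: str, max_len: int = 10) -> dict:
    """Toy Solomonoff-style inference over the repetition language: a program
    is a bitstring of length k (prior weight 2^-k) whose output repeats it
    up to length `max_len`."""
    assert len(prefix) < max_len
    weights = {"0": 0.0, "1": 0.0}
    for k in range(1, max_len + 1):
        for bits in product("01", repeat=k):
            out = ("".join(bits) * max_len)[:max_len]
            if out.startswith(prefix):            # program consistent with the prefix
                weights[out[len(prefix)]] += 2.0 ** -k
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

print(next_bit_posterior("010101"))  # heavily favors '0'
```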
Complexity in deep learning. Several works have related Kolmogorov complexity to neural networks. Pearlmutter & Rosenfeld (1990) argues that random initialization and noise in data increase the complexity of neural networks but that ensembling such models reduces complexity in expectation. Schmidhuber (1997) proposes another method for reducing Kolmogorov complexity by instead explicitly searching for simple neural networks and finds improvements in generalization on very small problems where such a search is computationally feasible. Another line of study proves that multi-layer perceptrons with boolean input features are biased towards low-entropy functions, namely ones which classify disproportionately many or few points into the same class (Mingard et al., 2019) or are insensitive to flips in the boolean features (De Palma et al., 2019).
Other notions of simplicity have also been proposed and applied to neural networks, for example involving the implicit bias of SGD (Damian et al., 2021); the connection between flat loss minima with wide-margins decision boundaries and improved generalization (Huang et al., 2020; Izmailov et al., 2018); the tendency of models to rely on simple, compressible features (Geirhos et al., 2020); or the potential for overparameterized networks to be implicitly regularized, leading to capacity control (Neyshabur et al., 2014). The marginal likelihood, or the probability that a random sample from the prior generates the data, is a natural statement of Occam’s razor or the idea that simple models should be preferred (Lotfi et al., 2022a). Diffuse priors which spread mass out around the diverse high-complexity solutions are unlikely to generate observed data, so effective models admit a far higher marginal likelihood on naturally occurring labelings compared to random ones (Wilson & Izmailov, 2020).
B A KOLMOGOROV NO FREE LUNCH THEOREM PROOF
Rewriting Equation (2), we have
$$\mathrm{CE}(p) \geq \frac{\ln 2}{n}\left(K(Y|X) - K(p) - 2\log_2 K(p) - c\right).$$
Note that by simply counting all possible programs taking input $X$, there are fewer than $2^{k+1}$ labelings $Y$ with $K(Y|X) \leq k$. There are $C^n$ distinct labelings, from which we are drawing uniformly, so that
$$\begin{aligned} \mathbb{P}(K(Y|X) > n\log_2 C - m) &= 1 - \mathbb{P}(K(Y|X) \leq n\log_2 C - m) \\ &\geq 1 - \mathbb{P}(K(Y|X) \leq \lceil n\log_2 C \rceil - m) \\ &\geq 1 - \frac{2^{\lceil n\log_2 C \rceil - m + 1}}{C^n} \\ &\geq 1 - 2^{2-m}. \end{aligned}$$
Alternatively, with probability at least $1 - \delta$,
$$K(Y|X) > n\log_2 C - \log_2 \frac{1}{\delta} - 3.$$
Thus, with probability at least $1 - \delta$, we have for every classifier $p$,
$$\mathrm{CE}(p) \geq \ln C - \frac{\ln 2}{n}\left(K(p) + 2\log_2 K(p) + \log_2 \frac{1}{\delta} + c\right).$$
C PAC-BAYES COMPRESSION EXPERIMENTAL DETAILS
For the OpenML tabular classification datasets, we preprocess them first by balancing the classes, subsampling all classes down to the number of examples of the least likely class. This way, when compressing the datasets, any result achieved is nontrivial, in contrast with a very class-imbalanced dataset. We closely follow the compression method of Lotfi et al. (2022b), including the small 9-layer convolutional architecture which they use to generate their bounds. When cramming the tabular data into this convnet, we combine numerical features with one-hot encoded categorical features and then pack these into the pixels of a 1-channel image, using an image just large enough to fit all of the different features inside.
With respect to the sizes of the random subspaces that we train the compressed models in, we consider 250, 500, 1000, and 2000. For tabular label compression, we employ a 2 hidden layer MLP with hidden dimension k = 192, and we consider the same 250, 500, 1000, and 2000 values for the subspace dimension. We train for 80 epochs with 20 epochs of quantization at a batch size of 512 using Adam at learning rate $3 \times 10^{-4}$. For image classification label compression, we use the 9-layer convnet with subspace dimensions 2000, 3000, and 5000, and we train for 80 epochs using SGD at learning rate 0.1 and quantize for the remaining 20 epochs, at a batch size of 50. For calculating the length of the code for the model architecture and decompressor, we need only the implementation of the model, the arithmetic decoder, and the loading of the quantized values. Removing wasted bits, we minified the Python file, leading to a size of approximately 2.5KB.
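For intuition, a schematic of the random-subspace parameterization underlying this compression approach; real implementations avoid materializing the dense projection, so treat this as a sketch of the idea from Lotfi et al. (2022b) rather than their code.

```python
import torch

class SubspaceParams(torch.nn.Module):
    """Schematic of training in a random affine subspace: theta = theta0 + P @ w."""

    def __init__(self, num_params: int, subspace_dim: int = 1000, seed: int = 0):
        super().__init__()
        gen = torch.Generator().manual_seed(seed)
        # P and theta0 are regenerated from the seed at decode time, so only the
        # (quantized) low-dimensional w must be paid for in the description length.
        self.register_buffer(
            "P", torch.randn(num_params, subspace_dim, generator=gen) / subspace_dim**0.5
        )
        self.register_buffer("theta0", torch.randn(num_params, generator=gen))
        self.w = torch.nn.Parameter(torch.zeros(subspace_dim))  # trained, then quantized

    def full_params(self) -> torch.Tensor:
        return self.theta0 + self.P @ self.w
```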
D GPT-3 EXPERIMENTAL DETAILS
To feed sequences into a model, we split up sequence elements into individual byte-pair encoding tokens corresponding to their decimal digits, and we place comma tokens between sequence elements as delimiters, also beginning every input with an <|endoftext|> token. We choose to use the byte-pair encoding of decimal digits with a space inserted before the digit, e.g. ‘ 0’, as this is known to enhance the ability of language models to perform arithmetic (Zelikman et al., 2022). For example, the sequence 10, 11 will be split up into [‘<|endoftext|>’, ‘ 1’, ‘ 0’, ‘,’, ‘ 1’, ‘ 1’], and each element of the list is tokenized individually. Then, the log-probability of a sequence is given by the sum of the log-probabilities corresponding to the correct decimal digits in their respective slots of the model’s output. Note that various sequences will contain different numbers of decimal digits, and the sequence’s log-probability will decrease with every token. Therefore, in order to ensure a fair comparison, we limit all sequences to 30 decimal digit tokens and truncate there.
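A sketch of the scoring procedure, shown with a local Hugging Face GPT-2 model standing in for the GPT-3 API (whose per-token log-probabilities are obtained analogously); the helper name is ours, and the formatting follows the description above.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def sequence_logprob(seq):
    """Sum of log-probabilities of the digit/comma tokens of an integer sequence."""
    text = ",".join(" " + " ".join(str(x)) for x in seq)  # ' 1 0, 1 1' digit format
    ids = tok("<|endoftext|>" + text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    # Log-probability of each actual next token given the preceding context.
    return logp.gather(1, ids[0, 1:, None]).sum().item()

print(sequence_logprob([10, 11, 12]))
```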
E SEQUENCE GENERATION AND COMPLETION WITH RANDOMLY INITIALIZED LANGUAGE MODELS
For these experiments, we use Hugging Face² GPT-2 architectures and pre-trained checkpoints. In order to estimate the probabilities assigned by randomly initialized language models to each bitstring, we generate one million random sequences, ensuring that many instances of each bitstring are generated, as there are only $2^{10} = 1024$ bitstrings of length 10.
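A sketch of the generation loop; restricting sampling to the ' 0' and ' 1' tokens is our reading of how binary sequences are produced here, so treat the details as illustrative.

```python
import torch
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
bit_ids = [tok.encode(" 0")[0], tok.encode(" 1")[0]]  # token ids for ' 0' and ' 1'

def sample_bitstring(length=10, seed=0):
    """One length-10 bitstring from one fresh random initialization."""
    torch.manual_seed(seed)
    model = GPT2LMHeadModel(GPT2Config()).eval()  # randomly initialized, untrained
    ids = torch.tensor([[tok.bos_token_id]])
    bits = ""
    for _ in range(length):
        with torch.no_grad():
            logits = model(ids).logits[0, -1, bit_ids]  # restrict to ' 0' / ' 1'
        choice = torch.multinomial(torch.softmax(logits, dim=-1), 1).item()
        bits += str(choice)
        ids = torch.cat([ids, torch.tensor([[bit_ids[choice]]])], dim=1)
    return bits

print(sample_bitstring())
```

Binning many such samples by their `repetition_complexity` reproduces the kind of frequency-by-complexity estimate shown in Figure 4.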
We also include plots with other sizes of GPT-2 architectures in Figure 7 and Figure 8.
We also include the hypothesis test referenced in Section 4.3. For this experiment, we generate 100,000 length-100 sequences from randomly initialized GPT-2 variants, pre-trained GPT-2 variants, and a uniform distribution. We then perform a one-sided t-test on the null hypothesis that $\mu(K(S_{\mathrm{GPT}})) \geq \mu(K(S_U))$, for both initialized and pre-trained models. Table 1 contains the resulting sample means, t-statistics, and p-values. In all cases, we reject the null hypothesis with very low p-values, indicating that language models do prefer to generate low-complexity sequences. Notably, pre-trained language models exhibit an increased simplicity bias, and bigger and better language models even more so.
²https://huggingface.co/
F BIG AND SMALL TRAINING SETS
Polynomial regression. We choose three example target functions on which to perform regression: $\cos(\frac{3\pi}{2} x)$, $x^2$, and $-36x + 49x^5 - 14x^7 + x^{10}$. Training data is randomly drawn from a uniform distribution over the unit interval, and we add noise to training labels from $\mathcal{N}(0, 0.1)$. In each case, for each dataset size, we average the mean squared error over 100 randomly sampled training sets. For Tikhonov regularized polynomial regression on the cosine and degree 2 polynomial target functions, we use $\alpha = 0.01$, and we use $\alpha = 0.001$ for regression on the degree 10 polynomial target function.
Image classification with neural networks. For ImageNet trained models, we employ publicly available checkpoints from torchvision. We train models on CIFAR-10 and CIFAR-100 for 200 epochs with initial learning rate 0.1 and cosine annealing, along with horizontal flip and random crop augmentations. We use SGD with momentum 0.9 and batches of size 128. All CIFAR images are rescaled to $224 \times 224$ so that we can use an identical model for ImageNet and CIFAR data. In order to learn the parameter $c$ controlling the convex combination of models, we perform 10 epochs of training, where the models' parameters are frozen, and we apply weight decay with coefficient $10^{-5}$. We learn the parameter $c$ using SGD with momentum 0.9 and batch size 128, initial learning rate 0.1, and cosine annealing.
Summary Of The Paper
The submission considers several questions surrounding complexity, compression, and no free lunches in machine learning, and reports some results concerning compression of data with neural networks and simplicity biases in large language models.
Strengths And Weaknesses
Strengths
Clarity. The paper is written very straightforwardly and I had no trouble understanding either the concepts or the experimental explorations.
Weaknesses
The paper does not appear to provide any new insights into the foundational questions that it considers. For example, whether "the world is a special case of all problems, and thus a No Free Lunch in all generality does not apply," as discussed in the introduction of the submission, was considered explicitly in Fernandez-Delgado et al. (2014) and Gómez & Rojas (2016) with previous generations of machine learning technologies. Compression and simplicity via neural network models, as discussed in Section 3 of the submission, were considered as early as Hinton & van Camp (1993) and MacKay (1995), and were more recently considered by Blier & Ollivier (2018), among many examples. Simplicity bias and implicit regularization in neural networks, as discussed in Section 4, is already an active area of research, as noted in Appendix A. The submission does not answer any questions that could contribute to the development of these areas.
The explorations in Sections 3.1 and 3.2 do not make sense: The size of a dataset compressed with bzip2 or a neural network is the description length under a particular language (determined by the "compression algorithm"), and not the true (Kolmogorov) complexity as is implied. In other words, the "bound" provided is probably far from tight.
Missing discussions of double descent (e.g., Belkin et al. (2019)) and work in overparameterization more broadly, which are closely related to notions of simplicity in deep learning.
Clarity, Quality, Novelty And Reproducibility
The paper is written very straightforwardly and I had no trouble understanding either the concepts or the experimental explorations.
The inductive biases of even specialized architectures like convolutional neural networks facilitate broad learning abilities. To demonstrate this fact, we take tabular classification datasets and encode the tabular features as an image by simply forming images with one channel so that each pixel corresponds to a different feature, and padding the edges with zeros as needed. We train a small 9 layer convolutional network using this input data to predict the classification labels. Since the data has no local or translation equivariant structure, learning with the convolutional network requires breaking these structures. Even in spite of this mismatch, the convolutional networks perform much better than random chance. Furthermore, using the PAC-Bayes compression methodology from Lotfi et al. (2022b), we are able to find a compressible set of parameters providing nonvacuous generalization bounds on the model’s performance, as shown in Figure 2. Despite violating the locality and translation equivariance inductive biases of the architecture, the model is still highly compressible when fitting the tabular data and provably generalizes, suggesting that a core part of these inductive biases are general and shared with this tabular data.
Takeaway: While architectures such as CNNs have inductive biases tailored for specific problems, much of their inductive bias is general, extending across domains.
4.2 GPT-3 ASSIGNS EXPONENTIALLY HIGHER PROBABILITY TO SIMPLER SEQUENCES
We now study the preference of GPT-3—a line of high-performance autoregressive text generators— has for simpler sequences. The ability of language models to solve reasoning problems has recently been studied by Zelikman et al. (2022), who develop an effective prompting framework, and d’Ascoli et al. (2022), who develop transformers for predicting symbolic expressions directly from the sequence. To perform our own study, we need a well-defined, computable notion of complexity. We thus define a simple, non-Turing-complete language and measure Kolmogorov complexity with respect to this simple language. Namely, we generate integer sequences with binary expression trees. We then define the complexity of a sequence as the size of the smallest expression tree, measured as the number of internal nodes—or equivalently the number of operators in the expression represented by the tree—that generates that sequence. By using a small set of terms for the leaves and binary operators for the nodes, we can enumerate over all possible expression trees for sufficiently small sizes at most L and compute all sequences with complexities 0 through L.
In our experiments, we use operations +,×, and //, where // denotes integer division. For leaves, we use 2 and i, where i is the index within the sequence. For example, (2 + i) × i could be implemented with a tree of size 2 and would generate the sequence ai = 0, 3, 8, 15, ... Using this setup, we are able to generate sequences of varying complexity, according to a well-defined metric, and quantify the preference of GPT-3 models for simpler sequences over more complex ones. We provide details on how we tokenize sequences and extract their probabilities in Appendix D.
In Figure 3, we measure the average log-probability GPT-3 models assign to sequences of a given Kolmogorov complexity, where we fix the number of numerical tokens input into the model to be 30, and we observe that the probabilities assigned by these language models decrease exponentially with sequence complexity, similar to the Solomonoff prior discussed in Section 2. In contrast, a uniform prior would be described by a flat line. Note that for very complex sequences, we cannot easily measure their minimum description length, so we limit our experiments to expression trees with at most 7 operators. In this low-complexity regime, we observe that bigger GPT-3 models which are better at language modeling, such as Davinci which contains 175 billion parameters, assign higher probability to these simple sequences than much smaller GPT-3 models such as Ada. The legend lists GPT-3 variants in order of increasing size.
We can also examine the decay of such log-probabilities as we feed more tokens, corresponding to digits of sequence elements, into the model. As the sequences get longer, we see in Figure 3 that the probabilities assigned to sequences decay sub-exponentially, indicating that these models, especially bigger variants such as Davinci, become more and more confident about later sequence elements.
1
4.3 EVEN RANDOMLY INITIALIZED LANGUAGE MODELS PREFER LOW COMPLEXITY
The previous section solely examined pre-trained language models, but these models have been trained on massive corpora of data. Do they prefer low complexity before they have even seen any data at all? While initializers of such models follow highly diffuse distributions over parameters, such random parameters can induce highly structured functions.
Trained language models are known to repeat themselves when generating text (Holtzman et al., 2020; Fu et al., 2021). One might think that this behavior is learned from training data
which often contains repeated text, but we show in this section that randomly initialized GPT models repeat themselves too. Interestingly, we can formalize the preference for repetition as a preference for low Kolmogorov complexity. In order to disentangle the impact of initialization from training,
we adopt a simple language for generating binary sequences under which we can quickly measure Kolmogorov complexity. We consider a program to be a bitstring, and then the program upon execution simply repeats the bitstring until output reaches length 10. Under this language, the sequence 0, 0, 0, ... has Kolmogorov complexity 1, and 0, 1, 0, 1, ... has complexity 2, yet randomly generated sequences are exponentially more likely to have high complexity. We conduct our evaluations exhaustively on all such sequences of length 10.
We now generate sequences of length 10 with randomly initialized GPT-2 language models (Radford et al., 2019), such that each initialization is used to generate one sequence, and we measure the frequency with which each such sequence is generated. The estimated generation probabilities by Kolmogorov complexity are found in Figure 4, where we see again that low-complexity sequences are assigned exponentially higher probabilities. Here, we see that pre-trained checkpoints exhibit an even stronger preference for low complexity as they are trained on structured text. We can also use the randomly initialized language models to perform next element prediction by estimating the probabilities they assign to the next element in a sequence given having correctly generated the previous terms. While Wolpert’s no free lunch theorem (Wolpert, 1996) ensures that the average completion accuracy over all possible length 10 bitstrings is exactly 0.5, we verify in Figure 4 that randomly initialized networks can be used for sequence completion when the sequence is of low complexity. Additional details and experiments with other GPT-2 architecture variants are found in Appendix E.
We can further generate very long length-100 sequences with randomly initialized and also pretrained GPT-2 models and run a simple hypothesis test, demonstrating both randomly initialized and pre-trained models generate lower Kolmogorov complexity sequences on average than a uniform distribution. Over 100,000 samples from each of these three generative distributions and with a onetailed t-test on the null hypothesis that µ(K(SGPT)) ≥ µ(K(SU )), where SGPT and SU respectively denote random sequences generated by the language model or a uniform distribution, we reject this null hypothesis in both randomly initialized and pre-trained models with an extremely low p-value, indicating that language models are indeed more likely to generate simple sequences, pretrained models even moreso. Further details can be found in Appendix E. We conclude that neural networks for language generation, both trained and randomly initialized, express a bias towards low Kolmogorov complexity which mirrors that of data as demonstrated in Section 3.
Takeaway: Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity.
5 MODEL SELECTION WITH A PREFERENCE FOR SIMPLICITY
In typical industrial workflows, practitioners examine their data and select an appropriate learner. For example, if data are linearly separable, then a practitioner might use logistic regression, whereas if the data is more complex, the same practitioner may instead choose a neural network. We can then consider the human model selector and the model they select as a single meta-learner. While early works conjectured that automated meta-learners are ruled out by no free lunch theorems (GiraudCarrier & Provost, 2005), subsequent works show that model selection can in fact be automated in practice (Vilalta & Drissi, 2002). Giraud-Carrier & Provost (2005) shows that with minimal assumptions, the defeating conclusion of the no free lunch theorem can be escaped by meta-learners. Subsequent works prove the existence of meta-induction algorithms, for selecting amongst all induction methods, which are near-optimal among learners (Schurz, 2008; Schurz & Thorn, 2017). In this section, we argue why in principle, model selection can be automated from the view of complexity.
5.1 MODEL SELECTION AND GENERALIZATION BOUNDS
When developing a machine learning approach for a given application, it is often helpful to leverage domain knowledge in constructing or choosing the right model for the task. One might start by choosing from architecture families like MLPs, CNNs, GNNs, PointNets, or Transformers and then decide on the appropriate way of featurizing the inputs, possibly incorporating knowledge of data symmetries via hard-coded equivariances or data augmentations. Even just enumerating the possible models a practitioner would consider, choosing based on their expertise and prior knowledge, the number is not tremendously large. Even if we are extremely generous and suppose that the practitioner is choosing from 100 million models, we can consider the impractical algorithm of selecting between them via cross validation. While one might expect that such a procedure would overfit, in fact even finite hypothesis bounds show this is not the case. Using cross validation on a validation set of size n = 20000 for a classification problem and plugging a uniform prior P(h) = 10⁻⁸ into Equation 1, we get that the gap between validation and test error will be less than 3% with probability greater than 99%. Ultimately, this is the case because finite hypothesis bounds only require a number of data points proportional to the log of the size of the hypothesis space.
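A quick numeric check of this claim, assuming natural logarithms in Equation 1 and δ = 0.01:

```python
import math

n = 20_000    # validation set size
delta = 0.01  # failure probability (confidence > 99%)
p_h = 1e-8    # uniform prior over 100 million candidate models

# Generalization gap from Equation 1 with a point-mass posterior.
gap = math.sqrt((-math.log(p_h) + math.log(n / delta) + 2) / (2 * n - 1))
print(f"gap <= {gap:.4f}")  # ~0.0296, i.e. under 3%
```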
Moving to an even more general class of models, one may count the number of bits of prior knowledge needed to specify model architectures like MLPs, CNNs, or GNNs, as well as symmetry groups and any other required information. In each of these cases, the model architectures can in fact be expressed with a small number of bits. A near state-of-the-art computer vision model can be expressed in only 280 characters of PyTorch (Trockman & Kolter, 2022). Similarly, important symmetry groups like translations, rotations, reflections, and other matrix groups can be expressed in only a few lines of code (Finzi et al., 2021) and can be used to encode equivariances or to perform augmentation. Therefore, even selecting from all possible models that can be expressed in that short amount of code, we can expect to generalize with only tens of thousands of data points.
Takeaway: In principle, automating model selection directly via cross validation provably generalizes well across millions of models with only thousands of data points.
5.2 ONE MODEL FOR BIG AND SMALL TRAINING SETS
It is commonly believed that small training datasets demand compact neural network architectures, whereas large training datasets can accommodate flexible architectures. Accordingly, practitioners hand-select appropriate models for their datasets. In this section, we argue that a single learner can be effective for all data sizes as long as we encode our a priori preference for simplicity. Our prior should prefer the simple functions we believe are more likely, yet also support a wide variety of functions in case the training data rules out our preferences. We begin with a simple illustration.
Polynomial regression. Common intuition dictates that high-degree polynomials easily overfit their training data and should be avoided on small training sets. In contrast, low-degree polynomials cannot fit complicated functions, so they should be avoided when training data is plentiful. However, we demonstrate here that we can rely on lower-order terms insofar as they explain the data, while still allowing higher-order terms when they are necessary.
To this end, we adopt Tikhonov regularization with a Tikhonov matrix given by diag({αk²} for k = 0, …, d); that is, we impose an ℓ2 penalty which increases quadratically with the order of the corresponding monomial. We consider three models: non-regularized degree-2 and degree-10 polynomials, as well as a degree-10 polynomial regularized as described above. In Figure 5, we perform regression on a cosine target function (left) and on degree-2 (center) and degree-10 (right) polynomial target functions. We confirm that even though a high-degree polynomial model performs poorly with few training samples, it eventually outperforms its lower-degree counterpart. Meanwhile, a high-degree polynomial model with the above Tikhonov regularization outperforms both non-regularized models across most sample sizes on the cosine and degree-10 polynomial regression problems, and it notably matches the performance of a degree-2 model even on data generated by a degree-2 polynomial. We include details regarding the example target functions and the data sampling process in Appendix F.
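A minimal numpy sketch of this regularized fit follows; it assumes the penalty enters the normal equations as diag(αk²), which is one reading of the Tikhonov matrix above, and the sample size and target function are illustrative.

```python
import numpy as np

def fit_polynomial(x, y, degree, alpha=0.0):
    """Least squares polynomial fit with an order-weighted ridge penalty:
    minimize ||Phi w - y||^2 + sum_k alpha * k^2 * w_k^2."""
    Phi = np.vander(x, degree + 1, increasing=True)  # columns: x^0 .. x^d
    D = np.diag([alpha * k**2 for k in range(degree + 1)])
    return np.linalg.solve(Phi.T @ Phi + D, Phi.T @ y)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=30)  # unit interval, as in Appendix F
y = np.cos(1.5 * np.pi * x) + rng.normal(0, 0.1, size=30)

w_plain = fit_polynomial(x, y, degree=10)            # prone to overfitting
w_reg = fit_polynomial(x, y, degree=10, alpha=0.01)  # prefers low-order terms
```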
Neural networks. We illustrate a similar concept with neural networks. We consider a small network, GoogLeNet (Szegedy et al., 2015), which performs well on small datasets such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) but poorly on larger datasets like ImageNet (Deng et al., 2009). We also consider a large network, ViT-B/16 (Dosovitskiy et al., 2020), which performs significantly worse on the CIFAR variants but much better on ImageNet. As in the polynomial regression example above, we can combine these two architectures, specifying our preference for the simpler GoogLeNet to the extent that it fits the training data. To learn on a dataset, we train both models and then take a convex combination of their logits, c · logits_ViT + (1 − c) · logits_G, controlled by a single parameter c with ℓ2 regularization in favor of GoogLeNet. In Figure 6, we observe that while GoogLeNet and ViT each have strengths and weaknesses, combining them with a preference for simplicity achieves the best of both worlds. While aggressively restricting our architectures can decrease computational cost, it is unnecessary for generalization. Details and additional experiments with Swin Transformer (Liu et al., 2021) are found in Appendix F.
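A hedged PyTorch sketch of learning the mixing parameter c is given below; the frozen backbones and weight-decay coefficient follow Appendix F, while the clamping of c and other loop details are our assumptions.

```python
import torch
import torch.nn.functional as F

class LogitMixer(torch.nn.Module):
    """Convex combination of two frozen classifiers' logits.
    Weight decay on c pulls it toward 0, i.e. toward the simpler model."""
    def __init__(self, simple_model, complex_model):
        super().__init__()
        self.simple, self.complex = simple_model, complex_model
        for p in self.parameters():
            p.requires_grad_(False)  # backbones stay frozen
        self.c = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, x):
        c = self.c.clamp(0.0, 1.0)  # keep the combination convex
        return c * self.complex(x) + (1 - c) * self.simple(x)

# mixer = LogitMixer(googlenet, vit)  # trained backbones (not shown)
# opt = torch.optim.SGD([mixer.c], lr=0.1, momentum=0.9, weight_decay=1e-5)
# loss = F.cross_entropy(mixer(images), labels); loss.backward(); opt.step()
```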
6 DISCUSSION
While large ML models are highly flexible, we saw in this work that they reliably prefer low Kolmogorov complexity solutions, aligning well with relevant learning problems, despite not being designed with complexity in mind. This observation raises the question: why exactly do neural networks encode such a strong preference for low complexity, and how can we tune this preference? Complementing this observation, we also saw that a single expressive model which supports a wide variety of solutions but prefers simple ones can solve both simple and hard problems across sample sizes. Such learners present clear advantages over the current paradigm in deep learning, in which we manually select small constrained architectures or large ones with mild inductive biases, depending on the problem. Keeping this possibility in mind, how can we design expressive yet simplicity-biased models with affordable computational costs?
A UNIVERSAL INDUCTION AND COMPLEXITY IN DEEP LEARNING
Universal induction. Inspired by Kolmogorov complexity, a line of work introduces universal induction methods, which prefer low complexity answers (Solomonoff, 1964; Hutter, 2000; Nakkiran, 2021). Notably, Solomonoff induction (Solomonoff, 1964; Rathmanner & Hutter, 2011) places a probability distribution over bitstrings which vanishes exponentially with their Kolmogorov complexity with respect to a universal Turing machine. Then, presented with an observed sequence (e.g. training data) called a prefix and a candidate completion, we can perform inference via p([prefix, completion] | prefix). In other words, we assign probability to each completion proportional to its probability among sequences which begin with the prefix. Since low-complexity bitstrings are assigned higher probability, we will prefer completions which result in low-complexity completed sequences. Subsequent literature argues that Solomonoff induction explains why no free lunch theorems are practically irrelevant, namely weak assumptions on the structure of learning problems allow for near-optimal universal learners (Lattimore & Hutter, 2013).
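To make the inference rule concrete, here is a toy sketch of Solomonoff-style sequence completion, substituting the simple repeat-prefix language of Section 4.3 for a universal Turing machine; it illustrates the conditioning mechanism rather than true Solomonoff induction.

```python
from itertools import product

def repeat_complexity(seq):
    # Shortest prefix whose repetition reproduces seq (toy language of Sec. 4.3).
    n = len(seq)
    return next(k for k in range(1, n + 1) if (seq[:k] * n)[:n] == seq)

def predict_next(prefix, length=10):
    """p(next bit | prefix) under a prior proportional to 2^-K(completed seq)."""
    weights = {0: 0.0, 1: 0.0}
    for tail in product((0, 1), repeat=length - len(prefix)):
        full = prefix + tail
        weights[tail[0]] += 2.0 ** -repeat_complexity(full)
    total = weights[0] + weights[1]
    return {b: w / total for b, w in weights.items()}

print(predict_next((0, 1, 0, 1, 0, 1)))  # strongly favors 0, completing 0,1,0,1,...
```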
Complexity in deep learning. Several works have related Kolmogorov complexity to neural networks. Pearlmutter & Rosenfeld (1990) argues that random initialization and noise in data increase the complexity of neural networks but that ensembling such models reduces complexity in expectation. Schmidhuber (1997) proposes another method for reducing Kolmogorov complexity by instead explicitly searching for simple neural networks and finds improvements in generalization on very small problems where such a search is computationally feasible. Another line of study proves that multi-layer perceptrons with boolean input features are biased towards low-entropy functions, namely ones which classify disproportionately many or few points into the same class (Mingard et al., 2019) or are insensitive to flips in the boolean features (De Palma et al., 2019).
Other notions of simplicity have also been proposed and applied to neural networks, for example involving the implicit bias of SGD (Damian et al., 2021); the connection between flat loss minima, wide-margin decision boundaries, and improved generalization (Huang et al., 2020; Izmailov et al., 2018); the tendency of models to rely on simple, compressible features (Geirhos et al., 2020); or the potential for overparameterized networks to be implicitly regularized, leading to capacity control (Neyshabur et al., 2014). The marginal likelihood, or the probability that a random sample from the prior generates the data, is a natural statement of Occam’s razor, the idea that simple models should be preferred (Lotfi et al., 2022a). Diffuse priors which spread mass out over the diverse high-complexity solutions are unlikely to generate observed data, so effective models admit a far higher marginal likelihood on naturally occurring labelings compared to random ones (Wilson & Izmailov, 2020).
B A KOLMOGOROV NO FREE LUNCH THEOREM PROOF
Rewriting Equation (2), we have

$$\mathrm{CE}(p) \geq \frac{\ln 2}{n}\Big(K(Y|X) - K(p) - 2\log_2 K(p) - c\Big).$$

Note that by simply counting all possible programs taking input $X$, there are fewer than $2^{k+1}$ labelings $Y$ with $K(Y|X) \leq k$. Note also that there are $C^n$ distinct labelings, from which we are drawing uniformly, so that

$$\begin{aligned}
\mathbb{P}(K(Y|X) > n\log_2 C - m) &= 1 - \mathbb{P}(K(Y|X) \leq n\log_2 C - m) \\
&\geq 1 - \mathbb{P}(K(Y|X) \leq \lceil n\log_2 C \rceil - m) \\
&\geq 1 - \frac{2^{\lceil n\log_2 C \rceil - m + 1}}{C^n} \\
&\geq 1 - 2^{2-m}.
\end{aligned}$$

Equivalently, with probability at least $1 - \delta$,

$$K(Y|X) > n\log_2 C - \log_2 \tfrac{1}{\delta} - 3.$$

Thus, with probability at least $1 - \delta$, we have for every classifier $p$,

$$\mathrm{CE}(p) \geq \ln C - \frac{\ln 2}{n}\Big(K(p) + 2\log_2 K(p) + \log_2 \tfrac{1}{\delta} + c\Big).$$
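As an illustrative numeric check of the theorem (all parameter values below are made up), a classifier describable in about 1000 bits can barely improve on random guessing over 50,000 uniformly labeled 10-class examples:

```python
import math

n, C, delta = 50_000, 10, 0.01
K_p = 1_000  # description length of the classifier, in bits
c = 100      # language-dependent constant (illustrative)

slack = (math.log(2) / n) * (K_p + 2 * math.log2(K_p) + math.log2(1 / delta) + c)
print(f"CE(p) >= {math.log(C) - slack:.3f} vs. random-guess CE = {math.log(C):.3f}")
# CE(p) >= ~2.287 vs. random-guess CE = ~2.303: essentially no improvement.
```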
C PAC-BAYES COMPRESSION EXPERIMENTAL DETAILS
For the OpenML tabular classification datasets, we first preprocess the data by balancing the classes, subsampling all classes down to the number of examples of the least likely class. This way, any compression achieved on the labels is nontrivial, in contrast with a heavily class-imbalanced dataset. We closely follow the compression method of Lotfi et al. (2022b), including the small 9-layer convolutional architecture which they use to generate their bounds. When cramming the tabular data into this convnet, we combine numerical features with one-hot encoded categorical features and then pack these into the pixels of a one-channel image, using as large an image as necessary to fit all of the features.
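A minimal sketch of this packing step; the exact layout and padding convention are assumptions, as the paper does not specify them.

```python
import math
import numpy as np

def pack_tabular_as_image(numeric, one_hot):
    """Concatenate numeric and one-hot features, then zero-pad into the
    smallest square one-channel image that fits them."""
    features = np.concatenate([numeric, one_hot], axis=-1)
    side = math.isqrt(len(features) - 1) + 1  # smallest s with s*s >= len
    padded = np.zeros(side * side, dtype=np.float32)
    padded[: len(features)] = features
    return padded.reshape(1, side, side)      # (channels, height, width)

img = pack_tabular_as_image(np.random.randn(13), np.eye(4)[2])  # 17 features -> 1x5x5
```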
With respect to the sizes of the random subspaces in which we train the compressed models, we consider 250, 500, 1000, and 2000. For tabular label compression, we employ a 2-hidden-layer MLP with hidden dimension k = 192, and we consider the same 250, 500, 1000, and 2000 values for the subspace dimension. We train for 80 epochs, with 20 epochs of quantization, at a batch size of 512, using Adam at learning rate 3 × 10⁻⁴. For image classification label compression, we use the 9-layer convnet with subspace dimensions 2000, 3000, and 5000, and we train for 80 epochs using SGD at learning rate 0.1, quantizing for the remaining 20 epochs, at a batch size of 50. For calculating the length of the code for the model architecture and decompressor, we need only the implementation of the model, the arithmetic decoder, and the loading of the quantized values. Removing wasted bits, we minified the Python file, leading to a size of approximately 2.5 KB.
D GPT-3 EXPERIMENTAL DETAILS
To feed sequences into a model, we split sequence elements into individual byte-pair-encoding tokens corresponding to their decimal digits, and we place comma tokens between sequence elements as delimiters, beginning every input with an <|endoftext|> token. We use the byte-pair encoding of decimal digits with a space inserted before each digit, e.g. ‘ 0’, as this is known to enhance the ability of language models to perform arithmetic (Zelikman et al., 2022). For example, the sequence 10, 11 will be split up into [‘<|endoftext|>’, ‘ 1’, ‘ 0’, ‘,’, ‘ 1’, ‘ 1’], and each element of the list is tokenized individually. Then, the log-probability of a sequence is given by the sum of the log-probabilities corresponding to the correct decimal digits in their respective slots of the model’s output. Note that different sequences contain different numbers of decimal digits, and a sequence’s log-probability decreases with every token. Therefore, for a fair comparison, we limit all sequences to 30 decimal-digit tokens and truncate there.
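A sketch of this formatting and scoring, using HuggingFace GPT-2 as a stand-in for the GPT-3 API; whether each listed piece maps to a single BPE token should be verified against the actual tokenizer.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def format_sequence(seq, max_digit_tokens=30):
    """Split elements into space-prefixed digit tokens, comma-delimited."""
    pieces, n_digits = ["<|endoftext|>"], 0
    for i, x in enumerate(seq):
        if i > 0:
            pieces.append(",")
        for d in str(x):
            pieces.append(" " + d)
            n_digits += 1
            if n_digits == max_digit_tokens:
                return pieces
    return pieces

def sequence_log_prob(seq):
    ids = sum((tok.encode(p) for p in format_sequence(seq)), [])
    ids = torch.tensor([ids])
    with torch.no_grad():
        log_probs = model(ids).logits[0, :-1].log_softmax(-1)  # predict t+1 from t
    targets = ids[0, 1:]
    return log_probs[torch.arange(len(targets)), targets].sum().item()

print(sequence_log_prob([0, 3, 8, 15, 24]))  # (2 + i) * i from Section 4.2
```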
E SEQUENCE GENERATION AND COMPLETION WITH RANDOMLY INITIALIZED LANGUAGE MODELS
For these experiments, we use HuggingFace² GPT-2 architectures and pre-trained checkpoints. In order to estimate the probabilities assigned by randomly initialized language models to each bitstring, we generate one million random sequences, ensuring that many instances of each bitstring are generated, as there are only 2¹⁰ = 1024 bitstrings of length 10.
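A hedged sketch of this estimation; restricting sampling to the two bit tokens is our assumption about the protocol, since the paper states only that each fresh initialization generates one sequence.

```python
import torch
from collections import Counter
from transformers import GPT2Config, GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
bit_ids = [tok.encode(" 0")[0], tok.encode(" 1")[0]]

def sample_bitstring(length=10):
    model = GPT2LMHeadModel(GPT2Config()).eval()  # fresh random initialization
    ids = [tok.bos_token_id]
    bits = []
    with torch.no_grad():
        for _ in range(length):
            logits = model(torch.tensor([ids])).logits[0, -1, bit_ids]
            b = torch.multinomial(logits.softmax(-1), 1).item()
            bits.append(b)
            ids.append(bit_ids[b])
    return tuple(bits)

# Tally generation frequencies, then group by complexity as in Figure 4.
counts = Counter(sample_bitstring() for _ in range(1_000))  # paper uses one million
```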
We also include plots with other sizes of GPT-2 architectures in Figure 7 and Figure 8.
We also include the hypothesis test referenced in Section 4.3. For this experiment, we generate 100,000 length-100 sequences from randomly initialized GPT-2 variants, pre-trained GPT-2 variants, and a uniform distribution. We then perform a one-sided t-test of the null hypothesis that µ(K(S_GPT)) ≥ µ(K(S_U)) for both randomly initialized and pre-trained models. Table 1 contains the resulting sample means, t-statistics, and p-values. In all cases, we reject the null hypothesis with very low p-values, indicating that language models do prefer to generate low-complexity sequences. Notably, pre-trained language models exhibit an increased simplicity bias, and bigger and better language models even more so.
² https://huggingface.co/
F BIG AND SMALL TRAINING SETS
Polynomial regression. We choose three example target functions on which to perform regression: cos((3π/2)x), x², and −36x + 49x⁵ − 14x⁷ + x¹⁰. Training data is randomly drawn from a uniform distribution over the unit interval, and we add noise to the training labels from N(0, 0.1). In each case, for each dataset size, we average the mean squared error over 100 randomly sampled training sets. For Tikhonov-regularized polynomial regression on the cosine and degree-2 polynomial target functions, we use α = 0.01, and we use α = 0.001 for regression on the degree-10 polynomial target function.
Image classification with neural networks. For ImageNet-trained models, we employ publicly available checkpoints from torchvision. We train models on CIFAR-10 and CIFAR-100 for 200 epochs with initial learning rate 0.1 and cosine annealing, along with horizontal-flip and random-crop augmentations. We use SGD with momentum 0.9 and batches of size 128. All CIFAR images are rescaled to 224 × 224 so that we can use an identical model for ImageNet and CIFAR data. In order to learn the parameter c controlling the convex combination of models, we perform 10 epochs of training where the models’ parameters are frozen, and we apply weight decay with coefficient 10⁻⁵. We learn the parameter c using SGD with momentum 0.9, batch size 128, initial learning rate 0.1, and cosine annealing.

1. What is the focus of the paper regarding low-complexity bias in neural networks and datasets?
2. What are the strengths and weaknesses of the proposed approach, particularly in explaining the preference for low-complexity labels?
3. Do you have any concerns regarding the generalizability of the results across different machine learning models?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the experimental design or analysis?
Summary Of The Paper
This paper studies the low-complexity bias of empirically used datasets and neural network architectures, and uses that to explain why no free lunch theorems do not apply to these empirical settings.
Regarding datasets, this paper shows that the labels of various popular datasets have low complexity. Formally, they show that for a classification task with C classes, the Kolmogorov complexity of the labels of n inputs, given the inputs, is significantly smaller than n*log(C).
Regarding neural network architectures, the paper shows that: a) Architectures like CNNs, which are usually used for images, can be trained to fit tabular datasets and hence compress their labels. b) Both trained and randomly initialized language models assign higher probability to strings of lower complexity.
Finally, the paper also shows that model selection can be automated by training multiple models and using cross-validation to choose among them.
Strengths And Weaknesses
Many of the results in the paper are expected. For example, the final result about combining a small and a big model to get good performance on both CIFAR-10 and ImageNet directly follows from the fact that the smaller model has good performance on CIFAR-10 and the larger one has good performance on ImageNet.
Other results, such as those in 3.2, 4.1 ("Neural networks prefer naturally occurring labelings across domains") and 4.3 ("Even randomly initialized language models prefer low complexity"), could be interesting if it were shown that other models, such as SVMs and HMMs respectively, do not have this property. Otherwise, it is not clear why the authors are singling out neural networks in these sections. For example, are the non-vacuous bounds obtained in 4.1 generally true for most machine learning models, or is the result specific to neural networks? If they hold for most models, then this casts doubt on the claim that these results explain the recent trend of machine learning models converging to only a few varieties.
Results in section 4.2, which show that a pre-trained GPT-3 assigns exponentially decaying probability to strings as their complexity increases, are the most interesting part of the paper. Given the in-context learning capabilities of large language models, we do expect the probability of a string to decrease with its complexity, but the empirical results showing that the decay is (nearly) exponential in complexity are a novel contribution.
Other remarks:
The results in Section 4.3 only seem to be significant when the complexity is 1, which would correspond to either the all-1s or the all-0s string. Is this occurring because randomly initialized models’ predictions of 1 vs. 0 are unbalanced? In particular, how would a model which outputs 1 with probability p and 0 with probability 1−p, irrespective of the context, compare to the randomly initialized model?
Clarity, Quality, Novelty And Reproducibility
The paper is written in a clear and understandable way. As described earlier, many of the results are expected from earlier works.
ICLR | Title
Real Data Distributions Prefer Simplicity and So Do Our Models: Why Machine Learning and Model Selection Are Possible
Abstract
No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While all but exponentially few uniformly sampled datasets have high complexity, we argue that neural network models share the same preference for low-complexity data that we observe on real-world problems. Notably, we show that architectures designed for a particular domain, such as computer vision, are compressors for labeling functions on a variety of seemingly unrelated domains. From our experiments, we see that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences and can therefore be used for inference. In principle, the use of expert knowledge and bias for simplicity of human practitioners could be folded into the learning algorithm, automating design and selection of models. We explain how typical areas requiring human intervention such as picking the appropriately sized model when labeled data is sparse or plentiful can be automated into a single learning algorithm. These observations help justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
N/A
No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While all but exponentially few uniformly sampled datasets have high complexity, we argue that neural network models share the same preference for low-complexity data that we observe on real-world problems. Notably, we show that architectures designed for a particular domain, such as computer vision, are compressors for labeling functions on a variety of seemingly unrelated domains. From our experiments, we see that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences and can therefore be used for inference. In principle, the use of expert knowledge and bias for simplicity of human practitioners could be folded into the learning algorithm, automating design and selection of models. We explain how typical areas requiring human intervention such as picking the appropriately sized model when labeled data is sparse or plentiful can be automated into a single learning algorithm. These observations help justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
1 INTRODUCTION
The problem of justifying inductive reasoning has plagued epistemologists since at least the 1700s (Hume, 1748). More recently, in the late 1990s, no free lunch theorems emerged from the computer science community as rigorous versions of arguments showing the impossibility of induction in contexts seemingly relevant to real machine learning problems (Wolpert, 1996; Wolpert & Macready, 1997). One such no free lunch theorem for supervised learning states that no single learner can solve every problem (Shalev-Shwartz & Ben-David, 2014). Another states that, assuming a world where the labeling functions of learning problems are drawn from a uniform distribution, no learner can reliably perform inference at all (Wolpert, 1996). Such a world would be altogether hostile to inductive reasoning. The assumption that labelings are drawn uniformly ensures that training set labels are entirely uninformative about unseen samples.
In contrast to this dismal outlook on machine learning, naturally occurring inference problems involve highly structured data. If we can design learning algorithms with inductive biases that are aligned with this structure, then we may hope to perform inference on a wide range of problems. In this work, we explore structure in both real-world data and modern machine learning models, primarily through the lens of Kolmogorov complexity.
The Kolmogorov complexity of some output is defined as the length of the shortest program under a fixed language that produces that output. In Section 3, we explain the connection between Kolmogorov complexity and compressibility. We note that virtually all random data cannot be significantly compressed, yet relevant datasets are highly compressible and hence substantially less complex. In particular, neural networks themselves can be used to create compression of data labelings, upper bounding their Kolmogorov complexity.
Complementing the low complexity of actual data, we then demonstrate in Section 4 that modern neural networks prefer low Kolmogorov complexity too. While models implemented on a computer provably cannot generate data with complexity exceeding the length of their associated program, we find they actually prefer data that is far simpler. We formulate simple languages for generating numerical sequences, under which we can directly measure the Kolmogorov complexity of a sequence. We use these languages to inspect the simplicity bias of both pre-trained and randomly initialized language models. GPT-3 (Brown et al., 2020) reliably favors less complex sequences, and bigger and better GPT-3 variants even more so. Notably, randomly initialized GPT models share this simplicity bias. As such, we can use them to predict the next element in a sequence as long as the target sequence is low-complexity. To further emphasize the universality of this simplicity bias, we cram tabular data from diverse domains, including click prediction and airline delay prediction, into convolutional computer vision architectures and show that these vision architectures prefer correct labelings to random ones, even on data which do not remotely resemble natural images. We use this property to compute cross-domain non-vacuous PAC-Bayes generalization bounds.
A common intuition associated with no free lunch theorems dictates that since a single learner cannot solve all problems, practitioners must inspect data and manually select an appropriate learner for the specific problem at hand. For example, a practitioner might select a more constrained model to avoid overfitting on small datasets, or they may select convolutional architectures to accommodate natural image data. To the contrary, we explain in Section 5 why model selection can be automated from the standpoint of PAC-Bayes theory, namely selecting between learning algorithms bears negligible complexity cost, and this cost is quickly overcome by gains in validation accuracy under reasonably sized validation splits. Moreover, a single learner, which on the one hand supports a variety of functions but on the other hand prefers simple ones, can solve a wide range of problems. We show that flexible models accompanied by a penalty which encourages simple solutions can solve problems with a variety of sample sizes. For example, a single learner which combines a small convolutional neural network and a transformer, while encouraging use of the small model insofar as it fits the training data, achieves strong performance both on small datasets like CIFAR-10 (Krizhevsky, 2009) and also large datasets like ImageNet (Deng et al., 2009). We highlight in Figure 1 that the historic evolution of machine learning systems supports the ability of a single learner to perform diverse tasks as highly task-specific pre-neural algorithms, such as Latent Dirichlet Allocation (Blei et al., 2003) and HOG (Dalal & Triggs, 2005), were replaced by neural architectures such as convolutional or recurrent models, and transformers can now handily perform all tasks listed.
Our contributions are summarized as follows:
• We demonstrate the direct connection between compressibility and learnability that is implicit in no free lunch theorems by deriving a new NFL theorem using Kolmogorov complexity.
• We show that the low Kolmogorv complexity of real datasets can be directly derived from the machine learning models used to fit them.
• We provide strong empirical evidence that neural networks have low complexity biases that are relevant even on data far from what they were designed for. To this end, we compute generalization bounds using convolutional vision models on tabular data, we showcase GPT-3’s strong preference for sequences generated by shorter expression trees, and we find that even randomly initialized language models have a simplicity bias.
• We argue why in principle model selection can be automated into a larger algorithm, and we provide explicit examples demonstrating how multiple models can be subsumed into a single powerful learner.
2 BACKGROUND
No free lunch theorems. No free lunch theorems (NFL) state that without making strong assumptions, a single algorithm cannot simultaneously solve all problems well. No free lunch theorems for search and optimization indicate that all optimizers and search algorithms satisfying certain conditions perform exactly the same on average over all such search or optimization problems (Wolpert et al., 1995; Wolpert & Macready, 1997). In this work, we narrow our focus to NFL for supervised learning. Wolpert (1996), and similarly Schaffer (1994), proves an analogous theorem for supervised learning under which every learner—a function that takes in labeled data and outputs a
labeling function for the associated domain—achieves exactly the same accuracy of 50% on average over all binary classification problems where accuracy is only evaluated on unseen samples.
In order to prove no free lunch theorems, one needs to place very strong assumptions on the lack of structure in the labeling function such as a uniform distribution, so that conditioning on the training labels does not modify the probability over labelings on unseen points (Rao et al., 1995). To illustrate the severity of this condition, imagine that a person is presented a sequence of one million 1s and asked whether they predict that the next element will be 1 or 0. If labelings of sequence elements were distributed uniformly, then we should assign equal probability to both options, despite the fact that intuition and Bayesian probabilistic models tell us overwhelming to favor 1.
Shalev-Shwartz & Ben-David (2014) instead do not assume a particular distribution over learning problems and prove that for every learner, there is a task on which the learner achieves poor accuracy with high probability over training splits, whereas another learner solves the same problem with perfect accuracy. Notably, the latter NFL computes accuracy over all data, not just “off-training” samples. While this statement of NFL does not require uniformly distributed data, if the existence of catastrophic failure modes for our learners is to be damning, our learners must encounter them in practice. After all, we do not care if our learners cannot solve problems we do not want to solve. Thus, the practical relevance of this theorem again hinges on the distribution over real-world learning problems and how well it aligns with the inductive bias of a learner. In this paper, we argue that the real-world learning problems we care about are highly structured, and the inductive biases of neural networks are well-aligned with such problems.
Kolmogorov complexity and compression. Kolmogorov complexity is a quantitative measure of complexity. For any fixed programming language L, the Kolmogorov complexity of data x, K(x), is the length of the shortest program in that language that outputs x (Kolmogorov, 1963). We can similarly define K(y|x) as the length of the shortest program which inputs x and outputs y. For definiteness, we assume that all programs and outputs are finite bitstrings.
Despite the fact that one cannot prove that any particular string has high complexity (Chaitin, 1974), all but exponentially few sequences of a given length have near maximal Kolmogorov complexity. Since there can only be less than 2n+1 programs of length less than or equal to n, then by the pigeonhole principle, there are at most 2n+1 distinct bitstrings x with K(x) ≤ n. Equivalently, drawing a random bitstring x of length n, there is at most a 21−k chance that K(x) ≤ n−k. As this probability decays exponentially in k, this bound shows that the vast majority of bitstrings of length n have complexity close to n. One way of upper bounding K(x) is to compress x, but then we must include both the size of the compressed file and the size of the program required to decompress it. Alternatively, if we can construct a short program which directly outputs x, this program also forms a compression of x. The above discussion then demonstrates that the vast majority of bitstrings do not admit a non-trivial compression.
Notions of Kolmogorov complexity and simplicity bias have also been discussed in the context of universal induction algorithms and machine learning more broadly. Notably, one can use Kolmogorov complexity to construct the Solomonoff prior (Solomonoff, 1964), P (x) = 2−[K(x)+2 logK(x)]/Z (with Z < 1), a formalization of the principle of Occam’s razor favoring simplicity. One can perform inference using just this prior via universal induction algorithms, and we discuss these matters in more detail along with complexity in deep learning in Appendix A.
PAC-Bayes compression bounds. Generalization bounds limit how a model’s expected risk R(h) will differ from its train risk R̂(h). PAC-Bayes generalization bounds (McAllester, 1998; 2003; 2013; Alquier, 2021) can be considered a generalization and improvement of the finite hypothesis bounds (Mohri et al., 2018) which provide a generalization guarantee even when the number of hypotheses is large (or even infinite) so long as the given hypothesis is likely under a prior P (h). The simplest version of the bound (McAllester, 1999) applied with a point mass posterior Q(h) = 1[h=h∗] states that
R(h) ≤ R̂(h) + √
− logP (h) + log(n/δ) + 2 2n− 1 (1)
with probability at least 1 − δ, where n is the size of the training set. Choosing the Solomonoff prior P (h) = 2−[K(h)+2 logK(h)]/Z as done in Lotfi et al. (2022b) and noting that the normalization constant Z ≤ 1, the prior likelihood of a given hypothesis can be bounded via
− log2 P (h) ≤ K(h) + 2 log2 K(h) ≤ C(h) + 2 log2 C(h) for any compression scheme C. Combining these two bounds provides a theoretical basis for the PAC-Bayes compression approach explored in other work (Zhou et al., 2018; Lotfi et al., 2022b) and can be used to derive nonvacuous generalization bounds even for large neural networks when using a strong compression scheme.
3 STRUCTURE IN REAL-WORLD DATA
The often cited no free lunch theorem of Wolpert (1996) states that all learners perform the same when averaged over a uniform distribution on all possible datasets. However, the assumption of uniform samples is very strong and extremely unnatural. This assumption subtly selects high complexity incompressible data, a set of problems on which learning is fundamentally impossible. Through means of hypothesis test we are able to conclusively rule out the possibility that real datasets are drawn from the uniform distribution, and therefore assertions that the NFL applies to problems of interest. We bring to the surface the centrality of incompressibility in NFL theorems by deriving a new NFL theorem directly from ideas of data compression and Kolmogorov complexity.
3.1 AN EXERCISE IN BOUNDING THE COMPLEXITY OF DATASETS
We first consider the hypothesis that unlabeled machine learning datasets are drawn uniformly at random and use a bound on the Kolmogorov complexity as a test statistic. Using bzip2, and including the size of the compression program, we compress text dataset Amazon Review Full (McAuley & Leskovec, 2013) and audio dataset LibriSpeech (Panayotov et al., 2015) to 393.2 MB and 8.36 GB respectively, providing upper bounds on the Kolmogorov complexity with respect to the python programming language. Supposing the datasets were in fact uniformly randomly sampled datasets, the probability of observing complexities this size or smaller is less than 2−2 32 and 2−2 36
, astronomically low p-values, conclusively ruling out the possibility that they were sampled in this way. If we randomly shuffle the datasets, we instead obtain bounds of only 836.7 MB and 9.69 GB, considerably larger, showing that the compressibility results not just from an inefficient encoding, but from structure in the dataset. Other works have also examined Kolmogorov complexity in data, for example EEG patterns (Petrosian, 1995) or animal behavior (Zenil et al., 2015), and likewise confirm that such naturally occurring data is simple.
3.2 NEURAL NETWORKS AS COMPRESSORS OF THE LABELING FUNCTION
More relevant to the context of supervised learning, we show that not only are the unlabeled datasets compressible—the labeling functions are too. Further, this can be demonstrated using trained models as compressors. Given a labeled dataset D = (X,Y ) = {(xi, yi)}ni=1, any likelihood model p(y|x)—regardless of whether the top predictions are correct—can be used to generate a lossless compression scheme to encode the dataset labels Y given the inputs X . Using a stream code such as arithmetic coding (Witten et al., 1987) in combination with the probability model p(y|x), the labels can be encoded in K(Y |X, p) ≤ −∑ni log2 p(yi|xi) + 3 bits. Models which maximize the log likelihood of the data also implicitly minimize the length of this encoding.
One can show by delimiting, as is done in Fortnow (2000), that K(Y |X) ≤ K(Y |X, p) +K(p) + 2 log2 K(p) + c, where c is some small constant depending on the language. Writing the negative log likelihood in terms of the empirical cross entropy, combining our two inequalities, and dividing by the size of the dataset n yields
1 nK(Y |X) ≤ CE/ln 2 + n−1(K(p) + 2 log2 K(p) + c), (2)
where CE is the cross entropy of the classifier p averaged over dataset D. This implies that, regardless of how large the model is—so long as CE is better than random guess—if the size n of the dataset is large enough, the model provides a non-trivial compression of the dataset. To demonstrate this fact, we employ the compression scheme from Lotfi et al. (2022b) in order to find a compressed representation of MLPs on several class balanced tabular classification datasets1. As shown in Figure 2 (middle), we are able to compress the labels on most of the datasets by well over the naive n log2 C encoding length where C is the number of classes. We also apply the method with convolutional architectures to compress labels on CIFAR-10 and CIFAR-100 in Figure 2 (right), allowing us to reject the hypothesis that the labeling functions are drawn uniformly at random with extremely high confidence.
3.3 A KOLMOGOROV-STYLE NO FREE LUNCH THEOREM
A corollary of Equation 2 is that if the dataset is incompressible, then no model can do better than random chance in the limit of a large dataset. Since compressible datasets in uniformly sampled data are exponentially unlikely, we can prove our own version of the no free lunch theorem directly from this constraint on complexity. With very high probability, on any given uniformly sampled dataset, learning is impossible. Theorem 1. Let (X,Y ) be a labeled dataset with n data points and uniformly sampled random labels from C classes. Then, with probability at least 1− δ, for every classifier p(y|x),
CE(p) ≥ lnC − ln 2 n
( K(p) + 2 log2 K(p) + log2 1
δ + c
) (3)
where CE(p) is the empirical cross entropy of the classifier p(y|x) on the data. Thus for any model of bounded size, if the size of the dataset is large enough, the model cannot represent any classifier with cross entropy appreciably smaller than that attained from random guess.
Proof: See Appendix B
Like any of the no free lunch theorems, this initially seems limiting, but here we see that the incompressiblility of the datasets from the uniform distribution is the fundamental issue. On compressible datasets (ones with less than maximal complexity), learning is possible.
Takeaway: Real datasets are highly unlike the high complexity samples from the uniform distribution where learning is impossible.
4 LOW-COMPLEXITY BIAS IN MACHINE LEARNING MODELS
In the previous section, we saw that real-world data distributions across domains are highly structured and therefore have low Kolmogorov complexity. If we can construct models which prefer the same low-complexity data, then we can hope to perform inference with a single model across many such domains. While early machine learning systems incorporated highly domain-specific design elements, such as handcrafted image features (Dalal & Triggs, 2005) or graphical models for language modeling (Mnih & Hinton, 2007), modern neural network architectures across domains are converging on transformers (Vaswani et al., 2017; Dosovitskiy et al., 2020; Gulati et al., 2020; Somepalli et al., 2021), some of which can simultaneously achieve impressive performance on a variety of data types with a single architecture (Jaegle et al., 2021).
In this section, we argue that neural networks have a generic simplicity bias that extends well beyond the datasets for which they are designed. To this end, we: (1) feed tabular datasets from diverse
1https://www.openml.org/
domains such as click prediction and airline delay prediction into neural networks designed specifically for computer vision and find that they prefer naturally occurring labelings to random ones, (2) formulate a language with respect to which we can measure the Kolmogorov complexity of numerical sequences and observe that GPT-3 generates low-complexity sequences with exponentially higher probability than complex ones, (3) predict the next term in a sequence with randomly initialized language models. Whereas the no free lunch theorem of Wolpert (Wolpert, 1996) ensures that such an inference procedure cannot outperform a random guess on average, we find that randomly initialized neural networks prefer sequence completions which generate low-complexity completed sequences, demonstrating that they can make accurate guesses as long as the ground truth sequence distribution also favors low complexity.
4.1 NEURAL NETWORKS PREFER NATURALLY OCCURRING LABELINGS ACROSS DOMAINS
The inductive biases of even specialized architectures like convolutional neural networks facilitate broad learning abilities. To demonstrate this fact, we take tabular classification datasets and encode the tabular features as an image by simply forming images with one channel so that each pixel corresponds to a different feature, and padding the edges with zeros as needed. We train a small 9 layer convolutional network using this input data to predict the classification labels. Since the data has no local or translation equivariant structure, learning with the convolutional network requires breaking these structures. Even in spite of this mismatch, the convolutional networks perform much better than random chance. Furthermore, using the PAC-Bayes compression methodology from Lotfi et al. (2022b), we are able to find a compressible set of parameters providing nonvacuous generalization bounds on the model’s performance, as shown in Figure 2. Despite violating the locality and translation equivariance inductive biases of the architecture, the model is still highly compressible when fitting the tabular data and provably generalizes, suggesting that a core part of these inductive biases are general and shared with this tabular data.
Takeaway: While architectures such as CNNs have inductive biases tailored for specific problems, much of their inductive bias is general, extending across domains.
4.2 GPT-3 ASSIGNS EXPONENTIALLY HIGHER PROBABILITY TO SIMPLER SEQUENCES
We now study the preference of GPT-3—a line of high-performance autoregressive text generators— has for simpler sequences. The ability of language models to solve reasoning problems has recently been studied by Zelikman et al. (2022), who develop an effective prompting framework, and d’Ascoli et al. (2022), who develop transformers for predicting symbolic expressions directly from the sequence. To perform our own study, we need a well-defined, computable notion of complexity. We thus define a simple, non-Turing-complete language and measure Kolmogorov complexity with respect to this simple language. Namely, we generate integer sequences with binary expression trees. We then define the complexity of a sequence as the size of the smallest expression tree, measured as the number of internal nodes—or equivalently the number of operators in the expression represented by the tree—that generates that sequence. By using a small set of terms for the leaves and binary operators for the nodes, we can enumerate over all possible expression trees for sufficiently small sizes at most L and compute all sequences with complexities 0 through L.
In our experiments, we use operations +,×, and //, where // denotes integer division. For leaves, we use 2 and i, where i is the index within the sequence. For example, (2 + i) × i could be implemented with a tree of size 2 and would generate the sequence ai = 0, 3, 8, 15, ... Using this setup, we are able to generate sequences of varying complexity, according to a well-defined metric, and quantify the preference of GPT-3 models for simpler sequences over more complex ones. We provide details on how we tokenize sequences and extract their probabilities in Appendix D.
In Figure 3, we measure the average log-probability GPT-3 models assign to sequences of a given Kolmogorov complexity, where we fix the number of numerical tokens input into the model to be 30, and we observe that the probabilities assigned by these language models decrease exponentially with sequence complexity, similar to the Solomonoff prior discussed in Section 2. In contrast, a uniform prior would be described by a flat line. Note that for very complex sequences, we cannot easily measure their minimum description length, so we limit our experiments to expression trees with at most 7 operators. In this low-complexity regime, we observe that bigger GPT-3 models which are better at language modeling, such as Davinci which contains 175 billion parameters, assign higher probability to these simple sequences than much smaller GPT-3 models such as Ada. The legend lists GPT-3 variants in order of increasing size.
We can also examine the decay of such log-probabilities as we feed more tokens, corresponding to digits of sequence elements, into the model. As the sequences get longer, we see in Figure 3 that the probabilities assigned to sequences decay sub-exponentially, indicating that these models, especially bigger variants such as Davinci, become more and more confident about later sequence elements.
1
4.3 EVEN RANDOMLY INITIALIZED LANGUAGE MODELS PREFER LOW COMPLEXITY
The previous section solely examined pre-trained language models, but these models have been trained on massive corpora of data. Do they prefer low complexity before they have even seen any data at all? While initializers of such models follow highly diffuse distributions over parameters, such random parameters can induce highly structured functions.
Trained language models are known to repeat themselves when generating text (Holtzman et al., 2020; Fu et al., 2021). One might think that this behavior is learned from training data
which often contains repeated text, but we show in this section that randomly initialized GPT models repeat themselves too. Interestingly, we can formalize the preference for repetition as a preference for low Kolmogorov complexity. In order to disentangle the impact of initialization from training,
we adopt a simple language for generating binary sequences under which we can quickly measure Kolmogorov complexity. We consider a program to be a bitstring, and then the program upon execution simply repeats the bitstring until output reaches length 10. Under this language, the sequence 0, 0, 0, ... has Kolmogorov complexity 1, and 0, 1, 0, 1, ... has complexity 2, yet randomly generated sequences are exponentially more likely to have high complexity. We conduct our evaluations exhaustively on all such sequences of length 10.
We now generate sequences of length 10 with randomly initialized GPT-2 language models (Radford et al., 2019), such that each initialization is used to generate one sequence, and we measure the frequency with which each such sequence is generated. The estimated generation probabilities by Kolmogorov complexity are found in Figure 4, where we see again that low-complexity sequences are assigned exponentially higher probabilities. Here, we see that pre-trained checkpoints exhibit an even stronger preference for low complexity as they are trained on structured text. We can also use the randomly initialized language models to perform next element prediction by estimating the probabilities they assign to the next element in a sequence given having correctly generated the previous terms. While Wolpert’s no free lunch theorem (Wolpert, 1996) ensures that the average completion accuracy over all possible length 10 bitstrings is exactly 0.5, we verify in Figure 4 that randomly initialized networks can be used for sequence completion when the sequence is of low complexity. Additional details and experiments with other GPT-2 architecture variants are found in Appendix E.
We can further generate very long length-100 sequences with randomly initialized and also pretrained GPT-2 models and run a simple hypothesis test, demonstrating both randomly initialized and pre-trained models generate lower Kolmogorov complexity sequences on average than a uniform distribution. Over 100,000 samples from each of these three generative distributions and with a onetailed t-test on the null hypothesis that µ(K(SGPT)) ≥ µ(K(SU )), where SGPT and SU respectively denote random sequences generated by the language model or a uniform distribution, we reject this null hypothesis in both randomly initialized and pre-trained models with an extremely low p-value, indicating that language models are indeed more likely to generate simple sequences, pretrained models even moreso. Further details can be found in Appendix E. We conclude that neural networks for language generation, both trained and randomly initialized, express a bias towards low Kolmogorov complexity which mirrors that of data as demonstrated in Section 3.
Takeaway: Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity.
5 MODEL SELECTION WITH A PREFERENCE FOR SIMPLICITY
In typical industrial workflows, practitioners examine their data and select an appropriate learner. For example, if data are linearly separable, then a practitioner might use logistic regression, whereas if the data is more complex, the same practitioner may instead choose a neural network. We can then consider the human model selector and the model they select as a single meta-learner. While early works conjectured that automated meta-learners are ruled out by no free lunch theorems (GiraudCarrier & Provost, 2005), subsequent works show that model selection can in fact be automated in practice (Vilalta & Drissi, 2002). Giraud-Carrier & Provost (2005) shows that with minimal assumptions, the defeating conclusion of the no free lunch theorem can be escaped by meta-learners. Subsequent works prove the existence of meta-induction algorithms, for selecting amongst all induction methods, which are near-optimal among learners (Schurz, 2008; Schurz & Thorn, 2017). In this section, we argue why in principle, model selection can be automated from the view of complexity.
5.1 MODEL SELECTION AND GENERALIZATION BOUNDS
When developing a machine learning approach for a given application, it is often helpful to leverage domain knowledge in constructing or choosing the right model for the task. One might start by choosing from architecture families like MLPs, CNNs, GNNs, PointNets, or Transformers and then decide on the appropriate way of featurizing the inputs, possibly incorporating knowledge of data symmetries in via hard-coded equivariances or data augmentations. Even just enumerating the possible models a practitioner would consider and choosing based on their expertise and prior knowledge, the number is not tremendously large. Even if we are extremely generous and suppose that the prac-
titioner is choosing from 100 million models, we can consider the impractical algorithm of selecting between them via cross validation. While one might expect that such a procedure would overfit, in fact even finite hypothesis bounds show this is not the case. Using cross validation on a validation set of size n = 20000 for a classification problem, plugging in a uniform prior P (h) = 10−8 to Equation 1, we get that the gap between validation and test error will be less than 2.9% with probability greater than 99%. Ultimately, this is the case because we only need a number of data points proportional to the log of the size of the hypothesis space for finite hypothesis bounds.
Considering even a more general class of models, one may consider the number of bits of prior knowledge needed to specify the model architectures like MLPs, CNNs, or GNNs as well as symmetry groups and any other required information. In each of these cases, the model architectures can in fact be expressed with a small number of bits. A near state-of-the-art computer vision model can be expressed in only 280 characters (Trockman & Kolter, 2022) of PyTorch. Similarly, important symmetry groups like translations, rotations, reflections, and other matrix groups can be expressed in only a few lines of code (Finzi et al., 2021) and can be used to encode equivariances or used for augmentation. Therefore, even in selecting from all possible models that can be expressed in that short amount of code, we can expect to generalize with only tens of thousands of data points.
Takeaway: In principle, automating model selection directly via cross validation provably generalizes well across millions of models with only thousands of data points.
5.2 ONE MODEL FOR BIG AND SMALL TRAINING SETS
It is commonly believed that small training datasets demand compact neural network architectures, whereas large training datasets can accommodate flexible architectures. Accordingly, practitioners hand select appropriate models for their datasets. In this section, we argue that a single learner can be effective for all data sizes as long as we encode our a priori preference for simplicity. Our prior should prefer simple functions we believe are more likely yet also support a wide variety of functions in case training data rules out our preferences. We begin with a simple illustration.
Polynomial regression. Common intuition dictates that high degree polynomials easily overfit their training data and should be avoided on small training sets. In contrast, low degree polynomials cannot fit complicated functions so they should be avoided when training data is plentiful. However, we demonstrate here that we can rely on lower order terms insofar as they can explain the data while still allowing higher order terms if they are necessary.
To this end, we adopt Tikhonov regularization with a Tikhonov matrix given by diag({αk2}dk=0), that is we impose an ℓ2 penalty which increases quadratically with the order of the corresponding monomial. We consider three models: non-regularized degree 2 and degree 10 polynomials as well as a degree 10 polynomial regularized as mentioned above. In Figure 5, we perform regression on the cosine target function (left) and degree 2 (center) and degree 10 (right) polynomial target functions. We confirm that even though a high degree polynomial model performs poorly with few training samples, it eventually outperforms its lower degree counterpart. Meanwhile, a high degree polynomial model with the above Tikhonov regularization outperforms both non-regularized models across most sample sizes on cosine and degree 10 polynomial regression problems, and it notably
matches the performance of a degree 2 model even on data generated by a degree 2 polynomial. We include details regarding the example target functions and the data sampling process in Appendix F.
Neural networks. We illustrate a similar concept with neural networks. We consider a small network, GoogLeNet (Szegedy et al., 2015), which performs well on small datasets such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) but poorly on larger datasets like ImageNet (Deng et al., 2009). We also consider a large network, ViT-B/16 (Dosovitskiy et al., 2020), which performs significantly worse on CIFAR variants but much better on ImageNet. As in the polynomial regression example above, we can combine these two architectures, specifying our preference for the simpler GoogLeNet to the extent that it fits the training data. To learn on a dataset, we train both models and then take a convex combination of their logits, c ∗ logitsViT + (1 − c) ∗ logitsG, controlled by a single parameter c with ℓ2 regularization in favor of GoogLeNet. In Figure 6, we observe that while GoogLeNet and ViT each have strengths
and weaknesses, combining them with a preference for simplicity achieves the best of both worlds. While aggressively restricting our architectures can decrease computational cost, it is unnecessary for generalization. Details and additional experiments with Swin Transformer (Liu et al., 2021) are found in Appendix F.
6 DISCUSSION
While large ML models are highly flexible, we saw in this work that they reliably prefer low Kolmogorov complexity solutions—aligning well with relevant learning problems—despite not being designed with complexity in mind. This observation raises the question: why exactly do neural networks encode such a strong preference for low complexity, and how can we tune this preference? Complementing the above observation, we also saw that a single expressive model which supports a wide variety of solutions but prefers simple ones can solve both simple and hard problems across sample sizes. Such learners present clear advantages over the current paradigm in deep learning, in which we manually select small constrained architectures or large ones with mild inductive biases, depending on the problem. Keeping this possibility in mind, how can we design expressive yet simplicity-biased models with affordable computational costs?
A UNIVERSAL INDUCTION AND COMPLEXITY IN DEEP LEARNING
Universal induction. Inspired by Kolmogorov complexity, a line of work introduces universal induction methods, which prefer low complexity answers (Solomonoff, 1964; Hutter, 2000; Nakkiran, 2021). Notably, Solomonoff induction (Solomonoff, 1964; Rathmanner & Hutter, 2011) places a probability distribution over bitstrings which vanishes exponentially with their Kolmogorov complexity with respect to a universal Turing machine. Then, presented with an observed sequence (e.g. training data) called a prefix and a candidate completion, we can perform inference via p([prefix, completion] | prefix). In other words, we assign probability to each completion proportional to its probability among sequences which begin with the prefix. Since low-complexity bitstrings are assigned higher probability, we will prefer completions which result in low-complexity completed sequences. Subsequent literature argues that Solomonoff induction explains why no free lunch theorems are practically irrelevant, namely weak assumptions on the structure of learning problems allow for near-optimal universal learners (Lattimore & Hutter, 2013).
Complexity in deep learning. Several works have related Kolmogorov complexity to neural networks. Pearlmutter & Rosenfeld (1990) argues that random initialization and noise in data increase the complexity of neural networks but that ensembling such models reduces complexity in expectation. Schmidhuber (1997) proposes another method for reducing Kolmogorov complexity by instead explicitly searching for simple neural networks and finds improvements in generalization on very small problems where such a search is computationally feasible. Another line of study proves that multi-layer perceptrons with boolean input features are biased towards low-entropy functions, namely ones which classify disproportionately many or few points into the same class (Mingard et al., 2019) or are insensitive to flips in the boolean features (De Palma et al., 2019).
Other notions of simplicity have also been proposed and applied to neural networks, for example involving the implicit bias of SGD (Damian et al., 2021); the connection between flat loss minima, wide-margin decision boundaries, and improved generalization (Huang et al., 2020; Izmailov et al., 2018); the tendency of models to rely on simple, compressible features (Geirhos et al., 2020); or the potential for overparameterized networks to be implicitly regularized, leading to capacity control (Neyshabur et al., 2014). The marginal likelihood, or the probability that a random sample from the prior generates the data, is a natural statement of Occam’s razor, the idea that simple models should be preferred (Lotfi et al., 2022a). Diffuse priors which spread mass out around the diverse high-complexity solutions are unlikely to generate observed data, so effective models admit a far higher marginal likelihood on naturally occurring labelings compared to random ones (Wilson & Izmailov, 2020).
B A KOLMOGOROV NO FREE LUNCH THEOREM PROOF
Rewriting Equation (2), we have
CE(p) ≥ (ln 2 / n) (K(Y|X) − K(p) − 2 log₂ K(p) − c).
Note that by simply counting all possible programs taking input X, there are fewer than 2^{k+1} labelings Y with K(Y|X) ≤ k. Note that there are C^n distinct labelings, from which we are drawing uniformly, so that

P(K(Y|X) > n log₂ C − m) = 1 − P(K(Y|X) ≤ n log₂ C − m)
  ≥ 1 − P(K(Y|X) ≤ ⌈n log₂ C⌉ − m)
  ≥ 1 − 2^{⌈n log₂ C⌉ − m + 1} / C^n
  ≥ 1 − 2^{2−m}.

Alternatively, with probability at least 1 − δ,

K(Y|X) > n log₂ C − log₂(1/δ) − 3.
Thus, with probability at least 1− δ, we have for every classifier p,
CE(p) ≥ ln C − (ln 2 / n) (K(p) + 2 log₂ K(p) + log₂(1/δ) + c).
C PAC-BAYES COMPRESSION EXPERIMENTAL DETAILS
For the OpenML tabular classification datasets, we preprocess them first by balancing the classes, subsampling all classes down to the number of examples of the least likely class. This way, when compressing the datasets, any result achieved is nontrivial, in contrast with a very class-imbalanced dataset. We closely follow the compression method of Lotfi et al. (2022b), including the small 9-layer convolutional architecture which they use to generate their bounds. When cramming the tabular data into this convnet, we combine numerical features with one-hot encoded categorical features and then pack these into the pixels of a 1-channel image, using however large an image as necessary to fit each of the different features inside.
With respect to the sizes of the random subspaces that we train the compressed models in, we consider 250, 500, 1000, and 2000. For tabular label compression, we employ a 2-hidden-layer MLP with hidden dimension k = 192, and we consider the same 250, 500, 1000, and 2000 values for the subspace dimension. We train for 80 epochs with 20 epochs of quantization at a batch size of 512 using Adam at lr = 3×10⁻⁴. For image classification label compression, we use the 9-layer convnet with subspace dimensions 2000, 3000, and 5000, and we train for 80 epochs using SGD at learning rate 0.1 and quantize for the remaining 20 epochs, at a batch size of 50. For calculating the length of the code for the model architecture and decompressor, we need only the implementation of the model, the arithmetic decoder, and the loading of the quantized values. Removing wasted bits, we minified the Python file, leading to a size of approximately 2.5 KB.
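As an illustration of the cramming step described above, the sketch below packs one row of mixed tabular features into a zero-padded single-channel square image; the feature values and the smallest-fitting-square layout are illustrative assumptions rather than the exact preprocessing.

```python
import numpy as np

def tabular_row_to_image(numerical, categorical_onehots):
    """Concatenate numeric and one-hot categorical features, then pad them
    into the smallest 1-channel square image that fits every feature."""
    feats = np.concatenate([numerical] + categorical_onehots).astype(np.float32)
    side = int(np.ceil(np.sqrt(feats.size)))   # smallest square that fits
    img = np.zeros(side * side, dtype=np.float32)
    img[:feats.size] = feats                   # remaining pixels stay zero
    return img.reshape(1, side, side)          # (channels, height, width)

row_numeric = np.array([0.3, -1.2, 5.0])       # hypothetical numeric features
row_onehots = [np.eye(4)[2], np.eye(3)[0]]     # hypothetical one-hot categories
image = tabular_row_to_image(row_numeric, row_onehots)   # shape (1, 4, 4) here
```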
D GPT-3 EXPERIMENTAL DETAILS
To feed sequences into a model, we split up sequence elements into individual byte-pair encoding tokens corresponding to their decimal digits, and we place comma tokens between sequence elements as delimiters, also beginning every input with an <|endoftext|> token. We choose to use the byte-pair encoding of decimal digits with a space inserted before the digit, e.g. ‘ 0’ as this is known to enhance the ability of language models to perform arithmetic (Zelikman et al., 2022). For example, the sequence 10, 11 will be split up into [‘<|endoftext|>’, ‘ 1’, ‘ 0’, ‘,’, ‘ 1’, ‘ 1’], and each element of the list is tokenized individually. Then, the log-probability of a sequence is given by the sum of the log-probabilities corresponding to the correct decimal digits in their respective slots of the model’s output. Note that various sequences will contain different numbers of decimal digits, and the sequence’s log-probability will decrease with every token. Therefore, in order for fair comparison, we limit all sequences to 30 decimal digit tokens and truncate there.
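A sketch of this tokenization and scoring is given below. The helper names are hypothetical, and the per-token log-probabilities are assumed to be supplied by the scoring API, with token_logprobs[i] holding the model's log-probability for tokens[i+1] given the preceding tokens.

```python
def sequence_to_tokens(seq, max_digit_tokens=30):
    """Render [10, 11] as ['<|endoftext|>', ' 1', ' 0', ',', ' 1', ' 1'],
    truncating once 30 decimal-digit tokens have been emitted."""
    tokens, digits = ["<|endoftext|>"], 0
    for j, n in enumerate(seq):
        if j > 0:
            tokens.append(",")                 # delimiter between elements
        for d in str(n):
            if digits == max_digit_tokens:
                return tokens
            tokens.append(f" {d}")             # space-prefixed decimal digit
            digits += 1
    return tokens

def sequence_logprob(tokens, token_logprobs):
    """Sum log-probabilities over the digit slots only (commas excluded)."""
    return sum(lp for tok, lp in zip(tokens[1:], token_logprobs)
               if tok.startswith(" "))
```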
E SEQUENCE GENERATION AND COMPLETION WITH RANDOMLY INITIALIZED LANGUAGE MODELS
For these experiments, we use Huggingface² GPT-2 architectures and pre-trained checkpoints. In order to estimate the probabilities assigned by randomly initialized language models to each bitstring, we generate one million random sequences, ensuring that many instances of each bitstring are generated, as there are only 2^{10} = 1024 bitstrings of length 10.
We also include plots with other sizes of GPT-2 architectures in Figure 7 and Figure 8.
We also include the hypothesis test referenced in Section 4.3. For this experiment, we generate 100,000 length-100 sequences from randomly initialized GPT-2 variants, pre-trained GPT-2 variants, and a uniform distribution. We then perform a one-sided t-test on the null hypothesis that μ(K(S_GPT)) ≥ μ(K(S_U)), for both randomly initialized and pre-trained models. Table 1 contains the resulting sample means, t-statistics, and p-values. In all cases, we reject the null hypothesis with very low p-values, indicating that language models do prefer to generate low-complexity sequences. Notably, pre-trained language models exhibit an increased simplicity bias, and bigger and better language models even more so.
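A minimal sketch of this test, assuming the per-sequence complexity estimates have already been collected (the file names are hypothetical; scipy ≥ 1.6 is needed for the alternative argument):

```python
import numpy as np
from scipy import stats

k_gpt = np.load("k_gpt2_sequences.npy")         # 100,000 complexity estimates
k_uniform = np.load("k_uniform_sequences.npy")  # same for uniform sequences

# H0: mu(K(S_GPT)) >= mu(K(S_U)); 'less' rejects when GPT sequences are simpler.
t_stat, p_value = stats.ttest_ind(k_gpt, k_uniform, alternative="less")
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")
```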
2https://huggingface.co/
F BIG AND SMALL TRAINING SETS
Polynomial regression. We choose three example target functions on which to perform regression: cos((3π/2)x), x², and −36x + 49x⁵ − 14x⁷ + x¹⁰. Training data is randomly drawn from a uniform distribution over the unit interval, and we add noise to training labels from N(0, 0.1). In each case, for each dataset size, we average the mean squared error over 100 randomly sampled training sets. For Tikhonov-regularized polynomial regression on the cosine and degree 2 polynomial target functions, we use α = 0.01, and we use α = 0.001 for regression on the degree 10 polynomial target function.
Image classification with neural networks. For ImageNet-trained models, we employ publicly available checkpoints from torchvision³. We train models on CIFAR-10 and CIFAR-100 for 200 epochs with initial learning rate 0.1 and cosine annealing, along with horizontal flip and random crop augmentations. We use SGD with momentum 0.9 and batches of size 128. All CIFAR images are rescaled to 224 × 224 so that we can use an identical model for ImageNet and CIFAR data. In order to learn the parameter c controlling the convex combination of models, we perform 10 epochs of training, where the models’ parameters are frozen, and we apply weight decay with coefficient 10⁻⁵. We learn the parameter c using SGD with momentum 0.9 and batch size 128, initial learning rate 0.1, and cosine annealing. | 1. What is the focus of the paper regarding neural network models and their preference for certain types of data?
2. What are the strengths and weaknesses of the paper's approach to explaining generalization in deep learning and improving model selection?
3. Do you have any concerns or reservations about the paper's methodology or conclusions?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors argue that practical neural network models prefer low-complexity data (in the sense of Kolmogorov complexity). They show that common architectures are compressors for labeling functions even in tasks that seem unrelated (e.g., CNNs on tabular data). Also, both pre-trained and random language models prefer to generate low-complexity sequences. Since data in practice have low complexity, the authors argue that the bias of models towards low-complexity data may explain generalization and help model selection. They argue that a single learner which can express complex functions but prefers simple solutions can be used for a wide range of problems.
Strengths And Weaknesses
The paper considers important questions, such as explaining generalization in deep learning and improving model selection. However, I don’t think that the insight in this paper is significant.
Regarding the main theme of explaining the success of deep learning by showing that practical models prefer low Kolmogorov-complexity solutions: I agree that explaining generalization by understanding simplicity bias is a promising direction. The notion of simplicity bias is not new and has been extensively studied in recent years (although not in the context of Kolmogorov complexity). In order to convince the reader that Kolmogorov complexity is the right tool for studying simplicity bias, I think that stronger and more quantitative claims should be shown. Namely, showing that the complexity of practical models is much lower than the complexity of the uniform distribution is a rather weak claim. It is not clear whether such observations are even close to implying any meaningful generalization bound.
Finally, the idea of using a complex architecture while encouraging simplicity (e.g., via regularization) is clearly not new. This idea is widely used both in practice and in theoretical papers.
Clarity, Quality, Novelty And Reproducibility
The authors make an effort to highlight the main ideas, which improves the clarity. However, as I discussed above, I think that neither the quality of the results nor the novelty is high.
ICLR | Title
Real Data Distributions Prefer Simplicity and So Do Our Models: Why Machine Learning and Model Selection Are Possible
Abstract
No free lunch theorems for supervised learning state that no learner can solve all problems or that all learners achieve exactly the same accuracy on average over a uniform distribution on learning problems. Accordingly, these theorems are often referenced in support of the notion that individual problems require specially tailored inductive biases. While all but exponentially few uniformly sampled datasets have high complexity, we argue that neural network models share the same preference for low-complexity data that we observe on real-world problems. Notably, we show that architectures designed for a particular domain, such as computer vision, are compressors for labeling functions on a variety of seemingly unrelated domains. From our experiments, we see that pre-trained and even randomly initialized language models prefer to generate low-complexity sequences and can therefore be used for inference. In principle, the use of expert knowledge and bias for simplicity of human practitioners could be folded into the learning algorithm, automating design and selection of models. We explain how typical areas requiring human intervention such as picking the appropriately sized model when labeled data is sparse or plentiful can be automated into a single learning algorithm. These observations help justify the trend in deep learning of unifying seemingly disparate problems with an increasingly small set of machine learning models.
1 INTRODUCTION
The problem of justifying inductive reasoning has plagued epistemologists since at least the 1700s (Hume, 1748). More recently, in the late 1990s, no free lunch theorems emerged from the computer science community as rigorous versions of arguments showing the impossibility of induction in contexts seemingly relevant to real machine learning problems (Wolpert, 1996; Wolpert & Macready, 1997). One such no free lunch theorem for supervised learning states that no single learner can solve every problem (Shalev-Shwartz & Ben-David, 2014). Another states that, assuming a world where the labeling functions of learning problems are drawn from a uniform distribution, no learner can reliably perform inference at all (Wolpert, 1996). Such a world would be altogether hostile to inductive reasoning. The assumption that labelings are drawn uniformly ensures that training set labels are entirely uninformative about unseen samples.
In contrast to this dismal outlook on machine learning, naturally occurring inference problems involve highly structured data. If we can design learning algorithms with inductive biases that are aligned with this structure, then we may hope to perform inference on a wide range of problems. In this work, we explore structure in both real-world data and modern machine learning models, primarily through the lens of Kolmogorov complexity.
The Kolmogorov complexity of some output is defined as the length of the shortest program under a fixed language that produces that output. In Section 3, we explain the connection between Kolmogorov complexity and compressibility. We note that virtually all random data cannot be significantly compressed, yet relevant datasets are highly compressible and hence substantially less complex. In particular, neural networks themselves can be used to compress data labelings, upper bounding their Kolmogorov complexity.
Complementing the low complexity of actual data, we then demonstrate in Section 4 that modern neural networks prefer low Kolmogorov complexity too. While models implemented on a computer provably cannot generate data with complexity exceeding the length of their associated program, we find they actually prefer data that is far simpler. We formulate simple languages for generating numerical sequences, under which we can directly measure the Kolmogorov complexity of a sequence. We use these languages to inspect the simplicity bias of both pre-trained and randomly initialized language models. GPT-3 (Brown et al., 2020) reliably favors less complex sequences, and bigger and better GPT-3 variants even more so. Notably, randomly initialized GPT models share this simplicity bias. As such, we can use them to predict the next element in a sequence as long as the target sequence is low-complexity. To further emphasize the universality of this simplicity bias, we cram tabular data from diverse domains, including click prediction and airline delay prediction, into convolutional computer vision architectures and show that these vision architectures prefer correct labelings to random ones, even on data which do not remotely resemble natural images. We use this property to compute cross-domain non-vacuous PAC-Bayes generalization bounds.
A common intuition associated with no free lunch theorems dictates that since a single learner cannot solve all problems, practitioners must inspect data and manually select an appropriate learner for the specific problem at hand. For example, a practitioner might select a more constrained model to avoid overfitting on small datasets, or they may select convolutional architectures to accommodate natural image data. To the contrary, we explain in Section 5 why model selection can be automated from the standpoint of PAC-Bayes theory, namely selecting between learning algorithms bears negligible complexity cost, and this cost is quickly overcome by gains in validation accuracy under reasonably sized validation splits. Moreover, a single learner, which on the one hand supports a variety of functions but on the other hand prefers simple ones, can solve a wide range of problems. We show that flexible models accompanied by a penalty which encourages simple solutions can solve problems with a variety of sample sizes. For example, a single learner which combines a small convolutional neural network and a transformer, while encouraging use of the small model insofar as it fits the training data, achieves strong performance both on small datasets like CIFAR-10 (Krizhevsky, 2009) and also large datasets like ImageNet (Deng et al., 2009). We highlight in Figure 1 that the historic evolution of machine learning systems supports the ability of a single learner to perform diverse tasks as highly task-specific pre-neural algorithms, such as Latent Dirichlet Allocation (Blei et al., 2003) and HOG (Dalal & Triggs, 2005), were replaced by neural architectures such as convolutional or recurrent models, and transformers can now handily perform all tasks listed.
Our contributions are summarized as follows:
• We demonstrate the direct connection between compressibility and learnability that is implicit in no free lunch theorems by deriving a new NFL theorem using Kolmogorov complexity.
• We show that the low Kolmogorov complexity of real datasets can be directly derived from the machine learning models used to fit them.
• We provide strong empirical evidence that neural networks have low complexity biases that are relevant even on data far from what they were designed for. To this end, we compute generalization bounds using convolutional vision models on tabular data, we showcase GPT-3’s strong preference for sequences generated by shorter expression trees, and we find that even randomly initialized language models have a simplicity bias.
• We argue why in principle model selection can be automated into a larger algorithm, and we provide explicit examples demonstrating how multiple models can be subsumed into a single powerful learner.
2 BACKGROUND
No free lunch theorems. No free lunch theorems (NFL) state that without making strong assumptions, a single algorithm cannot simultaneously solve all problems well. No free lunch theorems for search and optimization indicate that all optimizers and search algorithms satisfying certain conditions perform exactly the same on average over all such search or optimization problems (Wolpert et al., 1995; Wolpert & Macready, 1997). In this work, we narrow our focus to NFL for supervised learning. Wolpert (1996), and similarly Schaffer (1994), proves an analogous theorem for supervised learning under which every learner—a function that takes in labeled data and outputs a
labeling function for the associated domain—achieves exactly the same accuracy of 50% on average over all binary classification problems where accuracy is only evaluated on unseen samples.
In order to prove no free lunch theorems, one needs to place very strong assumptions on the lack of structure in the labeling function, such as a uniform distribution, so that conditioning on the training labels does not modify the probability over labelings on unseen points (Rao et al., 1995). To illustrate the severity of this condition, imagine that a person is presented with a sequence of one million 1s and asked whether they predict that the next element will be 1 or 0. If labelings of sequence elements were distributed uniformly, then we should assign equal probability to both options, despite the fact that intuition and Bayesian probabilistic models tell us overwhelmingly to favor 1.
Shalev-Shwartz & Ben-David (2014) instead do not assume a particular distribution over learning problems and prove that for every learner, there is a task on which the learner achieves poor accuracy with high probability over training splits, whereas another learner solves the same problem with perfect accuracy. Notably, the latter NFL computes accuracy over all data, not just “off-training” samples. While this statement of NFL does not require uniformly distributed data, if the existence of catastrophic failure modes for our learners is to be damning, our learners must encounter them in practice. After all, we do not care if our learners cannot solve problems we do not want to solve. Thus, the practical relevance of this theorem again hinges on the distribution over real-world learning problems and how well it aligns with the inductive bias of a learner. In this paper, we argue that the real-world learning problems we care about are highly structured, and the inductive biases of neural networks are well-aligned with such problems.
Kolmogorov complexity and compression. Kolmogorov complexity is a quantitative measure of complexity. For any fixed programming language L, the Kolmogorov complexity of data x, K(x), is the length of the shortest program in that language that outputs x (Kolmogorov, 1963). We can similarly define K(y|x) as the length of the shortest program which inputs x and outputs y. For definiteness, we assume that all programs and outputs are finite bitstrings.
Despite the fact that one cannot prove that any particular string has high complexity (Chaitin, 1974), all but exponentially few sequences of a given length have near maximal Kolmogorov complexity. Since there can only be fewer than 2^{n+1} programs of length less than or equal to n, by the pigeonhole principle there are at most 2^{n+1} distinct bitstrings x with K(x) ≤ n. Equivalently, drawing a random bitstring x of length n, there is at most a 2^{1−k} chance that K(x) ≤ n − k. As this probability decays exponentially in k, this bound shows that the vast majority of bitstrings of length n have complexity close to n. One way of upper bounding K(x) is to compress x, but then we must include both the size of the compressed file and the size of the program required to decompress it. Alternatively, if we can construct a short program which directly outputs x, this program also forms a compression of x. The above discussion then demonstrates that the vast majority of bitstrings do not admit a non-trivial compression.
Notions of Kolmogorov complexity and simplicity bias have also been discussed in the context of universal induction algorithms and machine learning more broadly. Notably, one can use Kolmogorov complexity to construct the Solomonoff prior (Solomonoff, 1964), P(x) = 2^{−[K(x)+2 log K(x)]}/Z (with Z < 1), a formalization of the principle of Occam’s razor favoring simplicity. One can perform inference using just this prior via universal induction algorithms, and we discuss these matters in more detail along with complexity in deep learning in Appendix A.
PAC-Bayes compression bounds. Generalization bounds limit how a model’s expected risk R(h) will differ from its train risk R̂(h). PAC-Bayes generalization bounds (McAllester, 1998; 2003; 2013; Alquier, 2021) can be considered a generalization and improvement of the finite hypothesis bounds (Mohri et al., 2018) which provide a generalization guarantee even when the number of hypotheses is large (or even infinite) so long as the given hypothesis is likely under a prior P (h). The simplest version of the bound (McAllester, 1999) applied with a point mass posterior Q(h) = 1[h=h∗] states that
R(h) ≤ R̂(h) + √((− log P(h) + log(n/δ) + 2) / (2n − 1))   (1)

with probability at least 1 − δ, where n is the size of the training set. Choosing the Solomonoff prior P(h) = 2^{−[K(h)+2 log K(h)]}/Z as done in Lotfi et al. (2022b) and noting that the normalization constant Z ≤ 1, the prior likelihood of a given hypothesis can be bounded via

− log₂ P(h) ≤ K(h) + 2 log₂ K(h) ≤ C(h) + 2 log₂ C(h)

for any compression scheme C. Combining these two bounds provides a theoretical basis for the PAC-Bayes compression approach explored in other work (Zhou et al., 2018; Lotfi et al., 2022b) and can be used to derive nonvacuous generalization bounds even for large neural networks when using a strong compression scheme.
3 STRUCTURE IN REAL-WORLD DATA
The often-cited no free lunch theorem of Wolpert (1996) states that all learners perform the same when averaged over a uniform distribution on all possible datasets. However, the assumption of uniform samples is very strong and extremely unnatural. This assumption subtly selects high-complexity, incompressible data, a set of problems on which learning is fundamentally impossible. By means of a hypothesis test, we are able to conclusively rule out the possibility that real datasets are drawn from the uniform distribution, and with it the assertion that this NFL theorem applies to problems of interest. We bring to the surface the centrality of incompressibility in NFL theorems by deriving a new NFL theorem directly from ideas of data compression and Kolmogorov complexity.
3.1 AN EXERCISE IN BOUNDING THE COMPLEXITY OF DATASETS
We first consider the hypothesis that unlabeled machine learning datasets are drawn uniformly at random and use a bound on the Kolmogorov complexity as a test statistic. Using bzip2, and including the size of the compression program, we compress the text dataset Amazon Review Full (McAuley & Leskovec, 2013) and the audio dataset LibriSpeech (Panayotov et al., 2015) to 393.2 MB and 8.36 GB respectively, providing upper bounds on the Kolmogorov complexity with respect to the Python programming language. Supposing the datasets were in fact uniformly sampled, the probability of observing complexities this size or smaller is less than 2^{−2^{32}} and 2^{−2^{36}} respectively: astronomically low p-values, conclusively ruling out the possibility that they were sampled in this way. If we randomly shuffle the datasets, we instead obtain bounds of 836.7 MB and 9.69 GB, considerably larger, showing that the compressibility results not just from an inefficient encoding, but from structure in the dataset. Other works have also examined Kolmogorov complexity in data, for example EEG patterns (Petrosian, 1995) or animal behavior (Zenil et al., 2015), and likewise confirm that such naturally occurring data is simple.
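A sketch of this compression test follows; the file name is hypothetical, and we shuffle at the byte level for simplicity, which is an assumption about the shuffling granularity.

```python
import bz2
import random

def complexity_bound_bytes(raw: bytes) -> int:
    """bzip2 output size upper-bounds K(raw), up to the fixed size of the
    decompression program."""
    return len(bz2.compress(raw, compresslevel=9))

raw = open("amazon_review_full.txt", "rb").read()  # hypothetical dataset dump
shuffled = bytearray(raw)
random.shuffle(shuffled)                           # destroy dataset structure
print(complexity_bound_bytes(raw))                 # small: the data is structured
print(complexity_bound_bytes(bytes(shuffled)))     # larger: structure destroyed
```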
3.2 NEURAL NETWORKS AS COMPRESSORS OF THE LABELING FUNCTION
More relevant to the context of supervised learning, we show that not only are the unlabeled datasets compressible—the labeling functions are too. Further, this can be demonstrated using trained models as compressors. Given a labeled dataset D = (X, Y) = {(x_i, y_i)}_{i=1}^{n}, any likelihood model p(y|x)—regardless of whether the top predictions are correct—can be used to generate a lossless compression scheme to encode the dataset labels Y given the inputs X. Using a stream code such as arithmetic coding (Witten et al., 1987) in combination with the probability model p(y|x), the labels can be encoded in K(Y|X, p) ≤ −∑_{i=1}^{n} log₂ p(y_i|x_i) + 3 bits. Models which maximize the log likelihood of the data also implicitly minimize the length of this encoding.
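To see the encoding length concretely, the following sketch evaluates −∑ᵢ log₂ p(yᵢ|xᵢ) + 3 for a toy classifier and compares it to the naive n log₂ C length; the toy probabilities are illustrative.

```python
import numpy as np

def label_code_length_bits(probs, labels):
    """Arithmetic-coding length of the labels under the model p(y|x):
    K(Y | X, p) <= -sum_i log2 p(y_i | x_i) + 3 bits."""
    return -np.log2(probs[np.arange(len(labels)), labels]).sum() + 3

n, C = 1000, 10
probs = np.full((n, C), 0.05)          # toy model: 55% mass on the true class
labels = np.zeros(n, dtype=int)
probs[np.arange(n), labels] = 0.55
print(label_code_length_bits(probs, labels))   # ~865 bits
print(n * np.log2(C))                          # naive length: ~3322 bits
```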
One can show by delimiting, as is done in Fortnow (2000), that K(Y|X) ≤ K(Y|X, p) + K(p) + 2 log₂ K(p) + c, where c is some small constant depending on the language. Writing the negative log likelihood in terms of the empirical cross entropy, combining our two inequalities, and dividing by the size of the dataset n yields

(1/n) K(Y|X) ≤ CE / ln 2 + n⁻¹ (K(p) + 2 log₂ K(p) + c),   (2)
where CE is the cross entropy of the classifier p averaged over dataset D. This implies that, regardless of how large the model is—so long as CE is better than random guess—if the size n of the dataset is large enough, the model provides a non-trivial compression of the dataset. To demonstrate this fact, we employ the compression scheme from Lotfi et al. (2022b) in order to find a compressed representation of MLPs on several class-balanced tabular classification datasets¹. As shown in Figure 2 (middle), we are able to compress the labels on most of the datasets to well below the naive n log₂ C encoding length, where C is the number of classes. We also apply the method with convolutional architectures to compress labels on CIFAR-10 and CIFAR-100 in Figure 2 (right), allowing us to reject the hypothesis that the labeling functions are drawn uniformly at random with extremely high confidence.
3.3 A KOLMOGOROV-STYLE NO FREE LUNCH THEOREM
A corollary of Equation 2 is that if the dataset is incompressible, then no model can do better than random chance in the limit of a large dataset. Since compressible datasets in uniformly sampled data are exponentially unlikely, we can prove our own version of the no free lunch theorem directly from this constraint on complexity. With very high probability, on any given uniformly sampled dataset, learning is impossible. Theorem 1. Let (X,Y ) be a labeled dataset with n data points and uniformly sampled random labels from C classes. Then, with probability at least 1− δ, for every classifier p(y|x),
CE(p) ≥ ln C − (ln 2 / n) (K(p) + 2 log₂ K(p) + log₂(1/δ) + c)   (3)
where CE(p) is the empirical cross entropy of the classifier p(y|x) on the data. Thus for any model of bounded size, if the size of the dataset is large enough, the model cannot represent any classifier with cross entropy appreciably smaller than that attained from random guess.
Proof: See Appendix B
Like any of the no free lunch theorems, this initially seems limiting, but here we see that the incompressibility of the datasets from the uniform distribution is the fundamental issue. On compressible datasets (ones with less than maximal complexity), learning is possible.
Takeaway: Real datasets are highly unlike the high complexity samples from the uniform distribution where learning is impossible.
4 LOW-COMPLEXITY BIAS IN MACHINE LEARNING MODELS
In the previous section, we saw that real-world data distributions across domains are highly structured and therefore have low Kolmogorov complexity. If we can construct models which prefer the same low-complexity data, then we can hope to perform inference with a single model across many such domains. While early machine learning systems incorporated highly domain-specific design elements, such as handcrafted image features (Dalal & Triggs, 2005) or graphical models for language modeling (Mnih & Hinton, 2007), modern neural network architectures across domains are converging on transformers (Vaswani et al., 2017; Dosovitskiy et al., 2020; Gulati et al., 2020; Somepalli et al., 2021), some of which can simultaneously achieve impressive performance on a variety of data types with a single architecture (Jaegle et al., 2021).
In this section, we argue that neural networks have a generic simplicity bias that extends well beyond the datasets for which they are designed. To this end, we: (1) feed tabular datasets from diverse
1https://www.openml.org/
domains such as click prediction and airline delay prediction into neural networks designed specifically for computer vision and find that they prefer naturally occurring labelings to random ones, (2) formulate a language with respect to which we can measure the Kolmogorov complexity of numerical sequences and observe that GPT-3 generates low-complexity sequences with exponentially higher probability than complex ones, (3) predict the next term in a sequence with randomly initialized language models. Whereas the no free lunch theorem of Wolpert (Wolpert, 1996) ensures that such an inference procedure cannot outperform a random guess on average, we find that randomly initialized neural networks prefer sequence completions which generate low-complexity completed sequences, demonstrating that they can make accurate guesses as long as the ground truth sequence distribution also favors low complexity.
4.1 NEURAL NETWORKS PREFER NATURALLY OCCURRING LABELINGS ACROSS DOMAINS
The inductive biases of even specialized architectures like convolutional neural networks facilitate broad learning abilities. To demonstrate this fact, we take tabular classification datasets and encode the tabular features as an image by simply forming images with one channel so that each pixel corresponds to a different feature, and padding the edges with zeros as needed. We train a small 9 layer convolutional network using this input data to predict the classification labels. Since the data has no local or translation equivariant structure, learning with the convolutional network requires breaking these structures. Even in spite of this mismatch, the convolutional networks perform much better than random chance. Furthermore, using the PAC-Bayes compression methodology from Lotfi et al. (2022b), we are able to find a compressible set of parameters providing nonvacuous generalization bounds on the model’s performance, as shown in Figure 2. Despite violating the locality and translation equivariance inductive biases of the architecture, the model is still highly compressible when fitting the tabular data and provably generalizes, suggesting that a core part of these inductive biases are general and shared with this tabular data.
Takeaway: While architectures such as CNNs have inductive biases tailored for specific problems, much of their inductive bias is general, extending across domains.
4.2 GPT-3 ASSIGNS EXPONENTIALLY HIGHER PROBABILITY TO SIMPLER SEQUENCES
We now study the preference that GPT-3—a line of high-performance autoregressive text generators—has for simpler sequences. The ability of language models to solve reasoning problems has recently been studied by Zelikman et al. (2022), who develop an effective prompting framework, and d’Ascoli et al. (2022), who develop transformers for predicting symbolic expressions directly from the sequence. To perform our own study, we need a well-defined, computable notion of complexity. We thus define a simple, non-Turing-complete language and measure Kolmogorov complexity with respect to this simple language. Namely, we generate integer sequences with binary expression trees. We then define the complexity of a sequence as the size of the smallest expression tree, measured as the number of internal nodes—or equivalently the number of operators in the expression represented by the tree—that generates that sequence. By using a small set of terms for the leaves and binary operators for the nodes, we can enumerate all possible expression trees of sufficiently small size, at most L, and compute all sequences with complexities 0 through L.
In our experiments, we use the operations +, ×, and //, where // denotes integer division. For leaves, we use 2 and i, where i is the index within the sequence. For example, (2 + i) × i could be implemented with a tree of size 2 and would generate the sequence a_i = 0, 3, 8, 15, ... Using this setup, we are able to generate sequences of varying complexity, according to a well-defined metric, and quantify the preference of GPT-3 models for simpler sequences over more complex ones. We provide details on how we tokenize sequences and extract their probabilities in Appendix D.
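A sketch of this enumeration is given below; mapping integer division by zero to 0 and identifying sequences by their first eight terms are our assumptions, not necessarily the paper's exact conventions.

```python
from itertools import product

OPS = [lambda a, b: a + b,
       lambda a, b: a * b,
       lambda a, b: a // b if b != 0 else 0]    # x // 0 := 0 (assumption)

def trees(n_ops):
    """All expression trees with exactly n_ops operators, as functions of the
    sequence index i; the leaves are the constant 2 and the index i."""
    if n_ops == 0:
        return [lambda i: 2, lambda i: i]
    out = []
    for k in range(n_ops):                      # split operators across subtrees
        for L, R, op in product(trees(k), trees(n_ops - 1 - k), OPS):
            out.append(lambda i, L=L, R=R, op=op: op(L(i), R(i)))
    return out

def min_complexities(max_ops=3, length=8):
    """Smallest tree size generating each distinct length-`length` sequence."""
    best = {}
    for n_ops in range(max_ops + 1):            # smallest sizes first
        for f in trees(n_ops):
            best.setdefault(tuple(f(i) for i in range(length)), n_ops)
    return best

print(min_complexities()[(0, 3, 8, 15, 24, 35, 48, 63)])   # (2 + i) * i -> 2
```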
In Figure 3, we measure the average log-probability GPT-3 models assign to sequences of a given Kolmogorov complexity, where we fix the number of numerical tokens input into the model to be 30, and we observe that the probabilities assigned by these language models decrease exponentially with sequence complexity, similar to the Solomonoff prior discussed in Section 2. In contrast, a uniform prior would be described by a flat line. Note that for very complex sequences, we cannot easily measure their minimum description length, so we limit our experiments to expression trees with at most 7 operators. In this low-complexity regime, we observe that bigger GPT-3 models which are better at language modeling, such as Davinci which contains 175 billion parameters, assign higher probability to these simple sequences than much smaller GPT-3 models such as Ada. The legend lists GPT-3 variants in order of increasing size.
We can also examine the decay of such log-probabilities as we feed more tokens, corresponding to digits of sequence elements, into the model. As the sequences get longer, we see in Figure 3 that the probabilities assigned to sequences decay sub-exponentially, indicating that these models, especially bigger variants such as Davinci, become more and more confident about later sequence elements.
4.3 EVEN RANDOMLY INITIALIZED LANGUAGE MODELS PREFER LOW COMPLEXITY
The previous section solely examined pre-trained language models, but these models have been trained on massive corpora of data. Do they prefer low complexity before they have even seen any data at all? While initializers of such models follow highly diffuse distributions over parameters, such random parameters can induce highly structured functions.
Trained language models are known to repeat themselves when generating text (Holtzman et al., 2020; Fu et al., 2021). One might think that this behavior is learned from training data
which often contains repeated text, but we show in this section that randomly initialized GPT models repeat themselves too. Interestingly, we can formalize the preference for repetition as a preference for low Kolmogorov complexity. In order to disentangle the impact of initialization from training,
we adopt a simple language for generating binary sequences under which we can quickly measure Kolmogorov complexity. We consider a program to be a bitstring, and then the program upon execution simply repeats the bitstring until output reaches length 10. Under this language, the sequence 0, 0, 0, ... has Kolmogorov complexity 1, and 0, 1, 0, 1, ... has complexity 2, yet randomly generated sequences are exponentially more likely to have high complexity. We conduct our evaluations exhaustively on all such sequences of length 10.
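Under this language, measuring K reduces to finding the shortest prefix that, repeated and truncated, reproduces the sequence. A short sketch, with an exhaustive count confirming that most of the 1024 length-10 bitstrings have near-maximal complexity:

```python
from collections import Counter
from itertools import product

def repetition_complexity(bits):
    """K under the toy language: length of the shortest bitstring that,
    repeated until length len(bits), reproduces the sequence."""
    n = len(bits)
    return next(k for k in range(1, n + 1)
                if all(bits[i] == bits[i % k] for i in range(n)))

counts = Counter(repetition_complexity(s) for s in product("01", repeat=10))
print(sorted(counts.items()))   # probability mass concentrates at high K
```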
We now generate sequences of length 10 with randomly initialized GPT-2 language models (Radford et al., 2019), such that each initialization is used to generate one sequence, and we measure the frequency with which each such sequence is generated. The estimated generation probabilities by Kolmogorov complexity are found in Figure 4, where we see again that low-complexity sequences are assigned exponentially higher probabilities. Here, we see that pre-trained checkpoints exhibit an even stronger preference for low complexity as they are trained on structured text. We can also use the randomly initialized language models to perform next element prediction by estimating the probabilities they assign to the next element in a sequence given having correctly generated the previous terms. While Wolpert’s no free lunch theorem (Wolpert, 1996) ensures that the average completion accuracy over all possible length 10 bitstrings is exactly 0.5, we verify in Figure 4 that randomly initialized networks can be used for sequence completion when the sequence is of low complexity. Additional details and experiments with other GPT-2 architecture variants are found in Appendix E.
We can further generate very long length-100 sequences with randomly initialized and also pre-trained GPT-2 models and run a simple hypothesis test, demonstrating that both randomly initialized and pre-trained models generate lower Kolmogorov complexity sequences on average than a uniform distribution. Over 100,000 samples from each of these three generative distributions and with a one-tailed t-test on the null hypothesis that μ(K(S_GPT)) ≥ μ(K(S_U)), where S_GPT and S_U respectively denote random sequences generated by the language model or a uniform distribution, we reject this null hypothesis for both randomly initialized and pre-trained models with an extremely low p-value, indicating that language models are indeed more likely to generate simple sequences, pre-trained models even more so. Further details can be found in Appendix E. We conclude that neural networks for language generation, both trained and randomly initialized, express a bias towards low Kolmogorov complexity which mirrors that of data as demonstrated in Section 3.
Takeaway: Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity.
5 MODEL SELECTION WITH A PREFERENCE FOR SIMPLICITY
In typical industrial workflows, practitioners examine their data and select an appropriate learner. For example, if data are linearly separable, then a practitioner might use logistic regression, whereas if the data is more complex, the same practitioner may instead choose a neural network. We can then consider the human model selector and the model they select as a single meta-learner. While early works conjectured that automated meta-learners are ruled out by no free lunch theorems (Giraud-Carrier & Provost, 2005), subsequent works show that model selection can in fact be automated in practice (Vilalta & Drissi, 2002). Giraud-Carrier & Provost (2005) shows that with minimal assumptions, the defeating conclusion of the no free lunch theorem can be escaped by meta-learners. Subsequent works prove the existence of meta-induction algorithms, for selecting amongst all induction methods, which are near-optimal among learners (Schurz, 2008; Schurz & Thorn, 2017). In this section, we argue why, in principle, model selection can be automated from the view of complexity.
5.1 MODEL SELECTION AND GENERALIZATION BOUNDS
When developing a machine learning approach for a given application, it is often helpful to leverage domain knowledge in constructing or choosing the right model for the task. One might start by choosing from architecture families like MLPs, CNNs, GNNs, PointNets, or Transformers and then decide on the appropriate way of featurizing the inputs, possibly incorporating knowledge of data symmetries via hard-coded equivariances or data augmentations. Even just enumerating the possible models a practitioner would consider and choosing based on their expertise and prior knowledge, the number is not tremendously large. Even if we are extremely generous and suppose that the practitioner is choosing from 100 million models, we can consider the impractical algorithm of selecting between them via cross validation. While one might expect that such a procedure would overfit, in fact even finite hypothesis bounds show this is not the case. Using cross validation on a validation set of size n = 20000 for a classification problem, plugging in a uniform prior P(h) = 10⁻⁸ to Equation 1, we get that the gap between validation and test error will be less than 2.9% with probability greater than 99%. Ultimately, this is the case because we only need a number of data points proportional to the log of the size of the hypothesis space for finite hypothesis bounds.
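The 2.9% figure can be checked directly; a sketch of the arithmetic, assuming natural logarithms in Equation 1 (small differences from rounding or log conventions are possible):

```python
import math

def finite_hypothesis_gap(prior, n, delta):
    """Point-mass PAC-Bayes bound (Equation 1) on the validation-test gap."""
    return math.sqrt((-math.log(prior) + math.log(n / delta) + 2) / (2 * n - 1))

# 10^8 candidate models, 20,000 validation points, 99% confidence:
print(finite_hypothesis_gap(1e-8, 20_000, 0.01))   # ~0.0295, i.e. roughly 2.9%
```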
Considering an even more general class of models, one may count the number of bits of prior knowledge needed to specify model architectures like MLPs, CNNs, or GNNs, as well as symmetry groups and any other required information. In each of these cases, the model architectures can in fact be expressed with a small number of bits. A near state-of-the-art computer vision model can be expressed in only 280 characters of PyTorch (Trockman & Kolter, 2022). Similarly, important symmetry groups like translations, rotations, reflections, and other matrix groups can be expressed in only a few lines of code (Finzi et al., 2021) and can be used to encode equivariances or for augmentation. Therefore, even in selecting from all possible models that can be expressed in that short amount of code, we can expect to generalize with only tens of thousands of data points.
Takeaway: In principle, automating model selection directly via cross validation provably generalizes well across millions of models with only thousands of data points.
5.2 ONE MODEL FOR BIG AND SMALL TRAINING SETS
It is commonly believed that small training datasets demand compact neural network architectures, whereas large training datasets can accommodate flexible architectures. Accordingly, practitioners hand select appropriate models for their datasets. In this section, we argue that a single learner can be effective for all data sizes as long as we encode our a priori preference for simplicity. Our prior should prefer simple functions we believe are more likely yet also support a wide variety of functions in case training data rules out our preferences. We begin with a simple illustration.
Polynomial regression. Common intuition dictates that high degree polynomials easily overfit their training data and should be avoided on small training sets. In contrast, low degree polynomials cannot fit complicated functions so they should be avoided when training data is plentiful. However, we demonstrate here that we can rely on lower order terms insofar as they can explain the data while still allowing higher order terms if they are necessary.
To this end, we adopt Tikhonov regularization with a Tikhonov matrix given by diag({αk²}_{k=0}^{d}); that is, we impose an ℓ2 penalty which increases quadratically with the order of the corresponding monomial. We consider three models: non-regularized degree 2 and degree 10 polynomials as well as a degree 10 polynomial regularized as mentioned above. In Figure 5, we perform regression on the cosine target function (left) and degree 2 (center) and degree 10 (right) polynomial target functions. We confirm that even though a high degree polynomial model performs poorly with few training samples, it eventually outperforms its lower degree counterpart. Meanwhile, a high degree polynomial model with the above Tikhonov regularization outperforms both non-regularized models across most sample sizes on cosine and degree 10 polynomial regression problems, and it notably
matches the performance of a degree 2 model even on data generated by a degree 2 polynomial. We include details regarding the example target functions and the data sampling process in Appendix F.
Neural networks. We illustrate a similar concept with neural networks. We consider a small network, GoogLeNet (Szegedy et al., 2015), which performs well on small datasets such as CIFAR-10 and CIFAR-100 (Krizhevsky, 2009) but poorly on larger datasets like ImageNet (Deng et al., 2009). We also consider a large network, ViT-B/16 (Dosovitskiy et al., 2020), which performs significantly worse on CIFAR variants but much better on ImageNet. As in the polynomial regression example above, we can combine these two architectures, specifying our preference for the simpler GoogLeNet to the extent that it fits the training data. To learn on a dataset, we train both models and then take a convex combination of their logits, c · logits_ViT + (1 − c) · logits_GoogLeNet, controlled by a single parameter c with ℓ2 regularization in favor of GoogLeNet. In Figure 6, we observe that while GoogLeNet and ViT each have strengths
and weaknesses, combining them with a preference for simplicity achieves the best of both worlds. While aggressively restricting our architectures can decrease computational cost, it is unnecessary for generalization. Details and additional experiments with Swin Transformer (Liu et al., 2021) are found in Appendix F.
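A PyTorch sketch of this combination is given below. Freezing both backbones, clamping c into [0, 1], and applying weight decay only to c are our reading of the setup described here and in Appendix F, not necessarily the exact implementation.

```python
import torch
import torch.nn as nn

class SimplicityBiasedCombo(nn.Module):
    """Convex combination c * logits_ViT + (1 - c) * logits_GoogLeNet, where
    weight decay on c pulls the mixture toward the simpler GoogLeNet."""

    def __init__(self, googlenet, vit):
        super().__init__()
        self.googlenet, self.vit = googlenet, vit
        for p in list(googlenet.parameters()) + list(vit.parameters()):
            p.requires_grad_(False)                 # backbones stay frozen
        self.c = nn.Parameter(torch.tensor([0.5]))  # decay pulls c toward 0

    def forward(self, x):
        c = self.c.clamp(0.0, 1.0)                  # keep the combination convex
        return c * self.vit(x) + (1.0 - c) * self.googlenet(x)

# Only c is trained, with l2 regularization in favor of GoogLeNet:
# optimizer = torch.optim.SGD([model.c], lr=0.1, momentum=0.9, weight_decay=1e-5)
```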
6 DISCUSSION
While large ML models are highly flexible, we saw in this work that they reliably prefer low Kolmogorov complexity solutions—aligning well with relevant learning problems—despite not being designed with complexity in mind. This observation raises the question: why exactly do neural networks encode such a strong preference for low complexity, and how can we tune this preference? Complementing the above observation, we also saw that a single expressive model which supports a wide variety of solutions but prefers simple ones can solve both simple and hard problems across sample sizes. Such learners present clear advantages over the current paradigm in deep learning, in which we manually select small constrained architectures or large ones with mild inductive biases, depending on the problem. Keeping this possibility in mind, how can we design expressive yet simplicity-biased models with affordable computational costs?
A UNIVERSAL INDUCTION AND COMPLEXITY IN DEEP LEARNING
Universal induction. Inspired by Kolmogorov complexity, a line of work introduces universal induction methods, which prefer low complexity answers (Solomonoff, 1964; Hutter, 2000; Nakkiran, 2021). Notably, Solomonoff induction (Solomonoff, 1964; Rathmanner & Hutter, 2011) places a probability distribution over bitstrings which vanishes exponentially with their Kolmogorov complexity with respect to a universal Turing machine. Then, presented with an observed sequence (e.g. training data) called a prefix and a candidate completion, we can perform inference via p([prefix, completion] | prefix). In other words, we assign probability to each completion proportional to its probability among sequences which begin with the prefix. Since low-complexity bitstrings are assigned higher probability, we will prefer completions which result in low-complexity completed sequences. Subsequent literature argues that Solomonoff induction explains why no free lunch theorems are practically irrelevant, namely weak assumptions on the structure of learning problems allow for near-optimal universal learners (Lattimore & Hutter, 2013).
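Since Solomonoff induction over a universal Turing machine is uncomputable, a toy sketch can substitute the length-10 repetition language of Section 4.3; weighting each sequence by 2^{−K} (dropping the log-correction term) is a simplification.

```python
from itertools import product

def repetition_complexity(bits):
    """K under the toy repetition language of Section 4.3."""
    n = len(bits)
    return next(k for k in range(1, n + 1)
                if all(bits[i] == bits[i % k] for i in range(n)))

def next_bit_probs(prefix, length=10):
    """Solomonoff-style inference: weight every completed sequence by 2^{-K},
    then read off p(next bit | prefix) from the matching completions."""
    mass = {"0": 0.0, "1": 0.0}
    for tail in product("01", repeat=length - len(prefix)):
        seq = tuple(prefix) + tail
        mass[tail[0]] += 2.0 ** -repetition_complexity(seq)
    total = mass["0"] + mass["1"]
    return {b: m / total for b, m in mass.items()}

print(next_bit_probs("010101"))   # heavily favors '0', the period-2 completion
```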
Complexity in deep learning. Several works have related Kolmogorov complexity to neural networks. Pearlmutter & Rosenfeld (1990) argues that random initialization and noise in data increase the complexity of neural networks but that ensembling such models reduces complexity in expectation. Schmidhuber (1997) proposes another method for reducing Kolmogorov complexity by instead explicitly searching for simple neural networks and finds improvements in generalization on very small problems where such a search is computationally feasible. Another line of study proves that multi-layer perceptrons with boolean input features are biased towards low-entropy functions, namely ones which classify disproportionately many or few points into the same class (Mingard et al., 2019) or are insensitive to flips in the boolean features (De Palma et al., 2019).
Other notions of simplicity have also been proposed and applied to neural networks, for example involving the implicit bias of SGD (Damian et al., 2021); the connection between flat loss minima, wide-margin decision boundaries, and improved generalization (Huang et al., 2020; Izmailov et al., 2018); the tendency of models to rely on simple, compressible features (Geirhos et al., 2020); or the potential for overparameterized networks to be implicitly regularized, leading to capacity control (Neyshabur et al., 2014). The marginal likelihood, or the probability that a random sample from the prior generates the data, is a natural statement of Occam’s razor, the idea that simple models should be preferred (Lotfi et al., 2022a). Diffuse priors which spread mass out around the diverse high-complexity solutions are unlikely to generate observed data, so effective models admit a far higher marginal likelihood on naturally occurring labelings compared to random ones (Wilson & Izmailov, 2020).
B A KOLMOGOROV NO FREE LUNCH THEOREM PROOF
Rewriting Equation (2), we have
CE(p) ≥ (ln 2 / n) (K(Y|X) − K(p) − 2 log₂ K(p) − c).
Note that by simply counting all possible programs taking input X, there are fewer than 2^{k+1} labelings Y with K(Y|X) ≤ k. Note that there are C^n distinct labelings, from which we are drawing uniformly, so that

P(K(Y|X) > n log₂ C − m) = 1 − P(K(Y|X) ≤ n log₂ C − m)
  ≥ 1 − P(K(Y|X) ≤ ⌈n log₂ C⌉ − m)
  ≥ 1 − 2^{⌈n log₂ C⌉ − m + 1} / C^n
  ≥ 1 − 2^{2−m}.

Alternatively, with probability at least 1 − δ,

K(Y|X) > n log₂ C − log₂(1/δ) − 3.
Thus, with probability at least 1− δ, we have for every classifier p,
CE(p) ≥ ln C − (ln 2 / n) (K(p) + 2 log₂ K(p) + log₂(1/δ) + c).
C PAC-BAYES COMPRESSION EXPERIMENTAL DETAILS
For the OpenML tabular classification datasets, we preprocess them first by balancing the classes, subsampling all classes down to the number of examples of the least likely class. This way, when compressing the datasets, any result achieved is nontrivial, in contrast with a very class-imbalanced dataset. We closely follow the compression method of Lotfi et al. (2022b), including the small 9-layer convolutional architecture which they use to generate their bounds. When cramming the tabular data into this convnet, we combine numerical features with one-hot encoded categorical features and then pack these into the pixels of a 1-channel image, using however large an image as necessary to fit each of the different features inside.
With respect to the sizes of the random subspaces that we train the compressed models in, we consider 250, 500, 1000, and 2000. For tabular label compression, we employ a 2-hidden-layer MLP with hidden dimension k = 192, and we consider the same 250, 500, 1000, and 2000 values for the subspace dimension. We train for 80 epochs with 20 epochs of quantization at a batch size of 512 using Adam at lr = 3×10⁻⁴. For image classification label compression, we use the 9-layer convnet with subspace dimensions 2000, 3000, and 5000, and we train for 80 epochs using SGD at learning rate 0.1 and quantize for the remaining 20 epochs, at a batch size of 50. For calculating the length of the code for the model architecture and decompressor, we need only the implementation of the model, the arithmetic decoder, and the loading of the quantized values. Removing wasted bits, we minified the Python file, leading to a size of approximately 2.5 KB.
D GPT-3 EXPERIMENTAL DETAILS
To feed sequences into a model, we split up sequence elements into individual byte-pair encoding tokens corresponding to their decimal digits, and we place comma tokens between sequence elements as delimiters, also beginning every input with an <|endoftext|> token. We choose the byte-pair encoding of decimal digits with a space inserted before the digit, e.g. ‘ 0’, as this is known to enhance the ability of language models to perform arithmetic (Zelikman et al., 2022). For example, the sequence 10, 11 will be split up into [‘<|endoftext|>’, ‘ 1’, ‘ 0’, ‘,’, ‘ 1’, ‘ 1’], and each element of the list is tokenized individually. Then, the log-probability of a sequence is given by the sum of the log-probabilities corresponding to the correct decimal digits in their respective slots of the model’s output. Note that various sequences will contain different numbers of decimal digits, and the sequence’s log-probability will decrease with every token. Therefore, for a fair comparison, we limit all sequences to 30 decimal digit tokens and truncate there.
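A small sketch of this formatting step (the function name is ours, and we assume truncation may cut an element mid-way, which the text leaves unspecified):

def format_sequence(seq, max_digit_tokens=30):
    """Render an integer sequence as the token list fed to the model:
    an '<|endoftext|>' prefix, one space-prefixed decimal digit per token,
    and comma delimiters, truncated at 30 digit tokens."""
    tokens, n_digits = ["<|endoftext|>"], 0
    for i, x in enumerate(seq):
        if i > 0:
            tokens.append(",")
        for digit in str(x):
            if n_digits == max_digit_tokens:
                return tokens
            tokens.append(f" {digit}")
            n_digits += 1
    return tokens

print(format_sequence([10, 11]))
# ['<|endoftext|>', ' 1', ' 0', ',', ' 1', ' 1']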
E SEQUENCE GENERATION AND COMPLETION WITH RANDOMLY INITIALIZED LANGUAGE MODELS
For these experiments, we use Huggingface2 GPT-2 architectures and pre-trained checkpoints. In order to estimate the probabilities assigned by randomly initialized language models to each bitstring, we generate one million random sequences, ensuring that many instances of each bitstring are generated, as there are only 2^10 = 1024 bitstrings of length 10.
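One way such sampling could be implemented is sketched below; restricting generation to the two digit tokens and renormalizing is our assumption, and the actual protocol may differ:

import torch
from collections import Counter
from transformers import GPT2LMHeadModel, GPT2Config, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel(GPT2Config())          # randomly initialized weights
model.eval()
zero, one = tok.encode(" 0")[0], tok.encode(" 1")[0]

@torch.no_grad()
def sample_bitstring(length=10):
    ids = torch.tensor([[tok.bos_token_id]])   # '<|endoftext|>' prefix
    bits = []
    for _ in range(length):
        logits = model(ids).logits[0, -1]
        # renormalize over the ' 0' / ' 1' tokens only
        probs = torch.softmax(logits[[zero, one]], dim=0)
        bit = torch.multinomial(probs, 1).item()
        bits.append(str(bit))
        ids = torch.cat([ids, torch.tensor([[(zero, one)[bit]]])], dim=1)
    return "".join(bits)

# one million samples in the experiment (batched in practice)
counts = Counter(sample_bitstring() for _ in range(1_000_000))
probs = {s: n / 1_000_000 for s, n in counts.items()}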
We also include plots with other sizes of GPT-2 architectures in Figure 7 and Figure 8.
We also include the hypothesis test referenced in Section 4.3. For this experiment, we generate 100,000 length-100 sequences from randomly initialized GPT-2 variants, pre-trained GPT-2 variants, and a uniform distribution. We then perform a one-sided t-test on the null hypothesis that µ(K(S_GPT)) ≥ µ(K(S_U)), for both initialized and pre-trained models. Table 1 contains the resulting sample means, t-statistics, and p-values. In all cases, we reject the null hypothesis with very low p-values, indicating that language models do prefer to generate low-complexity sequences. Notably, pre-trained language models exhibit an increased simplicity bias, and bigger and better language models even more so.
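The test itself amounts to a one-sided two-sample t-test; a minimal sketch with placeholder complexity arrays (the real values come from the complexity estimator) is:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# placeholders for estimated complexities K(S) of the 100,000 sequences
k_gpt = rng.normal(40.0, 5.0, size=100_000)       # from a GPT-2 variant
k_uniform = rng.normal(60.0, 5.0, size=100_000)   # from the uniform baseline

# one-sided test of the null hypothesis mu(K(S_GPT)) >= mu(K(S_U))
t_stat, p_value = stats.ttest_ind(k_gpt, k_uniform, alternative="less")
print(t_stat, p_value)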
2https://huggingface.co/
F BIG AND SMALL TRAINING SETS
Polynomial regression. We choose three example target functions on which to perform regression: cos((3π/2)x), x^2, and −36x + 49x^5 − 14x^7 + x^10. Training data is randomly drawn from a uniform distribution over the unit interval, and we add noise to training labels from N(0, 0.1). In each case, for each dataset size, we average the mean squared error over 100 randomly sampled training sets. For Tikhonov-regularized polynomial regression on the cosine and degree-2 polynomial target functions, we use α = 0.01, and we use α = 0.001 for regression on the degree-10 polynomial target function.
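One run of this setup might look like the sketch below; we read N(0, 0.1) as a standard deviation of 0.1 and pick a degree-10 polynomial basis for illustration, neither of which the text pins down:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n = 30                                        # one of the dataset sizes swept
x = rng.uniform(0.0, 1.0, size=(n, 1))        # inputs from the unit interval
y = np.cos(1.5 * np.pi * x).ravel() + rng.normal(0.0, 0.1, size=n)

# Tikhonov (ridge) regularized polynomial regression with alpha = 0.01
model = make_pipeline(PolynomialFeatures(degree=10), Ridge(alpha=0.01))
model.fit(x, y)
mse = np.mean((model.predict(x) - y) ** 2)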
Image classification with neural networks. For ImageNet-trained models, we employ publicly available checkpoints from torchvision3. We train models on CIFAR-10 and CIFAR-100 for 200 epochs with initial learning rate 0.1 and cosine annealing, along with horizontal flip and random crop augmentations. We use SGD with momentum 0.9 and batches of size 128. All CIFAR images are rescaled to 224 × 224 so that we can use an identical model for ImageNet and CIFAR data. In order to learn the parameter c controlling the convex combination of models, we perform 10 epochs of training, where the models’ parameters are frozen, and we apply weight decay with coefficient 10−5. We learn the parameter c using SGD with momentum 0.9, batch size 128, initial learning rate 0.1, and cosine annealing.

1. What is the focus of the paper regarding inductive bias in neural networks?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of novelty and clarity?
3. Do you have any concerns regarding the interpretation of the results and their significance?
4. How does the reviewer assess the quality, clarity, originality, and reproducibility of the paper's content?
Summary Of The Paper
This article proposes compressibility (in the sense of low Kolmogorov complexity) as a key aspect of the inductive bias of neural networks. In particular, a Kolmogorov-complexity-based no free lunch theorem is formulated. Moreover, several experiments showing that both realistic datasets and modern neural networks prefer low-complexity data are presented.
Strengths And Weaknesses
At a high level, the main weakness of this article from my point of view is that I had an extremely hard time understanding what was new and why various takeaways/messages were interesting. In part this is because of my lack of background in complexity theory. In part I believe the article could have been improved by additional details in a variety of places:
In equation (1), what is the role of h_*?
In section 3.1 and elsewhere: why is it interesting to show that real data is compressible and so doesn’t have large Kolmogorov complexity? Isn’t that pretty obvious? If not, how should one interpret 393.2 MB vs 836.7 MB?
The takeaway “As long as a machine learning model fits the data better than random chance, it will form a lossless compression of the labels given a large enough dataset” is, as far as I understand, a completely general fact. Why is it phrased as a takeaway? Is there something novel in the present article about this point? And if it is such a generic fact, then why is it interesting?
The proof of Theorem 1 is almost trivial. Why is it an interesting result? Is this the first time an NFL theorem has been stated in terms of Kolmogorov complexity?
It appears that the risk bounds presented in this article are in the regime of fixed model size and increasing dataset size. In that setting I don’t believe that non-vacuous generalization bounds are new or surprising. Perhaps I’m missing the point? It seems significantly more relevant to modern DL to work non-parametrically and allow model and dataset size to grow. In that case, it is not obvious to me which of these bounds will ultimately be non-vacuous.
I found the takeaway “Takeaway: Language models, both pre-trained and randomly initialized, prefer to generate low-complexity sequences. As a result, we can use even such randomly initialized models to predict the next element in a sequence, as long as the sequence is low-complexity.” of Section 4.3 very confusing. Namely, I think the authors are claiming that a typical sequence predicted by a randomly initialized or pre-trained language model will have low complexity. However, the authors seem to be arguing the converse of this statement: that one can use such models to predict the next token in a low-complexity sequence. That seems like a much more ambitious claim.
Clarity, Quality, Novelty And Reproducibility
Quality: I had a lot of trouble assessing this since I found it hard to understand the main points.
Clarity: The prose was lucid, but I did not understand many of the logical and technical points.
Originality: I find it hard to assess this point. |
ICLR

Title
Bias Mimicking: A Simple Sampling Approach for Bias Mitigation
Abstract
Prior work has shown that Visual Recognition datasets frequently under-represent sensitive groups (e.g. Female) within a category (e.g. Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and sensitive attributes such as age, gender, or race. Most of the recent methods that address this problem require significant architectural changes or expensive hyper-parameter tuning. Alternatively, data re-sampling baselines from the class imbalance literature (e.g. Undersampling, Upweighting), which can often be implemented in a single line of code and typically have no hyperparameters, offer a cheaper and more efficient solution. However, we found that some of these baselines were missing from recent bias mitigation benchmarks. In this paper, we show that these simple methods are strikingly competitive with state-of-the-art bias mitigation methods on many datasets. Furthermore, we improve these methods by introducing a new class-conditioned sampling method: Bias Mimicking. In cases where the baseline dataset re-sampling methods do not perform well, Bias Mimicking effectively bridges the performance gap and improves the total averaged accuracy of under-represented subgroups by over 3% compared to prior work.
1 INTRODUCTION
Spurious predictive correlations have been frequently documented within the Deep Learning literature (Wang et al., 2020a; Zhao et al., 2021). These correlations can arise when most samples in class y (e.g. blonde hair) belong to a sensitive group b (e.g. female). Thus, the model might learn to use the signal from b to predict y. Mitigating this spurious correlation (Bias) involves decorrelating the model’s predictions of y from the dataset-sensitive group b. Previous research efforts have primarily focused on model-based solutions. These efforts can be mainly categorized into two directions: 1) methods that require significantly more model parameters during inference (Wang et al., 2020b), which harms model scalability as we increase the number of sensitive groups/target classes, and 2) methods that introduce additional loss functions and require expensive hyper-parameter tuning (Hong & Yang, 2021; Kim et al., 2019a; Ryu et al., 2017; Tartaglione et al., 2021).
Dataset re-sampling methods, popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018), present a simpler and cheaper alternative. Moreover, as illustrated in Figure 1(a), they can be easily extended to Bias Mitigation by considering the imbalance within the dataset subgroups rather than classes. Most common of these methods are Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Oversampling (Wang et al., 2020b). They mitigate class imbalance by altering the dataset distribution through dropping/repeating samples, respectively. Another similar solution is Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019), which levels each sample’s contribution to the loss function by appropriately weighting its loss value. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant portion of the dataset, which could harm models’ predictive capacity. Moreover, Upweighting can be unstable when used with stochastic gradient descent (An et al., 2021). Finally, models trained with Oversampling, as shown by Wang et al. (2020b), are likely to overfit due to being exposed to repetitive sample copies.
To address these problems, we propose Bias Mimicking (BM): a class-conditioned sampling method that mitigates the shortcomings of prior work. As shown in Figure 1(b), for each y ∈ Y , BM maintains a different version of the dataset target labels dy . Each dy preserves y samples while
Undersampling y′ ̸= y such that the bias within class y′ mimics that of y. Consequently, for each dy, Y is statistically independent of B, i.e. P_{d_y}(Y = y | B = b) = P_{d_y}(Y = y). A naive way of using each dy is to dedicate a different multi-class prediction head to each. However, this could present scalability issues as the number of classes/sensitive groups increases. Alternatively, we binarize each dy. Then, we train a different one-vs-all binary classifier on each dy. BM addresses the shortcomings of prior work’s sampling methods. For example, using every dy to train the model exposes it to the entire dataset, whereas Undersampling can discard significant numbers of samples. Our approach also does not use weights to scale the loss function; thus our method should avoid the stability issues with stochastic gradient descent suffered by Upweighting. Finally, since each binary predictor is not exposed to repetitive samples, our model is less prone to overfitting.
In addition to proposing Bias Mimicking, another contribution of our work is providing an extensive analysis on sampling methods for bias mitigation tasks. We found many sampling-based methods were notably missing in the comparisons used in prior work (Tartaglione et al., 2021; Wang et al., 2020b; Hong & Yang, 2021). Despite their shortcomings, we show that Undersampling and Upweighting are surprisingly competitive on many bias mitigation benchmarks. Therefore, this emphasizes these methods’ importance as an inexpensive first choice for mitigating Bias. However, for cases where these methods are not as effective, Bias Mimicking effectively bridges the performance gap and improves over prior work model-based methods. Finally, we thoroughly analyze our approach’s behavior through two experiments. First, it is unclear how sensitive our method is to the mimicking condition. Therefore, in Section 4.3, we simulate various scenarios where the Bias is not perfectly mimicked and note model performance. Second, we verify the importance of each dy to the method predictive performance in Section 4.4. Both experiments showcase the importance of our design in mitigating Bias.
Our contributions can be summarized as:
• We show that simple resampling methods can be competitive on many benchmarks when compared to expensive model-based state-of-the-art approaches.
• We introduce a novel resampling method, Bias Mimicking, that improves the average accuracy over under-represented subgroups by 3% over multiple datasets.
• We conduct an extensive empirical analysis of Bias Mimicking that details the method’s sensitivity to the mimicking condition and uncovers various insights about its behavior.
2 RELATED WORK
Documenting Spurious Correlations: Bias in Machine Learning can manifest in many ways. Examples include class imbalance Japkowicz & Stephen (2002), historical human biases Suresh & Guttag (2019), evaluation bias et al. (2018), and more. For a full review, we refer the reader to Mehrabi et al. (2021). In our work, we are interested in model bias that arises from spurious correlations. A spurious correlation results from under-representing a certain group of samples (e.g. samples with color red) within a certain class (e.g. planes). This leads the model to learn the false relationship between the class and the over-represented group. Prior work has documented several occurrences of this bias. For example, Singh et al. (2020); Hendrycks et al. (2021); Xiao et al. (2020); Li et al. (2020) showed that state-of-the-art object recognition models are biased toward backgrounds or textures associated with the object class. Agrawal et al. (2018); Clark et al. (2019) showed similar spurious correlations in VQA. Zhao et al. (2021) noted concerning correlations between captions and attributes like skin color in Image-Captioning datasets. Beyond uncovering these correlations within datasets, prior work like Wang et al. (2020b); Hong & Yang (2021) introduced synthetic bias datasets where they systematically assessed bias effect on model performance.
Model based solutions: In response to documentation of dataset spurious correlations, several model focused methods have been proposed (Ryu et al., 2017; Kim et al., 2019a; Wang et al., 2020b; Hong & Yang, 2021; Tartaglione et al., 2021). For example, Tartaglione et al. (2021) presents a metric learning-based method where model feature space is regularized against learning harmful correlations. Wang et al. (2020b) surveys several existing methods such as adversarial training, that randomizes the relationship between target classes and sensitive groups in the feature space. They also present a new method, domain-independence, where different prediction heads are allocated for each sensitive group. Most recently, Hong & Yang (2021) extended contrastive learning frameworks on self-supervised learning (Chen et al., 2020; He et al., 2020; Khosla et al., 2020) to mitigate bias. Our work complements these efforts by introducing a hyper-parameter-free re-sampling algorithm that improves over state-of-the-art performers.
Dataset based solutions In addition to model-based approaches, we can mitigate spurious correlations by fixing the training dataset distribution. Examples include Oversampling minority classes and Undersampling majority ones, which are popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018). However, as we note in our introduction, some of these methods have been missing in recent visual Bias Mitigation Benchmarks (Wang et al., 2020b; Tartaglione et al., 2021; Hong & Yang, 2021). Thus, we review these methods and describe their shortcomings in Section 3.1. Alternatively, other efforts attempt to fix the dataset distribution by introducing completely new samples. Examples include (Kärkkäinen & Joo, 2021) where they introduce a new dataset for face recognition that is balanced among several race groups, and Ramaswamy et al. (2020) where they used GANs to generate training data that balance the sizes of dataset subgroups. While a dataset-based approach, our work differs from these efforts as it does not generate or introduce new samples. Finally, also related to our work are sampling methods like REPAIR (Li & Vasconcelos, 2019) where a function is learned to prioritize specific samples and, thus, learn more robust representations. However, unlike our method, REPAIR does not make use of sensitive group labels and thus is not able to target specific spurious correlations.
3 SAMPLING FOR BIAS MITIGATION
In bias mitigation the goal is to remove the effect of spurious correlations to prevent a model from correlating its predictions with sensitive groups of samples. More formally, assume we have a dataset of image/target/sensitive-group triplets (X, Y, B). Define g_{y,b} = {(x_i, y_i, b_i) s.t. y_i = y, b_i = b}, i.e. the subgroup of samples that share the target class y and sensitive group b. Spurious correlations occur when the samples of a target class y (e.g. blonde hair) are over-represented by one sensitive group b (e.g. female) rather than distributed equally among the dataset sensitive groups. In other words, assuming P_D(X, Y, B) is the distribution of the training data, then P_D(B = b | Y = y) ≫ 1/|B|, where |B| denotes the number of sensitive groups. Consequently, the model may use the signal from B to predict Y. For example, if most blonde hair samples were female, the model might learn to predict blonde hair every time it sees a female sample. Thus, the goal of this task is to train the model such that P(Ŷ | X, B) = P(Ŷ | X), where Ŷ denotes model predictions.
Our work addresses methods that fix spurious correlations by ensuring statistical independence between Y and B on the dataset level, i.e. P_D(Y|B) = P_D(Y). As we note in our introduction, some of the solutions that fall under this approach are missing from prior work benchmarks. Therefore, we briefly review the missing methods in Section 3.1. Then, we introduce a new sampling method, Bias Mimicking, which addresses prior sampling methods’ shortcomings, in Section 3.2.
3.1 SIMPLE SAMPLING METHODS
As discussed in the introduction, the class imbalance literature is rich with dataset re-sampling solutions (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018; Chawla et al., 2002). These solutions address class imbalance by balancing the distribution of classes. Thus, they can be extended to Bias Mitigation by balancing subgroups instead of classes. Prior work in Visual Bias Mitigation has explored one of these solutions: Oversampling (Wang et al., 2020b). This approach mitigates dataset bias by replicating the samples of the underrepresented subgroups until all subgroups are balanced. However, Wang et al. (2020b) has demonstrated that Oversampling is not effective at mitigating bias, likely because the model sees repetitive sample copies, thus resulting in overfitting. In addition to Oversampling, Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) are other popular methods. Both methods, however, have not been benchmarked in recent Visual Bias Mitigation work (Tartaglione et al., 2021; Hong & Yang, 2021). We review these solutions below.
Undersampling drops samples from the majority classes until all classes are balanced. We can extend this solution to Bias Mitigation by dropping samples to balance dataset subgroups. More concretely, we subsample every subgroup in the dataset to match the size of the smallest subgroup, i.e. min_{y,b} |g_{y,b}|. However, a critical shortcoming of Undersampling is that it drops a significant portion of the dataset and thus could harm the model’s predictive capacity. This might explain its absence from recent bias mitigation benchmarks. We address this shortcoming in our proposed method, Bias Mimicking, in Section 3.2.
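For concreteness, a minimal sketch of this subgroup Undersampling (function and variable names are ours):

import numpy as np

def undersample(y, b, seed=0):
    """Return indices that subsample every subgroup g_{y,b} down to the
    size of the smallest subgroup, min_{y,b} |g_{y,b}|."""
    rng = np.random.default_rng(seed)
    groups = {}
    for i, key in enumerate(zip(y, b)):
        groups.setdefault(key, []).append(i)
    smallest = min(len(idx) for idx in groups.values())
    keep = np.concatenate([rng.choice(idx, size=smallest, replace=False)
                           for idx in groups.values()])
    return np.sort(keep)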
Upweighting Upweighting levels the contribution of different samples to the loss function by multiplying each sample’s loss value by the inverse of the sample’s class frequency. We can extend this process to Bias Mitigation by simply considering subgroups instead of classes. More concretely, assume model weights w, sample x, class y, and subgroup g_{y,b} where x ∈ g_{y,b}; then the model optimizes:
L = E_{x,y,g} [ (1 / p_{g_{y,b}}) l(x, y; w) ],
where p_{g_{y,b}} = |g_{y,b}| / Σ_{y′,b′} |g_{y′,b′}|,
computed over the training dataset. A key shortcoming of Upweighting is its instability when used with stochastic gradient descent (An et al., 2021). Indeed, we demonstrate this problem in our experiments where Upweighting does not work well on some datasets.
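A minimal sketch of this subgroup-weighted loss (names are ours):

import torch
import torch.nn.functional as F

def upweighted_loss(logits, y, group_freq):
    """group_freq[i] holds p_{g_{y,b}} for sample i, precomputed over the
    training set; each sample's loss is scaled by the inverse frequency."""
    per_sample = F.cross_entropy(logits, y, reduction="none")
    return (per_sample / group_freq).mean()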
3.2 BIAS MIMICKING
The goal of our method is to decorrelate a model’s predictions Ŷ from sensitive attributes B. Our approach is inspired by sampling methods, which enforce this independence on the dataset level (i.e. P_D(Y|B) = P_D(Y)). However, simple sampling methods like Oversampling, Undersampling, and Upweighting suffer from critical shortcomings, as outlined in Section 3.1. We address these shortcomings in our proposed sampling algorithm, Bias Mimicking, below.
Algorithm The key principle of our algorithm is the observation that if the bias toward sensitive group b were proportionally equal among all the target classes, then for every y ∈ Y, P(Y = y|B = b) = P(Y = y), i.e. y is statistically independent of b. More concretely:
Proposition 1. Assume target labels set Y = {0, 1, 2, ..., C} and sensitive group b ∈ B, if P (B = b|Y = i) = P (B = b|Y = j) ∀i, j ∈ {0, ..., C}, then P (Y = y|B = b) = P (Y = y) ∀y ∈ Y.
Refer to Appendix A for proof. Note that ensuring this proposition holds for every b ∈ B implies that P(Y|B) = P(Y); in other words, Y is independent of B, which in turn means that the resulting model’s predictions will not be strongly correlated with B. Refer to Section 4.3 for an empirical sensitivity analysis of this result. Note how Undersampling is a special case of this result, where indeed Y is independent of B.
We use this result to motivate a novel way of Undersampling the input distribution that ensures the model is exposed to every sample in the dataset while preventing a spurious correlation. To that end, for every class y, we create a different version of the dataset target labels. Denote each version as dy . Each dy preserves its respective class y samples while Undersampling every y′ ̸= y such that
P_{d_y}(B = b | Y = y) = P_{d_y}(B = b | Y = y′)   ∀ b ∈ B   (1)
As a result, each dy ensures the independence of Y from B. A naive way of using each dy would be to dedicate a multi-class prediction head to each. However, this could present scalability issues as the number of classes/sensitive groups increases. Alternatively, we binarize each dy and then use it to train a one-vs-all binary classifier BPy for each y. Each head is trained on image-target pairs from its corresponding distribution dy, as Figure 2 demonstrates. Consequently, since each dy preserves y samples, the model backbone sees the entire dataset through the different signals backpropagating from each BPy. Moreover, since the same bias is mimicked for every class in dy, each incoming signal from BPy does not backpropagate spurious correlations; thus, the correlation of Y with B should be minimized within the model’s learned feature representation.
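As an illustration, constructing each d_y might look like the sketch below; the released implementation may differ, and all names are ours:

import numpy as np

def mimic_labels(y, b, target, seed=0):
    """Build d_target: keep all samples of class `target`; subsample every
    other class y' so that P(B=g | Y=y') matches P(B=g | Y=target) for all
    groups g (Equation 1). Returns kept indices and binary labels."""
    rng = np.random.default_rng(seed)
    y, b = np.asarray(y), np.asarray(b)
    keep = list(np.flatnonzero(y == target))
    tgt = {g: np.sum((y == target) & (b == g)) for g in np.unique(b)}
    for c in np.unique(y):
        if c == target:
            continue
        idx = np.flatnonzero(y == c)
        # largest scale at which class c can mimic the target's group mix
        scale = min(np.sum(b[idx] == g) / n for g, n in tgt.items() if n > 0)
        for g, n in tgt.items():
            pool = idx[b[idx] == g]
            keep += list(rng.choice(pool, size=int(scale * n), replace=False))
    keep = np.sort(np.array(keep))
    return keep, (y[keep] == target).astype(int)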
Using the binary classifiers during inference is challenging since each was trained on a different distribution, and they are therefore uncalibrated with respect to each other. However, we know that Bias Mimicking minimizes the correlation between Y and B within the feature space. Thus, we exploit this fact by training a multi-class prediction head over the feature space while preserving the original dataset distribution. Note that we stop the gradient from flowing into the model backbone to ensure that the bias is not learned again. In our experiments, we found that this achieves state-of-the-art performance on many benchmarks. However, on some, we found that we could obtain a slight improvement in performance by Oversampling the distribution of the features used to train the multi-class prediction head. Oversampling, unlike when used to train the entire model, did not overfit in this setting because 1) the multi-class prediction head is a simple linear layer, which is less likely to overfit than an overparameterized model like a neural net, and 2) the features learned by our
method do not take gradients from the oversampled data, helping to ensure that the neural net cannot overfit to the repetitive samples. Refer to Appendix B for further analysis. Given these results, we use Oversampling to train the multi-class prediction head throughout our experiments.
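A minimal sketch of the stop-gradient multi-class head (a stand-in backbone; names and dimensions are ours):

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

backbone = resnet18(num_classes=512)   # stand-in for the trained feature extractor
head = nn.Linear(512, 10)              # multi-class prediction head

def head_loss(x, y):
    feats = backbone(x).detach()       # stop-gradient into the backbone
    # (x, y) batches are drawn from the oversampled distribution
    return F.cross_entropy(head(feats), y)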
Cost Analysis Unlike prior work (Hong & Yang, 2021; Tartaglione et al., 2021), Bias Mimicking involves no extra hyper-parameters; the debiasing is automatic by definition. Moreover, unlike prior work (Wang et al., 2020b), which trains a separate prediction head for every protected attribute (i.e. O(|B|) heads), Bias Mimicking uses only a single prediction head during inference (i.e. O(1)), discarding the attribute-specific prediction heads at test time. Therefore, our method is simpler and more efficient during inference. Finally, note that the additional target labels dy do not result in longer epochs; we make only one pass over each sample x during an epoch and apply its contribution to the relevant BPy(s) accordingly.
4 EXPERIMENTS
We report our method’s performance on two benchmarks: a binary classification benchmark in Section 4.1 and a multi-class classification benchmark in Section 4.2. In addition to reporting our method’s results, we expand both benchmarks, which mainly focus on model-based methods, by including the basic sampling methods outlined in Section 3, namely Undersampling, Upweighting, and Oversampling. Then, we follow up our results with two main experiments that analyze our method’s behavior. The first experiment, in Section 4.3, is a sensitivity analysis of our method to the mimicking condition. The second experiment, in Section 4.4, analyzes the contribution of each version of the target labels dy to model performance.
4.1 BINARY CLASSIFICATION BENCHMARK
Datasets and Metrics We compare methods using the CelebA dataset (Liu et al., 2015) and the UTKFace dataset (Zhang et al., 2017). Following prior work (Hong & Yang, 2021; Tartaglione et al., 2021), we train a binary classification model using CelebA, where BlondHair is the target attribute and Gender is the bias attribute. Note that prior work (Hong & Yang, 2021; Tartaglione et al., 2021) used the HeavyMakeUp attribute in their CelebA benchmark. However, during our experiments, we found serious problems with that benchmark; refer to Appendix C for details. Therefore, we skip
this benchmark in our work. For UTKFace, we follow Hong & Yang (2021) and do the binary classification with Race/Age as the sensitive attribute and Gender as the target attribute. Methods are evaluated in terms of Unbiased Accuracy (Hong & Yang, 2021) which measures the accuracy on a test set balanced among each subgroup, and Bias-Conflict (Hong & Yang, 2021) which measures the accuracy on the minority subgroups. Refer to Appendix D for more details.
Baselines We use the baselines reported in the prior work (Hong & Yang, 2021) benchmark. These methods either require additional model parameters or hyper-parameters. "Bias-Contrastive and Bias-Balanced Learning" (BC + BB) (Hong & Yang, 2021) uses a contrastive learning framework to mitigate bias. "Learning Not to Learn" (LNL) uses an additional prediction branch to minimize the mutual information between the model feature and the bias label. Domain-Independent (DI) (Wang et al., 2020b) uses an additional prediction head for each sensitive subgroup. EnD (Tartaglione et al., 2021) uses a regularizer that disentangles the features of samples with the same bias class. Furthermore, we expand the benchmark by reporting the performance of the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results in the binary classification benchmark in Table 1 indicate that our method (BM) maintains strong averaged predictive accuracy (UA) while significantly improving performance on the under-represented subgroups (BC) by > 3% when compared to prior model-based methods. Moreover, our method’s strong performance does not require additional expensive hyper-parameter tuning or model parameters, unlike prior methods. Therefore, our method is simpler and more efficient. Furthermore, note that our method performs the best among the sampling methods. More concretely, BM performs significantly better than Oversampling. This aligns with Wang et al. (2020b)’s observation that overparameterized models like neural nets tend to overfit when trained on oversampled distributions. Note that even though our method uses Oversampling to train part of the network, we do not see the same decline in performance. This is because, as discussed in Section 3.2, we only use the oversampled version of the dataset to train a simple linear layer over the learned debiased features. Undersampling demonstrates poor predictive performance on UTK-Face due to discarding a significant portion of the dataset, which is then reflected in the average performance. Surprisingly, however, the method performs quite well on CelebA. This is likely because the task (predicting hair color) is easy and does not require many samples. Thus, this emphasizes the method’s importance as a reasonable first step when sufficient data is available. Overall, however, the average performance of our method is significantly better. Finally, note that while Upweighting demonstrates competitive performance on CelebA, it lags behind significantly on UTK-Face. This is likely due to the method’s instability with respect to stochastic gradient descent (An et al., 2021). Thus, when comparing the method’s averaged performance to ours, we note that our method improves performance by over 1%.
4.2 MULTI-CLASS CLASSIFICATION BENCHMARK
Datasets and Metrics We use the bias-controlled CIFAR-S benchmark (Wang et al., 2020b). The benchmark synthetically introduces bias into the CIFAR-10 dataset (Krizhevsky, 2012) by converting a sub-sample of images from each target class to gray-scale. We use the version from Wang et al. (2020b) where half of the classes are biased toward color and the other half toward gray, and the dominant sensitive group in each target class represents 95% of the samples. Methods are evaluated using the accuracy on two versions of the test set, color and gray, as well as the mean of both. In addition to accuracy, methods are evaluated with the bias amplification metric (Wang et al., 2020b).
Baselines We use the baselines reported in the prior work (Wang et al., 2020b) benchmark. These methods either require additional model parameters or hyper-parameters. Adversarial Learning (Adv) with uniform confusion (Hoffman et al., 2015) and reversal projection (Zhang et al., 2018) introduces an adversarial loss that seeks to randomize the model’s feature representation of Y and B. Domain Discriminative training (Wang et al., 2020b) and Domain Independence (DI) (Wang et al., 2020b) both introduce a prediction head for each dataset subgroup; however, they use different methods for training and inference. Refer to Wang et al. (2020b) for more information. Furthermore, we expand the benchmark by including the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results Observe the results in Table 2. Note that our method (BM) maintains competitive performance with the prior state of the art (DI) while requiring no additional hyper-parameters or model parameters at inference time. Moreover, considering DI’s poor performance on the binary classification benchmark in Table 1, our method’s performance averaged across both benchmarks is significantly better. More notably, it is the only sampling method that maintains strong predictive performance while significantly reducing the bias. Upweighting’s weaker predictive performance here can be attributed to its instability with respect to stochastic gradient descent (An et al., 2021). While Undersampling reduces bias significantly compared to prior work, its predictive performance lags significantly.
4.3 HOW SENSITIVE IS THE MODEL PERFORMANCE TO THE MIMICKING CONDITION?
Bias Mimicking is effective at mitigating bias. However, it is not clear how sensitive the model’s learned spurious predictive behavior is to the Bias Mimicking condition (Equation 1). Therefore, in this section, we seek to answer this question. To that end, we test multiple scenarios where the bias is mimicked by a percentage value x ∈ [0, 100], such that 0% corresponds to no modification of the distribution and 100% corresponds to complete Bias Mimicking. Following this definition,
we run the experiment on the datasets and attributes introduced in the binary classification benchmark in Section 4.1 and evaluate performance using the metrics introduced there, namely Bias Conflict (BC) and Unbiased Accuracy (UA). Observe the results in Figure 3. Note that as the percentage of bias mimicked decreases, BC and UA decrease, as expected. This is because P(Y|B) ̸= P(Y), following Proposition 1. The best performance is achieved when x is indeed 100%, and thus P(Y|B) = P(Y). From this analysis, we conclude that the Bias Mimicking condition is critical for good performance.
4.4 HOW DOES EACH dy AFFECT MODEL PERFORMANCE?
Bias Mimicking takes in the original dataset distribution of target labels and produces a different version for each class y, denoted as dy, where class y samples are preserved and the bias of y is mimicked in the other classes. By using every dy, we expose the model to the entire dataset. In this section, we investigate the effect of each dy on performance. To that end, we perform an experiment using the binary classification tasks outlined in Section 4.1. For each task, we thus have two versions of the dataset target labels, d1 and d2. We compare the performance of three different models: model (1) trained on only d1, model (2) trained on only d2, and finally model (3) trained on both (d1, d2), the last being the setup used by our method. We use the Unbiased Accuracy metric (UA). We also break down the metric into two versions: UA1, where accuracy is averaged over class 1 subgroups, and UA2, where accuracy is averaged over class 2 subgroups. Note that, overall, the model trained on d1 performs better at predicting y1 but worse at predicting y2, as outlined by the results in Table 3. We note the same trend, in reverse, for the model trained on d2. This disparity in performance harms the Unbiased Accuracy. However, a model trained on both (d1, d2) balances both accuracies and achieves the best total Unbiased Accuracy (UA). These results emphasize the importance of each dy for good performance.
5 CONCLUSION
We first recognized that simple and hyper-parameter-free methods for bias mitigation like Undersampling and Upweighting were missing from recent benchmarks. This observation was especially relevant considering that state-of-the-art solutions require either expensive hyper-parameter tuning or architectural change. Therefore, we benchmarked these methods and concluded that some are surprisingly effective on many datasets. However, on some others, their performance significantly lagged behind model-based methods. Motivated by this observation, we introduced a novel sampling method: Bias Mimicking. The method retained the simplicity of sampling methods while improving over state-of-the-art model-based methods. Furthermore, we extensively analyzed the behavior of Bias Mimicking, which emphasized the importance of our design.
Future Work We demonstrate that dataset re-sampling methods are simple and effective tools in mitigating bias. However, we recognize that the explored bias scenarios in prior and our work are still limited. For example, current studies only consider mutually exclusive sensitive groups. Thus, a sample can belong to only one sensitive group at a time. How does relaxing this assumption, i.e. intersectionality, impact bias? Finally, the dataset re-sampling methods presented in this work are effective at mitigating bias when enough samples from all dataset subgroups are available. However, it is unclear how these methods could handle bias scenarios when one class is completely biased by one sensitive group. These are all questions that we hope future work could address.
Code of Ethics Statement Our work addresses a critical problem within the fairness literature: spurious predictive behavior by Deep Learning models. We measure this behavior by calculating the dataset’s subgroups’ predictive performance (accuracy). While this metric aligns with our goal of measuring and preventing spurious behavior, we emphasize that the metric is not exhaustive of other fairness concerns. We refer the reader to (Kleinberg et al., 2017) for a broader discussion of fairness metrics. Furthermore, while our proposed method aims to learn robust representations for underrepresented subgroups within a given dataset, we acknowledge that such representations could be misused in downstream applications that could raise ethical concerns (e.g. surveillance). Furthermore, two of the datasets used in our experiments involve facial attribute recognition tasks (e.g. hair color). Models trained on these datasets could be misused, yet they remain standard benchmarks for evaluating spurious correlations. We encourage future researchers and practitioners to use this technology with care and caution.
Reproducibility Statement We provide the source code for this work in the supplementary. The training and hyper-parameters to reproduce the results are outlined in Appendix D. Moreover, The results we report in this paper were averaged over 3 runs with different seeds, and standard deviations were reported to ensure statistical significance. Furthermore, all experiments were done on datasets that are publicly available online.
A PROPOSITIONS AND PROOFS
The central component of our proposed method is the insight that if the bias toward sensitive group b were proportionally equal among every y ∈ Y, then P(Y = y|B = b) = P(Y = y). We provide a proof of this proposition below:
Proposition 1. Assume target labels set Y = {0, 1, 2, ..., C} and sensitive group b ∈ B, if P (B = b|Y = i) = P (B = b|Y = j) ∀i, j ∈ {0, ..., C}, then P (Y = y|B = b) = P (Y = y) ∀y ∈ Y.
Proof. First, by the law of total probability:
P(B = b) = Σ_{y′∈Y} P(B = b | Y = y′) P(Y = y′).
Given our assumption of bias mimicking, we can write, for all y ∈ Y:
P(B = b) = P(B = b | Y = y) Σ_{y′∈Y} P(Y = y′) = P(B = b | Y = y).   (2)
From here, using Bayes’ rule and the result from (2), we can write, for all y ∈ Y:
P(Y = y | B = b) = P(B = b | Y = y) P(Y = y) / P(B = b) = P(B = b) P(Y = y) / P(B = b) = P(Y = y).
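A quick numerical sanity check of this proposition on a toy joint distribution (values chosen arbitrarily):

import numpy as np

# When P(B=b|Y=y) is identical for every class y, P(Y=y|B=b) reduces to P(Y=y).
p_b_given_y = np.array([0.7, 0.3])                  # shared across classes
p_y = np.array([0.2, 0.5, 0.3])
joint = p_y[:, None] * p_b_given_y[None, :]         # P(Y=y, B=b)
p_y_given_b = joint / joint.sum(axis=0, keepdims=True)
assert np.allclose(p_y_given_b, p_y[:, None])       # P(Y|B) == P(Y)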
B SAMPLING METHODS IMPACT ON MULTI-CLASS CLASSIFICATION HEAD
Bias Mimicking produces a binary version dy of the labels for each class y. Each dy preserves class y samples while undersampling each y′ such that the bias within y′ mimics y. A debiased feature representation is then learned by training a binary classifier for each dy . When the training is done, it is challenging to use the scores from each binary predictor for inference. This is because each predictor is trained on a different distribution of the data, so the predictors are uncalibrated with respect to each other. Therefore, to perform inference, we train a multi-class prediction head on top of the learned feature representations using the original dataset distribution, where we prevent the gradients from flowing into the feature space. Observe that this approach denoted by BM + Vanilla in Table 4 and Table 5 is effective at mitigating the bias when compared to vanilla model results. However, we note that we obtained a slight performance improvement when we used Oversampling mostly on the CelebA dataset in Table 4 and on the Cifar-s dataset in Table 5. Therefore, we use Oversampling to train the multi-class prediction head in our implementation.
Aside from Oversampling, note that the other sampling methods, namely Undersampling and Upweighting, could not demonstrate the same improvement in performance. Both were comparable to Oversampling on the binary benchmark in Table 4. However, both lagged significantly behind on the multi-class benchmark in Table 5.
C HEAVY MAKEUP BENCHMARK
Prior work (Hong & Yang, 2021) uses the Heavy Makeup binary attribute prediction task from CelebA (Liu et al., 2015) as a benchmark for bias mitigation, where Gender is the sensitive attribute. In this experiment, the Heavy Makeup attribute is biased toward the sensitive group Female. In our experiments, we found that this benchmark contains significant noise. We believe this noise stems from the fact that Heavy Makeup is a subjective attribute. Moreover, it is influenced by cultural elements, lighting conditions, and camera pose. Thus, we expect a fair amount of label noise, which we verify via a qualitative and quantitative analysis below.
Qualitative Analysis: We sample 5 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup (Figure 4). It is clear from the figure that there is no firm agreement about the definition of Heavy Makeup.
Quantitative Analysis: We sample 100 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup. We calculated the percentage of noisy images for each subgroup, i.e. images for which it is not clear whether they correctly belong to their subgroup. Observe the results in Table 6. Note that there is a significant amount of noise in each subgroup. Furthermore, the noise is amplified for the subgroups that involve the sensitive group Female. The noise is further amplified when the test set used in Hong & Yang (2021) is examined. The test set for Male-Heavy Makeup (an under-represented subgroup) contains only 9 samples. We could not visually determine whether 4 out of these 9 images fall under Heavy Makeup. Of the 5 remaining images, 3 are images of the same person from different angles. Therefore, given the noise in the training set, the small size of the under-represented group in the test set, and its noise, we conclude that results from this benchmark will be quite noisy. Therefore, we choose to skip it in our experiments.
D MODEL AND HYPER-PARAMETERS DETAILS
We test our method (Bias Mimicking) on two benchmarks. The first is the binary classification benchmark, which includes two datasets, CelebA (Liu et al., 2015) and UTK-Face (Zhang et al., 2017), as outlined in Section 4.1. We follow the same training procedure as prior work (Hong & Yang, 2021). For both datasets, we train a ResNet-18 (He et al., 2016) model, for a total of 10 epochs on CelebA and 20 epochs on UTK-Face. For both datasets, we use a learning rate of 1e−3 with the ADAM (Kingma & Ba, 2014) optimizer. We decay the learning rate following an exponential schedule at epochs 3 and 6 for CelebA and at epochs 7 and 14 for UTK-Face, with γ = 0.1 for both datasets. During training, we augment the input images with a horizontal flip.
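In PyTorch terms, this setup might be sketched as follows (MultiStepLR matches the milestone-style decay described; the authors’ exact implementation may differ):

import torch
from torchvision.models import resnet18

model = resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# decay the learning rate by gamma = 0.1 at the listed epochs (CelebA: 3 and 6)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[3, 6], gamma=0.1)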
The second benchmark is the multi-class classification benchmark that makes use of the CIFAR-S dataset (Wang et al., 2020b). Following prior work (Wang et al., 2020b), we train a ResNet-18 model for a total of 200 epochs. We train the model with Stochastic Gradient Descent (SGD) with learning rate 0.1 and momentum 0.9. We use an exponential decay scheduler at epochs 50, 100, and 200 with γ = 0.1. We augment the input images using a horizontal flip and a random crop.
Our method, as outlined in Section 3.2, trains a multi-class prediction head on top of the debiased feature space, where the gradients are stopped from flowing back into the main network. We train this layer with the same hyper-parameters and total number of epochs outlined for the main model, and reset the layer at each epoch.

1. What is the focus of the paper regarding visual recognition datasets?
2. What are the strengths of the proposed approach, particularly in its simplicity and effectiveness?
3. Do you have any concerns or questions about the role of the feature encoder in bias mimicking?
4. Can the bias mimicking idea be applied to re-weighting methods, and if not, why?
5. Are there any limitations to the method, such as requiring known bias in the dataset?
6. How does the reviewer assess the quality, clarity, novelty, and reproducibility of the paper's content?
Summary Of The Paper
This paper focused on the task of bias mitigation in visual recognition datasets, where there exist strong spurious correlations between class and a dataset-sensitive group. Previous works mainly focused on model-based solutions, which require additional parameters during inference or expensive hyper-parameter tuning for additional losses during training. To overcome these limitations, this work extended data re-sampling methods to bias mitigation tasks, and proposed a class-conditioned sampling method called Bias Mimicking (BM). BM creates a different version of dataset target labels where the bias within other classes mimics that of the target class. Besides, this paper provided an extensive analysis on sampling methods for bias mitigation tasks.
Strengths And Weaknesses
Strengths
+ Novel motivation. This paper focused on the task of bias mitigation in visual recognition datasets. While previous works mainly focused on model modification or loss design, this work proposed to extend data re-sampling strategies, which is more efficient and free of hyperparameters. The motivation is novel to me
+ Simple but effective method. The method is consistent with the motivation. The idea is straightforward and makes sense.
+ Analysis of re-sampling methods in bias mitigation. This paper further provide an analysis of re-sampling methods in bias mitigation, which is lacking in previous studies. This analysis helps to understand the role of re-sampling methods and may inspire future research in this direction.
+ Satisfying writing quality. The paper is well-organized and written. It is easy and straightforward to understand the core ideas and contributions.
Weaknesses & Questions
- The role of the feature encoder in bias mimicking. In Figure 2, the feature encoder is used for bias mimicking. It is unclear to me what the role of the feature encoder is in bias mimicking. Based on my understanding, the bias mimicking is based on known bias rather than features. How are features used to mimic the bias?
- Extension of re-weighting. This work focused on how to adapt re-sampling to bias mitigation tasks, and also compared their method with both re-sampling and re-weighting baselines. I am wondering whether the bias mimicking idea can be used in re-weighting methods, as one can create a similar version by assigning samples different weights. If not, what is the reason? Such a discussion would make the paper more self-contained and complete.
- Whether bias is known. There are two settings for bias mitigation according to whether the bias of the dataset is known or available. According to the description of the method, it seems that this method can only work when the bias is known, which might be a limitation of this method. If so, it would be great to include a detailed discussion of limitations. Please correct me if I misunderstood the idea.
Clarity, Quality, Novelty And Reproducibility
The quality, clarity and originality of the work are good. Please see Strengths for detailed comments.
ICLR | Title
Bias Mimicking: A Simple Sampling Approach for Bias Mitigation
Abstract
Prior work has shown that Visual Recognition datasets frequently under-represent sensitive groups (e.g. Female) within a category (e.g. Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and sensitive attributes such as age, gender, or race. Most of the recent methods that address this problem require significant architectural changes or expensive hyper-parameter tuning. Alternatively, data re-sampling baselines from the class imbalance literature (e.g. Undersampling, Upweighting), which can often be implemented in a single line of code and often have no hyperparameters, offer a cheaper and more efficient solution. However, we found that some of these baselines were missing from recent bias mitigation benchmarks. In this paper, we show that these simple methods are strikingly competitive with state-of-the-art bias mitigation methods on many datasets. Furthermore, we improve these methods by introducing a new class conditioned sampling method: Bias Mimicking. In cases where the baseline dataset re-sampling methods do not perform well, Bias Mimicking effectively bridges the performance gap and improves the total averaged accuracy of under-represented subgroups by over 3% compared to prior work.
1 INTRODUCTION
Spurious predictive correlations have been frequently documented within the Deep Learning literature (Wang et al., 2020a; Zhao et al., 2021). These correlations can arise when most samples in class y (e.g. blonde hair) belong to a sensitive group b (e.g. female). Thus, the model might learn to use the signal from b to predict y. Mitigating this spurious correlation (Bias) involves decorrelating the model’s predictions of y from the dataset-sensitive group b. Previous research efforts have primarily focused on model-based solutions. These efforts can be mainly categorized into two directions 1) methods that require significantly more model parameters during inference (Wang et al., 2020b), which harms model scalability as we increase the number of sensitive groups/target-classes. 2) methods that introduce additional loss functions and require expensive hyper-parameter tuning (Hong & Yang, 2021; Kim et al., 2019a; Ryu et al., 2017; Tartaglione et al., 2021).
Dataset re-sampling methods, popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018), present a simpler and cheaper alternative. Moreover, as illustrated in Figure 1(a), they can be easily extended to Bias Mitigation by considering the imbalance within the dataset subgroups rather than classes. Most common of these methods are Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Oversampling (Wang et al., 2020b). They mitigate class imbalance by altering the dataset distribution through dropping/repeating samples, respectively. Another similar solution is Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) which levels each sample contribution to the loss function by appropriately weightings its loss value. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant portion of the dataset, which could harm models’ predictive capacity. Moreover, Upweighting can be unstable when used with stochastic gradient descent (An et al., 2021). Finally, models trained with Oversampling, as shown by Wang et al. (2020b), are likely to overfit due to being exposed to repetitive sample copies.
To address these problems, we propose Bias Mimicking (BM): a class-conditioned sampling method that mitigates the shortcomings of prior work. As shown in Figure 1(b), for each y ∈ Y , BM maintains a different version of the dataset target labels dy . Each dy preserves y samples while
Undersampling y′ ̸= y such that the bias within class y′ mimics that of y. Consequently, for each dy , Y is statistically independent from B, i.e. Pdy (Y = y|B = b) = Pdy (Y = y). A naive way of using each dy is to dedicate a different multi-class prediction head for each. However, this could present scalability issues as the number of classes/sensitive-groups increases. Alternatively, we binarize each dy . Then, we train a different one-vs-all binary classifier on each dy . BM addresses the shortcomings of prior work’s sampling methods. For example, using every dy to train the model exposes it to the entire dataset, whereas Undersampling can discard significant numbers of samples. Our approach also does not use weights to scale the loss function. Thus our method should avoid stability issues using stochastic gradient descent suffered by Upweighting. Finally, since each binary predictor is not exposed to repetitive samples, our model is less prone to overfitting.
In addition to proposing Bias Mimicking, another contribution of our work is providing an extensive analysis on sampling methods for bias mitigation tasks. We found many sampling-based methods were notably missing in the comparisons used in prior work (Tartaglione et al., 2021; Wang et al., 2020b; Hong & Yang, 2021). Despite their shortcomings, we show that Undersampling and Upweighting are surprisingly competitive on many bias mitigation benchmarks. Therefore, this emphasizes these methods’ importance as an inexpensive first choice for mitigating Bias. However, for cases where these methods are not as effective, Bias Mimicking effectively bridges the performance gap and improves over prior work model-based methods. Finally, we thoroughly analyze our approach’s behavior through two experiments. First, it is unclear how sensitive our method is to the mimicking condition. Therefore, in Section 4.3, we simulate various scenarios where the Bias is not perfectly mimicked and note model performance. Second, we verify the importance of each dy to the method predictive performance in Section 4.4. Both experiments showcase the importance of our design in mitigating Bias.
Our contributions can be summarized as:
• We show that simple resampling methods are be competitive on many benchmarks when compared to expensive model-based state-of-the-art approaches. • We introduce a novel resampling method: Bias Mimicking that improves the average accuracy over under-represented subgroups by 3% over multiple datasets. • We conduct an extensive empirical analysis of Bias Mimicking that details the method’s sensitivity to the Mimicking condition and uncovers various insights about its behavior.
2 RELATED WORK
Documenting Spurious Correlations: Bias in Machine Learning can manifest in many ways. Examples include class imbalance Japkowicz & Stephen (2002), historical human biases Suresh & Guttag (2019), evaluation bias et al. (2018), and more. For a full review, we refer the reader to Mehrabi et al. (2021). In our work, we are interested in model bias that arises from spurious correlations. A spurious correlation results from under-representing a certain group of samples (e.g. samples with color red) within a certain class (e.g. planes). This leads the model to learn the false relationship between the class and the over-represented group. Prior work has documented several occurrences of this bias. For example, Singh et al. (2020); Hendrycks et al. (2021); Xiao et al. (2020); Li et al. (2020) showed that state-of-the-art object recognition models are biased toward backgrounds or textures associated with the object class. Agrawal et al. (2018); Clark et al. (2019) showed similar spurious correlations in VQA. Zhao et al. (2021) noted concerning correlations between captions and attributes like skin color in Image-Captioning datasets. Beyond uncovering these correlations within datasets, prior work like Wang et al. (2020b); Hong & Yang (2021) introduced synthetic bias datasets where they systematically assessed bias effect on model performance.
Model based solutions: In response to documentation of dataset spurious correlations, several model-focused methods have been proposed (Ryu et al., 2017; Kim et al., 2019a; Wang et al., 2020b; Hong & Yang, 2021; Tartaglione et al., 2021). For example, Tartaglione et al. (2021) presents a metric learning-based method where the model feature space is regularized against learning harmful correlations. Wang et al. (2020b) surveys several existing methods, such as adversarial training, which randomizes the relationship between target classes and sensitive groups in the feature space. They also present a new method, domain-independence, where different prediction heads are allocated for each sensitive group. Most recently, Hong & Yang (2021) extended contrastive learning frameworks from self-supervised learning (Chen et al., 2020; He et al., 2020; Khosla et al., 2020) to mitigate bias. Our work complements these efforts by introducing a hyper-parameter-free re-sampling algorithm that improves over state-of-the-art methods.
Dataset based solutions: In addition to model-based approaches, we can mitigate spurious correlations by fixing the training dataset distribution. Examples include Oversampling minority classes and Undersampling majority ones, which are popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018). However, as we note in our introduction, some of these methods have been missing from recent visual Bias Mitigation benchmarks (Wang et al., 2020b; Tartaglione et al., 2021; Hong & Yang, 2021). Thus, we review these methods and describe their shortcomings in Section 3.1. Alternatively, other efforts attempt to fix the dataset distribution by introducing completely new samples. Examples include Kärkkäinen & Joo (2021), who introduce a new dataset for face recognition that is balanced among several race groups, and Ramaswamy et al. (2020), who use GANs to generate training data that balance the sizes of dataset subgroups. While our work is also a dataset-based approach, it differs from these efforts in that it does not generate or introduce new samples. Finally, also related to our work are sampling methods like REPAIR (Li & Vasconcelos, 2019), where a function is learned to prioritize specific samples and, thus, learn more robust representations. However, unlike our method, REPAIR does not make use of sensitive group labels and is thus unable to target specific spurious correlations.
3 SAMPLING FOR BIAS MITIGATION
In bias mitigation, the goal is to remove the effect of spurious correlations and prevent a model from correlating its predictions with sensitive groups of samples. More formally, assume we have a dataset of image/target/sensitive-group triplets (X, Y, B). Define $g_{y,b} = \{(x_i, y_i, b_i) \text{ s.t. } y_i = y, b_i = b\}$, i.e. the subgroup of samples that share the target class y and sensitive group b. Spurious correlations occur when the samples of a target class y (e.g. blonde hair) are over-represented by one sensitive group b (e.g. female) rather than distributed equally among the dataset's sensitive groups. In other words, assuming $P_D(X, Y, B)$ is the distribution of the training data, then $P_D(B = b \mid Y = y) \gg \frac{1}{|B|}$, where |B| denotes the number of sensitive groups. Consequently, the model may use the signal from B to predict Y. For example, if most blonde hair samples were female, the model might learn to predict blonde hair every time it sees a female sample. Thus, the goal of this task is to train the model such that $P(\hat{Y} \mid X, B) = P(\hat{Y} \mid X)$, where $\hat{Y}$ denotes the model's predictions.
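To make this concrete, the following minimal sketch (illustrative code with hypothetical names, not part of our released implementation) estimates $P_D(B = b \mid Y = y)$ from per-sample labels, which exposes exactly the kind of skew described above:

```python
# Illustrative sketch (hypothetical helper, not the authors' code): estimate
# P_D(B = b | Y = y) from per-sample target labels and sensitive-group labels.
from collections import Counter

def subgroup_conditionals(targets, groups):
    """Return {y: {b: P(B = b | Y = y)}} estimated from the training data."""
    counts = Counter(zip(targets, groups))   # |g_{y,b}| for every subgroup
    class_totals = Counter(targets)          # number of samples per class y
    return {y: {b: counts[(y, b)] / class_totals[y] for b in set(groups)}
            for y in set(targets)}

# Toy example: class 1 ("blonde hair") is over-represented by group 0 ("female").
targets = [0] * 50 + [1] * 50
groups = [0] * 25 + [1] * 25 + [0] * 45 + [1] * 5
print(subgroup_conditionals(targets, groups))
# -> {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}, i.e. P(B=0|Y=1) >> 1/|B| = 0.5
```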
Our work addresses methods that fix spurious correlations by ensuring statistical independence between Y and B at the dataset level, i.e. PD(Y|B) = PD(Y). As we note in our introduction, some of the solutions that fall under this approach are missing from prior work benchmarks. Therefore, we briefly review the missing methods in Section 3.1. Then, we introduce a new sampling method, Bias Mimicking, which addresses the shortcomings of prior sampling methods in Section 3.2.
3.1 SIMPLE SAMPLING METHODS
As discussed in the introduction, the class imbalance literature is rich with dataset re-sampling solutions (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018; Chawla et al., 2002). These solutions address class imbalance by balancing the distribution of classes. Thus, they can be extended to Bias Mitigation by balancing subgroups instead of classes. Prior work in Visual Bias Mitigation has explored one of these solutions: Oversampling (Wang et al., 2020b). This approach mitigates dataset bias by replicating the samples of the underrepresented subgroups until all subgroups are balanced. However, Wang et al. (2020b) has demonstrated that Oversampling is not effective at mitigating bias, likely because the model sees repetitive sample copies, thus resulting in overfitting. In addition to Oversampling, Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) are other popular methods. Both methods, however, have not been benchmarked in recent Visual Bias Mitigation work (Tartaglione et al., 2021; Hong & Yang, 2021). We review these solutions below.
Undersampling drops samples from the majority classes until all classes are balanced. We can extend this solution to Bias Mitigation by dropping samples to balance dataset subgroups. More concretely, we subsample every subgroup in the dataset to match the size of the smallest subgroup, i.e. $\min_{y,b} |g_{y,b}|$. However, a critical shortcoming of Undersampling is that it drops a significant portion of the dataset and thus could harm the model's predictive capacity. This might explain its absence from recent bias mitigation benchmarks. We address this shortcoming in our proposed method, Bias Mimicking, in Section 3.2.
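A minimal sketch of this subgroup Undersampling (our illustration, not necessarily the released implementation; the seeded sampling-without-replacement choice is an assumption):

```python
# Sketch of subgroup Undersampling (illustrative, not the authors' code):
# every subgroup g_{y,b} is subsampled to the size of the smallest subgroup.
import random
from collections import defaultdict

def undersample_subgroups(samples, seed=0):
    """samples: iterable of (x, y, b) triplets. Returns a subgroup-balanced subset."""
    rng = random.Random(seed)
    subgroups = defaultdict(list)
    for x, y, b in samples:
        subgroups[(y, b)].append((x, y, b))
    n_min = min(len(g) for g in subgroups.values())   # min_{y,b} |g_{y,b}|
    balanced = []
    for g in subgroups.values():
        balanced.extend(rng.sample(g, n_min))         # drop the surplus samples
    return balanced
```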
Upweighting Upweighting levels the contribution of different samples to the loss function by multiplying each sample's loss value by the inverse of its class frequency. We can extend this process to Bias Mitigation by simply considering subgroups instead of classes. More concretely, assume model weights w, sample x, class y, and subgroup $g_{y,b}$ where $x \in g_{y,b}$; then the model optimizes:
$$L = \mathbb{E}_{x,y,g}\left[\frac{1}{p_{g_{y,b}}}\, l(x, y; w)\right], \quad \text{where} \quad p_{g_{y,b}} = \frac{|g_{y,b}|}{\sum_{y,b} |g_{y,b}|}$$
is computed over the training dataset. A key shortcoming of Upweighting is its instability when used with stochastic gradient descent (An et al., 2021). Indeed, we demonstrate this problem in our experiments, where Upweighting does not work well on some datasets.
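A PyTorch-style sketch of this subgroup Upweighting loss (assumed interface; the subgroup frequencies must be precomputed over the training set):

```python
# Sketch of subgroup Upweighting (illustrative; assumes the frequencies
# p_{g_{y,b}} were precomputed over the training set for every sample).
import torch
import torch.nn.functional as F

def upweighted_loss(logits, y, subgroup_freq):
    """subgroup_freq[i] = p_{g_{y_i, b_i}} for sample i (a float tensor)."""
    per_sample = F.cross_entropy(logits, y, reduction="none")
    return (per_sample / subgroup_freq).mean()
```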
3.2 BIAS MIMICKING
The goal of our method is to decorrelate a model's predictions Ŷ from sensitive attributes B. Our approach is inspired by sampling methods, which enforce this independence at the dataset level (i.e. PD(Y|B) = PD(Y)). However, simple sampling methods like Oversampling, Undersampling, and Upweighting suffer from critical shortcomings, as outlined in Section 3.1. We address these shortcomings in our proposed sampling algorithm, Bias Mimicking, below.
Algorithm The key principle of our algorithm is the observation that if the bias toward sensitive group b were proportionally equal among all the target classes, then for every y ∈ Y, P(Y = y|B = b) = P(Y = y), i.e. y is statistically independent from b. More concretely:
Proposition 1. Assume target label set Y = {0, 1, 2, ..., C} and sensitive group b ∈ B. If P(B = b|Y = i) = P(B = b|Y = j) ∀i, j ∈ {0, ..., C}, then P(Y = y|B = b) = P(Y = y) ∀y ∈ Y.
Refer to Appendix A for the proof. Note that ensuring this proposition holds for every b ∈ B implies that P(Y|B) = P(Y); in other words, Y is independent from B, which in turn means that the resulting model's predictions will not be strongly correlated with B. Refer to Section 4.3 for an empirical sensitivity analysis of this result. Note that Undersampling is a special case of this result, where indeed Y is independent of B.
We use this result to motivate a novel way of Undersampling the input distribution that ensures the model is exposed to every sample in the dataset while preventing a spurious correlation. To that end, for every class y, we create a different version of the dataset target labels. Denote each version as dy . Each dy preserves its respective class y samples while Undersampling every y′ ̸= y such that
$$P_{d_y}(B = b \mid Y = y) = P_{d_y}(B = b \mid Y = y') \quad \forall b \in B \qquad (1)$$
As a result, each dy ensures the independence of Y from B. A naive way of using each dy would be to dedicate a multi-class prediction head to each. However, this could present scalability issues as the number of classes/sensitive-groups increases. Alternatively, we binarize each dy and then use it to train a one-vs-all binary classifier BPy for each y. Each head is trained on image-target pairs from its corresponding distribution dy, as Figure 2 demonstrates. Consequently, since each dy preserves y samples, the model backbone sees the entire dataset through the different signals backpropagating from each BPy. Moreover, since the same bias is mimicked for every class in dy, each incoming signal from BPy does not backpropagate spurious correlations; thus, the correlation of Y with B should be minimized within the model's learned feature representation.
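The sampling itself reduces to choosing how many samples of each subgroup to keep in each dy. A minimal sketch of this construction (our reading of the algorithm; the helper name and the naive integer rounding are assumptions, not the released code):

```python
# Sketch of constructing d_{y*} (illustrative): class y* is kept intact; every
# other class y' is undersampled so that P(B = b | Y = y') matches
# P(B = b | Y = y*) for all b, up to integer rounding.
def mimic_counts(counts, y_star):
    """counts[(y, b)] = |g_{y,b}|. Returns {(y, b): n_kept} for d_{y*}."""
    classes = {y for y, _ in counts}
    groups = {b for _, b in counts}
    n_star = sum(counts.get((y_star, b), 0) for b in groups)
    ratios = {b: counts.get((y_star, b), 0) / n_star for b in groups}
    kept = {}
    for y in classes:
        if y == y_star:
            kept.update({(y, b): counts.get((y, b), 0) for b in groups})
            continue
        # Largest scale s such that s * ratios[b] fits inside |g_{y,b}| for all b.
        s = min(counts.get((y, b), 0) / ratios[b] for b in groups if ratios[b] > 0)
        kept.update({(y, b): int(s * ratios[b]) for b in groups})
    return kept

# Toy example: two classes with opposite skews.
counts = {(0, "f"): 45, (0, "m"): 5, (1, "f"): 10, (1, "m"): 40}
print(mimic_counts(counts, 0))
# -> class 0 kept intact; class 1 reduced to roughly a 9:1 f:m ratio.
```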
Using the binary classifiers during inference is challenging since each was trained on a different distribution, and they are therefore uncalibrated with respect to each other. However, we know that Bias Mimicking minimizes the correlation between Y and B within the feature space. Thus, we exploit this fact by training a multi-class prediction head over the feature space while preserving the original dataset distribution. Note that we stop the gradient from flowing into the model backbone to ensure that the bias is not learned again. In our experiments, we found that this achieves state-of-the-art performance on many benchmarks. However, on some, we found that we could obtain a slight improvement in performance by Oversampling the distribution of the features used to train the multi-class prediction head. Oversampling, unlike when used to train the entire model, did not overfit in this setting because 1) the multi-class prediction head is a simple linear layer, which is less likely to overfit than an overparameterized model like a neural net, and 2) the features learned by our method do not take gradients from the oversampled distribution, helping to ensure that the network cannot overfit to the repetitive samples. Refer to Appendix B for further analysis. Given these results, we use Oversampling to train the multi-class prediction head throughout our experiments.
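A sketch of this inference-head training (assumed setup): detaching the backbone features realizes the stop-gradient, so the debiased backbone is unaffected by the (optionally oversampled) head training.

```python
# Sketch of training the multi-class inference head (illustrative): gradients
# are stopped at the backbone so the bias cannot be re-learned by the features.
import torch
import torch.nn.functional as F

def head_loss(backbone, head, x, y):
    with torch.no_grad():          # stop-gradient: backbone stays debiased
        feats = backbone(x)
    return F.cross_entropy(head(feats), y)
```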
Cost Analysis Unlike prior work (Hong & Yang, 2021; Tartaglione et al., 2021), Bias Mimicking involves no extra hyper-parameters; the debiasing is automatic by definition. Moreover, unlike prior work (Wang et al., 2020b), which uses a separate prediction head for every protected attribute (i.e. O(|B|) heads), Bias Mimicking uses only a single prediction head during inference (i.e. O(1)), discarding the attribute-specific prediction heads at test time. Therefore, our method is simpler and more efficient during inference. Finally, note that the additional target labels dy do not result in longer epochs; we make only one pass over each sample x during an epoch and apply its contribution to the relevant BPy(s) accordingly.
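A hypothetical single-pass training step illustrating this bookkeeping (assumed structure, not the released code; keep_masks encodes which samples each dy retained):

```python
# Sketch of one Bias Mimicking training step (illustrative): each sample
# contributes only to the binary heads BP_c whose d_c retained it, so one
# pass over the batch serves every head at once.
import torch
import torch.nn.functional as F

def bias_mimicking_step(backbone, heads, x, y, keep_masks):
    """heads: dict {c: torch.nn.Linear(feat_dim, 1)} of one-vs-all heads BP_c.
    keep_masks: dict {c: BoolTensor}; keep_masks[c][i] is True iff sample i
    was retained in d_c by the mimicking undersampler."""
    feats = backbone(x)
    loss = feats.new_zeros(())
    for c, head in heads.items():
        m = keep_masks[c]
        if m.any():
            logits = head(feats[m]).squeeze(-1)
            targets = (y[m] == c).float()   # binarized d_c labels
            loss = loss + F.binary_cross_entropy_with_logits(logits, targets)
    return loss
```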
4 EXPERIMENTS
We report our method's performance on two benchmarks: a binary classification benchmark in Section 4.1 and a multi-class classification benchmark in Section 4.2. In addition to reporting our method's results, we expand both benchmarks, which mainly focus on model-based methods, by including the basic sampling methods outlined in Section 3, namely Undersampling, Upweighting, and Oversampling. Then, we follow up our results with two main experiments that analyze our method's behavior. The first experiment, in Section 4.3, is a sensitivity analysis of our method with respect to the mimicking condition. The second experiment, in Section 4.4, analyzes the contribution of each version of the target labels dy to model performance.
4.1 BINARY CLASSIFICATION BENCHMARK
Datasets and Metrics We compare methods using the CelebA dataset (Liu et al., 2015) and the UTKFace dataset (Zhang et al., 2017). Following prior work (Hong & Yang, 2021; Tartaglione et al., 2021), we train a binary classification model on CelebA where BlondHair is the target attribute and Gender is the bias attribute. Note that prior work (Hong & Yang, 2021; Tartaglione et al., 2021) used the HeavyMakeUp attribute in their CelebA benchmark. However, during our experiments, we found serious problems with this benchmark (refer to Appendix C for details); therefore, we skip it in our work. For UTKFace, we follow Hong & Yang (2021) and perform binary classification with Race/Age as the sensitive attribute and Gender as the target attribute. Methods are evaluated in terms of Unbiased Accuracy (Hong & Yang, 2021), which measures the accuracy on a test set balanced among the subgroups, and Bias-Conflict (Hong & Yang, 2021), which measures the accuracy on the minority subgroups. Refer to Appendix D for more details.
Baselines We use the baselines reported in the benchmark of prior work (Hong & Yang, 2021). These methods require either additional model parameters or hyper-parameters. "Bias-Contrastive and Bias-Balanced Learning" (BC + BB) (Hong & Yang, 2021) uses a contrastive learning framework to mitigate bias. "Learning Not to Learn" (LNL) uses an additional prediction branch to minimize the mutual information between the model features and the bias label. Domain-Independent (DI) (Wang et al., 2020b) uses an additional prediction head for each sensitive subgroup. EnD (Tartaglione et al., 2021) uses a regularizer that disentangles the features of samples with the same bias class. Furthermore, we expand the benchmark by reporting the performance of the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results The results of the binary classification benchmark in Table 1 indicate that our method (BM) maintains strong averaged predictive accuracy (UA) while significantly improving performance on the under-represented subgroups (BC) by > 3% when compared to prior model-based methods. Moreover, our method's strong performance does not require additional expensive hyper-parameter tuning or model parameters, unlike prior methods. Therefore, our method is simpler and more efficient. Furthermore, note that our method performs the best among the sampling methods. More concretely, BM performs significantly better than Oversampling. This aligns with the observation of Wang et al. (2020b) that overparameterized models like neural nets tend to overfit when trained on Oversampled distributions. Note that even though our method uses Oversampling to train part of the network, we do not see the same decline in performance. This is because, as discussed in Section 3.2, we only use the oversampled version of the dataset to train a simple linear layer over the learned debiased features. Undersampling demonstrates poor predictive performance on UTK-Face because it drops a significant portion of the dataset, which is then reflected in the average performance. Surprisingly, however, the method performs quite well on CelebA. This is likely because the task (predicting hair color) is easy and does not require many samples. This emphasizes the method's importance as a reasonable first step when sufficient data is available. Overall, however, the average performance of our method is significantly better. Finally, note that while Upweighting demonstrates competitive performance on CelebA, it lags significantly behind on UTK-Face. This is likely due to the method's instability with respect to stochastic gradient descent (An et al., 2021). Thus, when comparing the method's averaged performance to ours, we note that our method improves performance by over 1%.
4.2 MULTI-CLASS CLASSIFICATION BENCHMARK
Datasets and Metrics We use the bias-controlled CIFAR-S benchmark (Wang et al., 2020b). The benchmark synthetically introduces bias into the CIFAR-10 dataset (Krizhevsky, 2012) by converting a sub-sample of images from each target class to gray-scale. We use the version of Wang et al. (2020b) where half of the classes are biased toward color and the other half toward gray, and the dominant sensitive group in each target class represents 95% of its samples. Methods are evaluated using the accuracy on two versions of the test set, color and gray, as well as the mean of both. In addition to accuracy, methods are evaluated with the bias amplification metric (Wang et al., 2020b).
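A sketch of constructing such a CIFAR-S-style split (our illustration of the description above; the channel-mean grayscale conversion is a simplification of the benchmark's exact transform):

```python
# Sketch of a CIFAR-S-style biased split (illustrative): within each class,
# 95% of samples are assigned the class's dominant sensitive group (color/gray).
import numpy as np

def make_cifar_s(images, labels, color_biased_classes, skew=0.95, seed=0):
    """images: uint8 (N, H, W, 3); labels: (N,). Returns (images, group),
    where group[i] is 0 for color and 1 for gray."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    group = np.zeros(len(labels), dtype=np.int64)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        p_gray = (1 - skew) if c in color_biased_classes else skew
        gray_idx = rng.choice(idx, size=int(p_gray * len(idx)), replace=False)
        luma = images[gray_idx].mean(axis=-1, keepdims=True)  # naive grayscale
        images[gray_idx] = np.repeat(luma, 3, axis=-1).astype(np.uint8)
        group[gray_idx] = 1
    return images, group
```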
Baselines We use the baselines reported in the benchmark of prior work (Wang et al., 2020b). These methods require either additional model parameters or hyper-parameters. Adversarial Learning (Adv) w/ uniform confusion (Hoffman et al., 2015) and reversal projection (Zhang et al., 2018) introduces an adversarial loss that seeks to randomize the model's feature representation of Y and B. Domain Discriminative training (Wang et al., 2020b) and Domain Independence (DI) (Wang et al., 2020b) both introduce a prediction head for each dataset subgroup; however, they use different methods to train and perform inference. Refer to Wang et al. (2020b) for more information. Furthermore, we expand the benchmark by including the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results Observe the results in Table 2. Note that our method (BM) is able to maintain competitive performance relative to the prior state of the art (DI) while requiring no additional hyper-parameters or model parameters at inference time. Moreover, considering DI's poor performance on the Binary Classification benchmark in Table 1, our method's performance averaged across both benchmarks is significantly better. More notably, BM is the only sampling method that maintains strong predictive performance while significantly reducing the bias. Upweighting's weaker predictive performance here can be attributed to its instability with respect to stochastic gradient descent (An et al., 2021). While Undersampling reduces bias significantly compared to prior work, it lags significantly in predictive performance.
4.3 HOW SENSITIVE IS THE MODEL PERFORMANCE TO THE MIMICKING CONDITION?
Bias Mimicking is effective at mitigating bias. However, it is not clear how sensitive the model's learned spurious predictive behavior is to the bias mimicking condition (Equation 1). Therefore, in this section, we seek to answer this question. To that end, we test multiple scenarios where the bias is mimicked by a percentage value x ∈ [0, 100], such that 0% corresponds to no modification of the distribution and 100% corresponds to complete bias mimicking. Following this definition, we run the experiment on the datasets and attributes introduced in the Binary Classification Benchmark in Section 4.1 and evaluate performance using the metrics introduced there, namely Bias Conflict (BC) and Unbiased Accuracy (UA). Observe the results in Figure 3. Note that as the percentage of bias mimicked decreases, BC and UA decrease, as expected; this is because P(Y|B) ̸= P(Y) once the condition of Proposition 1 no longer holds. The best performance is achieved when x is 100% and thus P(Y|B) = P(Y). From this analysis, we conclude that the Bias Mimicking condition is critical for good performance.
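Our reading of the partial-mimicking control, as a sketch (the exact interpolation protocol is an assumption on our part):

```python
# Hypothetical sketch of mimicking the bias by x% (illustrative): kept subgroup
# counts interpolate between the unmodified counts (x = 0) and the fully
# mimicked counts (x = 100), e.g. those produced by a routine like mimic_counts.
def partial_mimic_counts(original, mimicked, x):
    """original, mimicked: dicts {(y, b): count}; x in [0, 100]."""
    t = x / 100.0
    return {k: round((1 - t) * original[k] + t * mimicked[k]) for k in original}
```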
4.4 HOW DOES EACH dy AFFECT MODEL PERFORMANCE?
Bias Mimicking takes the original dataset distribution of target labels and produces a different version for each class y, denoted dy, where class y samples are preserved and the bias of y is mimicked in the other classes. By using every dy, we expose the model to the entire dataset. In this section, we investigate the effect of each dy on performance. To that end, we perform an experiment using the binary classification tasks outlined in Section 4.1. For each task, we thus have two versions of the dataset target labels, d1 and d2. We compare the performance of three different models: model (1) trained only on d1, model (2) trained only on d2, and model (3) trained on both (d1, d2), the last being the configuration used by our method. We use the Unbiased Accuracy metric (UA), which we also break down into two versions: UA1, where accuracy is averaged over class 1 subgroups, and UA2, where accuracy is averaged over class 2 subgroups. As the results in Table 3 show, the model trained on d1 performs better at predicting y1 but worse at predicting y2; we note the same trend, in reverse, for the model trained on d2. This disparity in performance harms the Unbiased Accuracy. However, a model trained on (d1, d2) balances both accuracies and achieves the best total Unbiased Accuracy (UA). These results emphasize the importance of each dy for good performance.
5 CONCLUSION
We first recognized that simple and hyper-parameter-free methods for bias mitigation like Undersampling and Upweighting were missing from recent benchmarks. This observation was especially relevant considering that state-of-the-art solutions require either expensive hyper-parameter tuning or architectural change. Therefore, we benchmarked these methods and concluded that some are surprisingly effective on many datasets. However, on some others, their performance significantly lagged behind model-based methods. Motivated by this observation, we introduced a novel sampling method: Bias Mimicking. The method retained the simplicity of sampling methods while improving over state-of-the-art model-based methods. Furthermore, we extensively analyzed the behavior of Bias Mimicking, which emphasized the importance of our design.
Future Work We demonstrate that dataset re-sampling methods are simple and effective tools for mitigating bias. However, we recognize that the bias scenarios explored in prior work and ours are still limited. For example, current studies only consider mutually exclusive sensitive groups; thus, a sample can belong to only one sensitive group at a time. How does relaxing this assumption, i.e. intersectionality, impact bias? Finally, the dataset re-sampling methods presented in this work are effective at mitigating bias when enough samples from all dataset subgroups are available. However, it is unclear how these methods could handle bias scenarios where one class is completely dominated by one sensitive group. These are all questions that we hope future work can address.
Code of Ethics Statement Our work addresses a critical problem within the fairness literature: spurious predictive behavior by Deep Learning models. We measure this behavior by calculating the predictive performance (accuracy) on the dataset's subgroups. While this metric aligns with our goal of measuring and preventing spurious behavior, we emphasize that it is not exhaustive of other fairness concerns. We refer the reader to Kleinberg et al. (2017) for a broader discussion of fairness metrics. Furthermore, while our proposed method aims to learn robust representations for underrepresented subgroups within a given dataset, we acknowledge that such representations could be misused in downstream applications that raise ethical concerns (e.g. surveillance). In addition, two of the datasets used in our experiments involve facial attribute recognition tasks (e.g. hair color). Models trained on these datasets could be misused, yet they remain standard benchmarks for evaluating spurious correlations. We encourage future researchers and practitioners to use this technology with care and caution.
Reproducibility Statement We provide the source code for this work in the supplementary material. The training and hyper-parameters needed to reproduce the results are outlined in Appendix D. Moreover, the results we report in this paper were averaged over 3 runs with different seeds, and standard deviations were reported to ensure statistical significance. Furthermore, all experiments were done on datasets that are publicly available online.
A PROPOSITIONS AND PROOFS
The central component of our proposed method is the insight that if the bias toward sensitive group b were proportionally equal among every y ∈ Y, then P(Y = y|B = b) = P(Y = y). We provide a proof of this proposition below:
Proposition 1. Assume target label set Y = {0, 1, 2, ..., C} and sensitive group b ∈ B. If P(B = b|Y = i) = P(B = b|Y = j) ∀i, j ∈ {0, ..., C}, then P(Y = y|B = b) = P(Y = y) ∀y ∈ Y.
Proof. First, by the law of total probability:
$$P(B = b) = \sum_{y' \in Y} P(B = b \mid Y = y')\, P(Y = y')$$
Given our assumption of bias mimicking, we can write, $\forall y \in Y$:
$$P(B = b) = P(B = b \mid Y = y) \sum_{y' \in Y} P(Y = y') = P(B = b \mid Y = y) \qquad (2)$$
From here, using Bayes' rule and the result from (2), we can write, $\forall y \in Y$:
$$P(Y = y \mid B = b) = \frac{P(B = b \mid Y = y)\, P(Y = y)}{P(B = b)} = \frac{P(B = b)\, P(Y = y)}{P(B = b)} = P(Y = y)$$
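A quick numeric sanity check of this result (illustrative values, not from the paper's experiments):

```python
# Numeric sanity check of Proposition 1 (illustrative): when P(B = b | Y = y)
# is the same constant for every class y, Bayes' rule gives P(Y|B) = P(Y).
import numpy as np

p_y = np.array([0.2, 0.3, 0.5])           # P(Y = y)
p_b_given_y = np.array([0.7, 0.7, 0.7])   # P(B = b | Y = y), equal across y
p_b = (p_b_given_y * p_y).sum()           # law of total probability
p_y_given_b = p_b_given_y * p_y / p_b     # Bayes' rule
assert np.allclose(p_y_given_b, p_y)
```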
B SAMPLING METHODS IMPACT ON MULTI-CLASS CLASSIFICATION HEAD
Bias Mimicking produces a binary version dy of the labels for each class y. Each dy preserves class y samples while undersampling each y′ such that the bias within y′ mimics that of y. A debiased feature representation is then learned by training a binary classifier for each dy. Once training is done, it is challenging to use the scores from each binary predictor for inference, because each predictor is trained on a different distribution of the data, so the predictors are uncalibrated with respect to each other. Therefore, to perform inference, we train a multi-class prediction head on top of the learned feature representations using the original dataset distribution, where we prevent the gradients from flowing into the feature space. Observe that this approach, denoted BM + Vanilla in Table 4 and Table 5, is effective at mitigating the bias when compared to the vanilla model results. However, we note that we obtained a slight performance improvement when we used Oversampling, mostly on the CelebA dataset in Table 4 and on the CIFAR-S dataset in Table 5. Therefore, we use Oversampling to train the multi-class prediction head in our implementation.
Aside from Oversampling, note that the other sampling methods, namely Undersampling and Upweighting, could not demonstrate the same improvement in performance. Both were comparable to Oversampling on the binary benchmark in Table 4; however, both lagged significantly behind on the multi-class benchmark in Table 5.
C HEAVY MAKEUP BENCHMARK
Prior work Hong & Yang (2021) uses the Heavy Makeup binary attribute prediction task from CelebA (Liu et al., 2015) as a benchmark for bias mitigation, where Gender is the sensitive attribute. In this benchmark, the Heavy Makeup attribute is biased toward the sensitive group Female. In our experiments, we found that this benchmark contains significant noise. We believe this noise stems from the fact that Heavy Makeup is a subjective attribute; moreover, it is influenced by cultural elements, lighting conditions, and camera pose. Thus, we expect a fair amount of label noise, which we verify via the qualitative and quantitative analyses below.
Qualitative Analysis: We sample 5 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup (Fig 4). It is clear from the figure that there is no firm agreement about the definition of Heavy Makeup.
Quantitative Analysis: We sample 100 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup. We calculated the percentage of noisy images in each subgroup, i.e. images for which it is not clear whether they correctly belong to their subgroup. Observe the results in Table 6. Note that there is a significant amount of noise in each subgroup; furthermore, the noise is amplified for the subgroups that involve the sensitive group Female. The noise is further amplified when the test set used in Hong & Yang (2021) is examined: the test set for Male-Heavy Makeup (an under-represented subgroup) contains only 9 samples. We could not visually determine whether 4 out of these 9 images fall under Heavy Makeup, and of the 5 remaining images, 3 are images of the same person from different angles. Therefore, given the noise in the training set and the small size and noise of the under-represented group in the test set, we conclude that results from this benchmark would be quite noisy. Therefore, we choose to skip it in our experiments.
D MODEL AND HYPER-PARAMETERS DETAILS
We test our method (Bias Mimicking) on two benchmarks. The first is the binary classification benchmark, which includes two datasets, CelebA (Liu et al., 2015) and UTK-Face (Zhang et al., 2017), as outlined in Section 4.1. We follow the same training procedure as prior work (Hong & Yang, 2021). For both datasets, we train a ResNet-18 (He et al., 2016) model, for a total of 10 epochs on CelebA and 20 epochs on UTK-Face. For both datasets, we use a learning rate of 1e-3 with the Adam optimizer (Kingma & Ba, 2014). We decay the learning rate following an exponential schedule with γ = 0.1 at epochs 3 and 6 for CelebA and epochs 7 and 14 for UTK-Face. During training, we augment the input images with a horizontal flip.
The second is the multi-class classification benchmark, which uses the CIFAR-S dataset (Wang et al., 2020b). Following prior work (Wang et al., 2020b), we train a ResNet-18 model for a total of 200 epochs with Stochastic Gradient Descent (SGD), learning rate 0.1, and momentum 0.9. We use an exponential decay scheduler with γ = 0.1 at epochs 50, 100, and 200. We augment the input images using a horizontal flip and a random crop.
Our method, as outlined in Section 3.2, trains a multi-class prediction head on top of the debiased feature space, where the gradients are stopped from flowing back into the main network. We train this layer with the same hyper-parameters outlined for the main model and for the same total number of epochs, resetting the layer at each epoch.

1. What is the focus of the paper regarding reducing bias in classifier predictions?
2. What are the strengths and weaknesses of the proposed approach, particularly in its ability to apply to multi-class problems?
3. Do you have any concerns or questions regarding the presentation and explanation of the method, including the value range and obtainment of d y ?
4. How does the method differ from naive undersampling on binary classification datasets?
5. Would including more attributes on CelebA improve the experiment results?
6. Are there any important works on bias mitigation missing from the paper, such as Group DRO?
7. Are the additional binary classification heads considered additional model parameters?
8. Can the authors clarify the argument on page 5 regarding preserving y samples?
9. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This work proposes a resampling-based approach to reduce the bias of classifier predictions toward a sensitive attribute. The method, termed bias mimicking, first converts a K-way classification into K binary problems, and undersamples the data for each problem to achieve statistical independence between target label and bias attribute. A multi-way classification head is then trained on top of the frozen feature extractor to make the final prediction. Bias mimicking achieves state-of-the-art results on binary and multi-class datasets.
Strengths And Weaknesses
Pros
The overall idea of removing statistical dependency between target and bias attributes makes intuitive sense. I agree with the authors that data imbalance is the main source of model bias, and that more effort should be devoted towards fixing the bias in the training data.
Unlike most prior works on debiasing that focused on binary classification, this work can be applied to multi-class problems.
The proposed method produces similar results to strong debiasing baselines, while not requiring additional modules or hyperparameter tuning. Main results are provided with mean/std over multiple runs.
Cons
I feel that the presentation of bias mimicking (sec 3.2) needs more work. The current version lacks clarity, making the key idea of the work quite hard to comprehend.
dy is defined as “a different version of target label” for class y, without further explanation on its value range, or how it is obtained from y.
It is unclear what Pdy means in equation 1, or how the resampling should be performed to ensure the statistical independence.
Can the authors elaborate on how the method differs from naive undersampling on binary classification datasets?
While figure 2 helps in understanding the method at an abstract level, it would be ideal to include an algorithm that details each step of the training procedure.
On experiments:
On binary attributes, it would be ideal if more attributes on CelebA were included (e.g. following BPA). While some attributes like "heavy makeup" are subjective and noisy (as identified in this work), others such as "wearing lipstick" or "double chin" might be good options.
The paper is missing some important work on bias mitigation such as Group DRO, which optimizes a worst-group training loss combined with group-balanced resampling.
I wonder if the additional binary classification heads count as additional model parameters in table 1 and 2, or are the extra parameters negligible compared to baseline models?
Minor errors / typos
On page 5, I did not understand the argument "Consequently, since each dy preserves y samples...". Can the authors clarify its meaning?
Clarity, Quality, Novelty And Reproducibility
As discussed above, I am concerned about the clarity of the writing. Without fully understanding the mechanisms of the proposed bias mimicking approach, it is difficult to evaluate the importance of its contribution relative to prior methods based on resampling. On the brighter side, training code is provided which adds to the reproducibility of the work. |
ICLR | Title
Bias Mimicking: A Simple Sampling Approach for Bias Mitigation
Abstract
Prior work has shown that Visual Recognition datasets frequently under-represent sensitive groups (e.g. Female) within a category (e.g. Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and sensitive attributes such as age, gender, or race. Most of the recent methods that address this problem require significant architectural changes or expensive hyper-parameter tuning. Alternatively, data re-sampling baselines from the class imbalance literature (e.g. Undersampling, Upweighting), which can often be implemented in a single line of code and often have no hyperparameters, offer a cheaper and more efficient solution. However, we found that some of these baselines were missing from recent bias mitigation benchmarks. In this paper, we show that these simple methods are strikingly competitive with state-of-the-art bias mitigation methods on many datasets. Furthermore, we improve these methods by introducing a new class conditioned sampling method: Bias Mimicking. In cases where the baseline dataset re-sampling methods do not perform well, Bias Mimicking effectively bridges the performance gap and improves the total averaged accuracy of under-represented subgroups by over 3% compared to prior work.
1 INTRODUCTION
Spurious predictive correlations have been frequently documented within the Deep Learning literature (Wang et al., 2020a; Zhao et al., 2021). These correlations can arise when most samples in class y (e.g. blonde hair) belong to a sensitive group b (e.g. female). Thus, the model might learn to use the signal from b to predict y. Mitigating this spurious correlation (Bias) involves decorrelating the model’s predictions of y from the dataset-sensitive group b. Previous research efforts have primarily focused on model-based solutions. These efforts can be mainly categorized into two directions 1) methods that require significantly more model parameters during inference (Wang et al., 2020b), which harms model scalability as we increase the number of sensitive groups/target-classes. 2) methods that introduce additional loss functions and require expensive hyper-parameter tuning (Hong & Yang, 2021; Kim et al., 2019a; Ryu et al., 2017; Tartaglione et al., 2021).
Dataset re-sampling methods, popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018), present a simpler and cheaper alternative. Moreover, as illustrated in Figure 1(a), they can be easily extended to Bias Mitigation by considering the imbalance within the dataset subgroups rather than classes. Most common of these methods are Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Oversampling (Wang et al., 2020b). They mitigate class imbalance by altering the dataset distribution through dropping/repeating samples, respectively. Another similar solution is Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) which levels each sample contribution to the loss function by appropriately weightings its loss value. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant portion of the dataset, which could harm models’ predictive capacity. Moreover, Upweighting can be unstable when used with stochastic gradient descent (An et al., 2021). Finally, models trained with Oversampling, as shown by Wang et al. (2020b), are likely to overfit due to being exposed to repetitive sample copies.
To address these problems, we propose Bias Mimicking (BM): a class-conditioned sampling method that mitigates the shortcomings of prior work. As shown in Figure 1(b), for each y ∈ Y , BM maintains a different version of the dataset target labels dy . Each dy preserves y samples while
Undersampling y′ ̸= y such that the bias within class y′ mimics that of y. Consequently, for each dy , Y is statistically independent from B, i.e. Pdy (Y = y|B = b) = Pdy (Y = y). A naive way of using each dy is to dedicate a different multi-class prediction head for each. However, this could present scalability issues as the number of classes/sensitive-groups increases. Alternatively, we binarize each dy . Then, we train a different one-vs-all binary classifier on each dy . BM addresses the shortcomings of prior work’s sampling methods. For example, using every dy to train the model exposes it to the entire dataset, whereas Undersampling can discard significant numbers of samples. Our approach also does not use weights to scale the loss function. Thus our method should avoid stability issues using stochastic gradient descent suffered by Upweighting. Finally, since each binary predictor is not exposed to repetitive samples, our model is less prone to overfitting.
In addition to proposing Bias Mimicking, another contribution of our work is providing an extensive analysis on sampling methods for bias mitigation tasks. We found many sampling-based methods were notably missing in the comparisons used in prior work (Tartaglione et al., 2021; Wang et al., 2020b; Hong & Yang, 2021). Despite their shortcomings, we show that Undersampling and Upweighting are surprisingly competitive on many bias mitigation benchmarks. Therefore, this emphasizes these methods’ importance as an inexpensive first choice for mitigating Bias. However, for cases where these methods are not as effective, Bias Mimicking effectively bridges the performance gap and improves over prior work model-based methods. Finally, we thoroughly analyze our approach’s behavior through two experiments. First, it is unclear how sensitive our method is to the mimicking condition. Therefore, in Section 4.3, we simulate various scenarios where the Bias is not perfectly mimicked and note model performance. Second, we verify the importance of each dy to the method predictive performance in Section 4.4. Both experiments showcase the importance of our design in mitigating Bias.
Our contributions can be summarized as:
• We show that simple resampling methods are be competitive on many benchmarks when compared to expensive model-based state-of-the-art approaches. • We introduce a novel resampling method: Bias Mimicking that improves the average accuracy over under-represented subgroups by 3% over multiple datasets. • We conduct an extensive empirical analysis of Bias Mimicking that details the method’s sensitivity to the Mimicking condition and uncovers various insights about its behavior.
2 RELATED WORK
Documenting Spurious Correlations: Bias in Machine Learning can manifest in many ways. Examples include class imbalance Japkowicz & Stephen (2002), historical human biases Suresh & Guttag (2019), evaluation bias et al. (2018), and more. For a full review, we refer the reader to Mehrabi et al. (2021). In our work, we are interested in model bias that arises from spurious correlations. A spurious correlation results from under-representing a certain group of samples (e.g. samples with color red) within a certain class (e.g. planes). This leads the model to learn the false relationship between the class and the over-represented group. Prior work has documented several occurrences of this bias. For example, Singh et al. (2020); Hendrycks et al. (2021); Xiao et al. (2020); Li et al. (2020) showed that state-of-the-art object recognition models are biased toward backgrounds or textures associated with the object class. Agrawal et al. (2018); Clark et al. (2019) showed similar spurious correlations in VQA. Zhao et al. (2021) noted concerning correlations between captions and attributes like skin color in Image-Captioning datasets. Beyond uncovering these correlations within datasets, prior work like Wang et al. (2020b); Hong & Yang (2021) introduced synthetic bias datasets where they systematically assessed bias effect on model performance.
Model based solutions: In response to documentation of dataset spurious correlations, several model focused methods have been proposed (Ryu et al., 2017; Kim et al., 2019a; Wang et al., 2020b; Hong & Yang, 2021; Tartaglione et al., 2021). For example, Tartaglione et al. (2021) presents a metric learning-based method where model feature space is regularized against learning harmful correlations. Wang et al. (2020b) surveys several existing methods such as adversarial training, that randomizes the relationship between target classes and sensitive groups in the feature space. They also present a new method, domain-independence, where different prediction heads are allocated for each sensitive group. Most recently, Hong & Yang (2021) extended contrastive learning frameworks on self-supervised learning (Chen et al., 2020; He et al., 2020; Khosla et al., 2020) to mitigate bias. Our work complements these efforts by introducing a hyper-parameter-free re-sampling algorithm that improves over state-of-the-art performers.
Dataset based solutions In addition to model-based approaches, we can mitigate spurious correlations by fixing the training dataset distribution. Examples include Oversampling minority classes and Undersampling majority ones, which are popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018). However, as we note in our introduction, some of these methods have been missing in recent visual Bias Mitigation Benchmarks (Wang et al., 2020b; Tartaglione et al., 2021; Hong & Yang, 2021). Thus, we review these methods and describe their shortcomings in Section 3.1. Alternatively, other efforts attempt to fix the dataset distribution by introducing completely new samples. Examples include (Kärkkäinen & Joo, 2021) where they introduce a new dataset for face recognition that is balanced among several race groups, and Ramaswamy et al. (2020) where they used GANs to generate training data that balance the sizes of dataset subgroups. While a dataset-based approach, our work differs from these efforts as it does not generate or introduce new samples. Finally, also related to our work are sampling methods like REPAIR (Li & Vasconcelos, 2019) where a function is learned to prioritize specific samples and, thus, learn more robust representations. However, unlike our method, REPAIR does not make use of sensitive group labels and thus is not able to target specific spurious correlations.
3 SAMPLING FOR BIAS MITIGATION
In bias mitigation the goal is to remove the effect of spurious correlations to prevent a model from correlating its predictions with sensitive groups of samples. More formally, assume we have a dataset of image/target/sensitive-group triplets (X,Y,B). Define gy,b = {(xi, yi, bi) s.t yi = y, bi = b}, i.e. the subgroup of samples that share the target class y and sensitive group b. Spurious correlations occur when the samples of a target class y (e.g. blonde hair) are over-represented by one sensitive group b (e.g. female) rather than distributed equally among the dataset sensitive groups. In other words, assuming PD(X,Y,B) is the distribution of the training data, then PD(B = b, Y = y) >> 1 |B| where |B| denotes the number of sensitive groups. Consequently, the model may use the signal from B to predict Y . For example, if most blonde hair samples were female, the model might learn to predict blonde hair every time it sees a female sample. Thus, the goal of this task is to train the model such that P (Ŷ |X,B) = P (Ŷ |X) where Ŷ denotes model predictions.
Our work addresses methods that fix spurious correlations by ensuring statistical Independence between Y and B on the dataset level, i.e. PD(Y |B) = P (Y ). As we note in our introduction, some of the solutions that fall under this approach are missing from prior work benchmarks. Therefore, We briefly review the missing methods in Section 3.1. Then, we introduce a new sampling method: Bias Mimicking which addresses prior work sampling methods shortcomings in Section 3.2.
3.1 SIMPLE SAMPLING METHODS
As discussed in the introduction, the class imbalance literature is rich with dataset re-sampling solutions (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018; Chawla et al., 2002). These solutions address class imbalance by balancing the distribution of classes. Thus, they can be extended to Bias Mitigation by balancing subgroups instead of classes. Prior work in Visual Bias Mitigation has explored one of these solutions: Oversampling (Wang et al., 2020b). This approach mitigates dataset bias by replicating the samples of the underrepresented subgroups until all subgroups are balanced. However, Wang et al. (2020b) has demonstrated that Oversampling is not effective at mitigating bias, likely because the model sees repetitive sample copies, thus resulting in overfitting. In addition to Oversampling, Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) are other popular methods. Both methods, however, have not been benchmarked in recent Visual Bias Mitigation work (Tartaglione et al., 2021; Hong & Yang, 2021). We review these solutions below.
Undersampling drops samples from the majority classes until all classes are balanced. We can extend this solution to Bias Mitigation by dropping samples to balance dataset subgroups. More concretely, we subsample every subgroup in the dataset to match the size of the smallest subgroup, i.e. min
y,b |gy,b|. However, a critical shortcoming of Undersampling is that it drops a significant portion of the dataset and thus could harm the model’s predictive capacity. This might explain its absence from recent bias mitigation benchmarks. We address this shortcoming in our proposed method, Bias Mimicking, in Section 3.2.
Upweighting Upweighting levels the contribution different samples to the loss function by multiplying its loss value by the inverse of the sample’s class frequency. We can extend this process to Bias Mitigation by simply considering subgroups instead of classes. More concretely, assume model weights w, sample x, class y, and subgroup gy,b where x ∈ gy,b, then the model optimizes:
L = Ex,y,g [ 1 pgy,b l(x, y;w) ]
where pgy,b = |gy,b|∑ y,b |gy,b|
computed over the training dataset. A key shortcoming of Upweighting is its instability when used with stochastic gradient descent (An et al., 2021). Indeed, we demonstrate this problem in our experiments where Upweighting does not work well on some datasets.
3.2 BIAS MIMICKING
The goal in our method is to decorrelate a model’s predictions Ŷ from sensitive attributes B. Our approach is inspired by Sampling methods, which enforce this independence on the dataset level (i.e.PD(Y |B) = PD(Y )). However, simple sampling methods like Oversampling, Undersampling, and Upweighting suffer from critical shortcomings as outlined in Section 3.1. We address these shortcomings in our proposed sampling algorithm: Bias Mimicking below.
Algorithm The key principle of our algorithm is the observation that if bias toward sensitive group b was proportionally equal among all the target classes, then for every y ∈ Y , P (Y = y|B = b) = P (Y = y), i.e. y is statically independent from b. More concretely:
Proposition 1. Assume target labels set Y = {0, 1, 2, ..., C} and sensitive group b ∈ B, if P (B = b|Y = i) = P (B = b|Y = j) ∀i, j ∈ {0, ..., C}, then P (Y = y|B = b) = P (Y = y) ∀y ∈ Y.
Refer to Appendix A for proof. Note that ensuring this proposition holds for every b ∈ B implies that P (Y |B) = P (Y ), in other words Y is independant from B which in turns means that the resulting model’s predictions will not be strongly correlated with B. Refer to Section 4.3 for an empirical sensitivity analysis of this result. Note how under-sampling is a special case of this result where indeed Y is independent of B.
We use this result to motivate a novel way of Undersampling the input distribution that ensures the model is exposed to every sample in the dataset while preventing a spurious correlation. To that end, for every class y, we create a different version of the dataset target labels. Denote each version as dy . Each dy preserves its respective class y samples while Undersampling every y′ ̸= y such that
Pdy (B = b|Y = y) = Pdy (B = b|Y = y′) ∀b ∈ B (1)
As a result, each dy ensures the independence of Y from B. A naive way of using each dy would be to dedicate a multi-class prediction head for each. However, this could present scalability issues as the number of classes/sensitive-groups increases. Alternatively, We binarize each dy and then use it train a one-vs-all binary classifier BPy for each y. Each head is trained on image-target pairs from its corresponding distribution dy as Figure 2 demonstrates. Consequently, since each dy preserves y samples, the model backbone sees the entire dataset through the different signals backpropagating from each BPy . Moreover, since the same bias is mimicked for every class in dy , each incoming signal from BPy does not backpropogate spurious correlations, thus, Y correlation with B should be minimized within the model learned feature representation.
Using the binary classifiers during inference is challenging since each was trained on different distributions and are, therefore, uncalibrated compared to each other. However, we know that Bias Mimicking minimizes the correlation between Y and B within the feature space. Thus, we exploit this fact by training a multi-class prediction head over the feature space while preserving the original dataset distribution. Note that we stop the gradient from flowing into the model backbone to ensure that the bias is not learned again. In our experiments, we found that this achieves state-ofthe-art performance on many benchmarks. However, on some, we found that we could obtain a slight improvement in performance by Oversampling the distribution of the features used to train the multi-class prediction head. Oversampling, unlike when used to train the entire model, did not overfit in this setting because 1) the multi-class prediction head is a simple linear layer which is less likely to overfit than an overparameterized model like a neural net 2) the features learned by our
method do not take gradients from the oversampled approach, helping to ensure that the neural net can not overfit over the reptitive samples. Refer to Appendix B for further analysis. Given these results, we use Oversampling to train the multi-class prediction head throughout our experiments.
Cost Analysis Unlike prior work (Hong & Yang, 2021; Tartaglione et al., 2021), bias mimicking involves no extra hyper-parameters. The debiasing is automatic by definition. Moreover, unlike prior work (Wang et al., 2020b) where an that used each prediction head trained on every protected attribute (i.e. O(B)), bias mimicking uses only a single prediction head during inference (i.e. O(1)), discarding the attribute specific prediction heads at test time. Therefore, our method is simpler and more efficient during inference. Finally, note that the additional target labels dy do not result in longer epochs; we make one pass only over each sample x during an epoch and apply its contribution to the relevant BPy(s) accordingly.
4 EXPERIMENTS
We report our method performance on two benchmarks: a binary classification benchmark in Section 4.1 and a multi-class classification benchmark in Section 4.2. In addition to reporting our method results, we expand both benchmarks, which mainly focus on model-based methods, by including the basic sampling methods outlined in Section 3, namely Undersampling, Upweighting, and Oversampling. Then, we follow up our results with two main experiments that analyze our method’s behavior. The first experiment in Section 4.3 is a sensitivity analysis of our method to the mimicking condition. The second experiment in Section 4.4 analyzes the contribution of each version of the target labels dy to model performance.
4.1 BINARY CLASSIFICATION BENCHMARK
Datasets and MetricsWe compare methods using CelebA dataset (Liu et al., 2015) and UTKFace dataset (Zhang et al., 2017). Following prior work (Hong & Yang, 2021; Tartaglione et al., 2021), we train a binary classification model using CelebA where BlondHair is a target attribute and Gender is a bias attribute. Note that prior work (Hong & Yang, 2021; Tartaglione et al., 2021) used the HeavyMakeUp attribute in their CelebA benchmark. However, during our experiments, we found serious problems with the benchmark. Refer to Appendix C for model details. Therefore, we skip
this benchmark in our work. For UTKFace, we follow Hong & Yang (2021) and do the binary classification with Race/Age as the sensitive attribute and Gender as the target attribute. Methods are evaluated in terms of Unbiased Accuracy (Hong & Yang, 2021) which measures the accuracy on a test set balanced among each subgroup, and Bias-Conflict (Hong & Yang, 2021) which measures the accuracy on the minority subgroups. Refer to Appendix D for more details.
Baselines We use the baselines reported in prior work (Hong & Yang, 2021) benchmark. These methods either require additional model parameters or hyper-parameters. ”Bias-Contrastive and Bias-Balanced Learning” (BC + BB) (Hong & Yang, 2021) uses a contrastive learning framework to mitigate bias. ”Learning Not to Learn” (LNL) uses an additional prediction branch to minimize the mutual information between the model feature and the bias label. Domain-independent (DI) (Wang et al., 2020b) uses an additional prediction head for each sensitive subgroup. EnD (Tartaglione et al., 2021) uses a regularizer that disentangles the features of the same bias class samples. Furthermore, we expand the benchmark by reporting the performance of sampling methods outlined in Section 3. All reported methods are compared to a ”vanilla” model trained on the original dataset distribution with a Cross-Entropy loss.
Results The binary classification benchmark in Table 1 indicates that our method (BM) maintains strong averaged predictive accuracy (UA) while significantly improving performance on the under-represented subgroups (BC) by > 3% when compared to prior model-based methods. Moreover, our method's strong performance does not require additional expensive hyper-parameter tuning or model parameters, unlike prior methods. Therefore, our method is simpler and more efficient. Furthermore, note that our method performs best among the sampling methods. More concretely, BM performs significantly better than Oversampling. This aligns with the observation of Wang et al. (2020b) that overparameterized models like neural nets tend to overfit when trained on oversampled distributions. Note that even though our method uses Oversampling to train part of the network, we do not see the same decline in performance. This is because, as discussed in Section 3.2, we only use the oversampled version of the dataset to train a simple linear layer over the learned debiased features. Undersampling demonstrates poor predictive performance on UTK-Face because it discards a significant portion of the dataset, which is reflected in the average performance. Surprisingly, however, the method performs quite well on CelebA. This is likely because the task (predicting hair color) is easy and does not require many samples. Thus, this emphasizes the method's importance as a reasonable first step when sufficient data is available. Overall, however, the average performance of our method is significantly better. Finally, note that while Upweighting demonstrates competitive performance on CelebA, it lags behind significantly on UTK-Face. This is likely due to the method's instability with respect to stochastic gradient descent (An et al., 2021). Thus, when comparing the method's averaged performance to ours, we note that our method improves performance by over 1%.
4.2 MULTI-CLASS CLASSIFICATION BENCHMARK
Datasets and Metrics We use the bias-controlled CIFAR-S benchmark (Wang et al., 2020b). The benchmark synthetically introduces bias into the CIFAR-10 dataset (Krizhevsky, 2012) by converting a sub-sample of images from each target class to gray-scale. We use the version of Wang et al. (2020b) where half of the classes are biased toward color and the other half toward gray, and the dominant sensitive group in each target class represents 95% of the samples. Methods are evaluated using the accuracy on two versions of the test set, color and gray, as well as the mean of both. In addition to accuracy, methods are evaluated with the bias amplification metric (Wang et al., 2020b).
Baselines We use the baselines reported in the prior work (Wang et al., 2020b) benchmark. These methods require either additional model parameters or additional hyper-parameters. Adversarial Learning (Adv) with uniform confusion (Hoffman et al., 2015) and reversal projection (Zhang et al., 2018) introduces an adversarial loss that seeks to randomize the model's feature representation with respect to Y and B. Domain Discriminative training (Wang et al., 2020b) and Domain Independence (DI) (Wang et al., 2020b) both introduce a prediction head for each dataset subgroup, but they use different methods to train and perform inference; refer to Wang et al. (2020b) for more information. Furthermore, we expand the benchmark by including the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results Observe the results in Table 2. Note that our method (BM) maintains competitive performance with the prior state of the art (DI) while requiring no additional hyper-parameters or model parameters at inference time. Moreover, considering DI's poor performance on the binary classification benchmark in Table 1, our method's performance averaged across both benchmarks is significantly better. More notably, it is the only sampling method that both maintains strong predictive performance and significantly reduces the bias. Upweighting's weaker predictive performance here can be attributed to its instability with respect to stochastic gradient descent (An et al., 2021). While Undersampling reduces bias significantly compared to prior work, it lags significantly in predictive performance.
4.3 HOW SENSITIVE IS THE MODEL PERFORMANCE TO THE MIMICKING CONDITION?
Bias Mimicking is effective at mitigating bias. However, it is not clear how sensitive the model's learned spurious predictive behavior is to the bias mimicking condition (Equation 1). Therefore, in this section, we seek to answer this question. To that end, we test multiple scenarios where the bias is mimicked by a percentage value x ∈ [0, 100], such that 0% corresponds to no modification of the distribution and 100% corresponds to complete bias mimicking. Following this definition,
we run the experiment on the datasets and attributes introduced by the binary classification benchmark in Section 4.1 and evaluate performance using the metrics introduced there, namely Bias-Conflict (BC) and Unbiased Accuracy (UA). Observe the results in Figure 3. Note that as the percentage of bias mimicked decreases, BC and UA decrease, as expected. This is because P(Y | B) ≠ P(Y) when the condition of Proposition 1 is not met. The best performance is achieved when x is 100% and thus P(Y | B) = P(Y). From this analysis, we conclude that the bias mimicking condition is critical for good performance.
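The paper does not spell out how partial mimicking is implemented; one plausible interpretation, sketched below, interpolates each class's subsample count between the original and the fully mimicked value (the helper name is ours).

```python
def partial_mimic_count(n_original, n_full_mimic, x):
    """Number of samples to keep from a subgroup when the bias is
    mimicked only x percent of the way (x = 0: original distribution,
    x = 100: fully mimicked, matching Equation 1)."""
    return int(round(n_original + (x / 100.0) * (n_full_mimic - n_original)))
```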
4.4 HOW DOES EACH d_y AFFECT MODEL PERFORMANCE?
Bias Mimicking takes in the original dataset distribution of target labels and produces a different version for each class y, denoted d_y, where class y samples are preserved and the bias of y is mimicked in the other classes. By using every d_y, we expose the model to the entire dataset. In this section, we investigate the effect of each d_y on performance. To that end, we perform an experiment using the binary classification tasks outlined in Section 4.1. For each task, we thus have two versions of the dataset target labels, d_1 and d_2. We compare the performance of three different models: model (1) trained only on d_1, model (2) trained only on d_2, and model (3) trained on both (d_1, d_2); the last configuration is the one used by our method. We use the Unbiased Accuracy metric (UA). We also break the metric down into two versions: UA_1, where accuracy is averaged over class 1 subgroups, and UA_2, where accuracy is averaged over class 2 subgroups. Note that, overall, the model trained on d_1 performs better at predicting y_1 but worse at predicting y_2, as outlined by the results in Table 3. We note the same trend, in reverse, for the model trained on d_2. This disparity in performance harms the Unbiased Accuracy. However, a model trained on (d_1, d_2) balances both accuracies and achieves the best total Unbiased Accuracy (UA). These results emphasize the importance of each d_y for good performance.
5 CONCLUSION
We first recognized that simple and hyper-parameter-free methods for bias mitigation like Undersampling and Upweighting were missing from recent benchmarks. This observation was especially relevant considering that state-of-the-art solutions require either expensive hyper-parameter tuning or architectural change. Therefore, we benchmarked these methods and concluded that some are surprisingly effective on many datasets. However, on some others, their performance significantly lagged behind model-based methods. Motivated by this observation, we introduced a novel sampling method: Bias Mimicking. The method retained the simplicity of sampling methods while improving over state-of-the-art model-based methods. Furthermore, we extensively analyzed the behavior of Bias Mimicking, which emphasized the importance of our design.
Future Work We demonstrate that dataset re-sampling methods are simple and effective tools for mitigating bias. However, we recognize that the bias scenarios explored in prior work and ours are still limited. For example, current studies only consider mutually exclusive sensitive groups; thus, a sample can belong to only one sensitive group at a time. How does relaxing this assumption, i.e. intersectionality, impact bias? Finally, the dataset re-sampling methods presented in this work are effective at mitigating bias when enough samples from all dataset subgroups are available. However, it is unclear how these methods would handle bias scenarios where one class is completely dominated by one sensitive group. These are all questions that we hope future work will address.
Code of Ethics Statement Our work addresses a critical problem within the fairness literature: spurious predictive behavior by deep learning models. We measure this behavior by calculating the predictive performance (accuracy) on the dataset's subgroups. While this metric aligns with our goal of measuring and preventing spurious behavior, we emphasize that it is not exhaustive of other fairness concerns. We refer the reader to Kleinberg et al. (2017) for a broader discussion of fairness metrics. Furthermore, while our proposed method aims to learn robust representations for under-represented subgroups within a given dataset, we acknowledge that such representations could be misused in downstream applications that raise ethical concerns (e.g. surveillance). Additionally, two of the datasets used in our experiments involve facial attribute recognition tasks (e.g. hair color). Models trained on these datasets could be misused, yet they remain standard benchmarks for evaluating spurious correlations. We encourage future researchers and practitioners to use this technology with care and caution.
Reproducibility Statement We provide the source code for this work in the supplementary material. The training and hyper-parameter details needed to reproduce the results are outlined in Appendix D. Moreover, the results we report in this paper were averaged over 3 runs with different seeds, and standard deviations are reported to ensure statistical significance. Furthermore, all experiments were done on datasets that are publicly available online.
A PROPOSITIONS AND PROOFS
The central component of our proposed method is the insight that if the bias toward sensitive group b is proportionally equal among every y ∈ Y, then P(Y = y | B = b) = P(Y = y). We provide a proof of this proposition below:
Proposition 1. Assume a target label set Y = {0, 1, 2, ..., C} and a sensitive group b ∈ B. If P(B = b | Y = i) = P(B = b | Y = j) ∀ i, j ∈ {0, ..., C}, then P(Y = y | B = b) = P(Y = y) ∀ y ∈ Y.
Proof. First, by the law of total probability: P(B = b) = Σ_{y′ ∈ Y} P(B = b | Y = y′) P(Y = y′).
Given our assumption of bias mimicking, we can write, ∀ y ∈ Y: P(B = b) = P(B = b | Y = y) Σ_{y′ ∈ Y} P(Y = y′) = P(B = b | Y = y). (2)
From here, using Bayes' rule and the result from (2), we can write, ∀ y ∈ Y:
P(Y = y | B = b) = P(B = b | Y = y) P(Y = y) / P(B = b) = P(B = b) P(Y = y) / P(B = b) = P(Y = y).
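As a quick numerical sanity check of this result (our own illustration, not part of the paper), the snippet below builds a joint count table in which P(B | Y) is identical across classes and verifies that every column of P(Y | B) equals P(Y):

```python
import numpy as np

# Joint counts n[y, b]; P(B | Y) is 0.9/0.1 for BOTH classes (mimicked).
n = np.array([[90., 10.],
              [45.,  5.]])
p_joint = n / n.sum()          # P(Y = y, B = b)
p_y = p_joint.sum(axis=1)      # P(Y = y)
p_b = p_joint.sum(axis=0)      # P(B = b)
p_y_given_b = p_joint / p_b    # column j holds P(Y | B = b_j)

# Every column of P(Y | B) matches P(Y), as Proposition 1 states.
assert np.allclose(p_y_given_b, p_y[:, None])
```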
B SAMPLING METHODS IMPACT ON MULTI-CLASS CLASSIFICATION HEAD
Bias Mimicking produces a binary version d_y of the labels for each class y. Each d_y preserves class y samples while undersampling each y′ such that the bias within y′ mimics that of y. A debiased feature representation is then learned by training a binary classifier on each d_y. When training is done, it is challenging to use the scores from each binary predictor for inference, because each predictor is trained on a different distribution of the data, so the predictors are uncalibrated with respect to each other. Therefore, to perform inference, we train a multi-class prediction head on top of the learned feature representations using the original dataset distribution, while preventing the gradients from flowing into the feature space. Observe that this approach, denoted BM + Vanilla in Table 4 and Table 5, is effective at mitigating the bias when compared to the vanilla model results. However, we note that we obtained a slight performance improvement when we used Oversampling, mostly on the CelebA dataset in Table 4 and on the CIFAR-S dataset in Table 5. Therefore, we use Oversampling to train the multi-class prediction head in our implementation.
Aside from Oversampling, note that the other sampling methods, namely Undersampling and Upweighting, could not demonstrate the same improvement in performance. Both were comparable to Oversampling on the binary benchmark in Table 4; however, both lagged significantly behind on the multi-class benchmark in Table 5.
C HEAVY MAKEUP BENCHMARK
Prior work (Hong & Yang, 2021) uses the Heavy Makeup binary attribute prediction task from CelebA (Liu et al., 2015) as a benchmark for bias mitigation, where Gender is the sensitive attribute. In this setting, the Heavy Makeup attribute is biased toward the sensitive group Female. In our experiments, we found that this benchmark contains significant noise. We believe this noise stems from the fact that Heavy Makeup is a subjective attribute: it is influenced by cultural elements, lighting conditions, and camera pose. Thus, we expect a fair amount of label noise, which we verify via the qualitative and quantitative analyses below.
Qualitative Analysis: We sample 5 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup (Fig 4). It is clear from the figure that there is no firm agreement about the definition of Heavy Makeup.
Quantitative Analysis: We sample 100 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup. We calculate the percentage of noisy images in each subgroup, i.e. images for which it is not clear whether they correctly belong to their subgroup. Observe the results in Table 6. Note that there is a significant amount of noise in each subgroup. Furthermore, the noise is amplified for the subgroups that involve the sensitive group Female. The noise is further amplified when the test set used in Hong & Yang (2021) is examined. The test set for Male-Heavy Makeup (an under-represented subgroup) contains only 9 samples. We could not visually determine whether 4 of these 9 images fall under Heavy Makeup. Of the 5 remaining images, 3 are images of the same person from different angles. Therefore, given the noise in the training set, and the small size and noise of the under-represented group in the test set, we conclude that results from this benchmark would be quite noisy. Therefore, we choose to skip it in our experiments.
D MODEL AND HYPER-PARAMETERS DETAILS
We test our method (Bias Mimicking) on two benchmarks. The first is the binary classification benchmark, which includes two datasets, CelebA (Liu et al., 2015) and UTK-Face (Zhang et al., 2017), as outlined in Section 4.1. We follow the same training procedure as prior work (Hong & Yang, 2021). For both datasets, we train a ResNet-18 (He et al., 2016) model, for a total of 10 epochs on CelebA and 20 epochs on UTK-Face. For both datasets, we use a learning rate of 1e-3 with the ADAM (Kingma & Ba, 2014) optimizer. We decay the learning rate following an exponential schedule at epochs 3 and 6 for CelebA and epochs 7 and 14 for UTK-Face, with γ = 0.1 for both datasets. During training, we augment the input images with a horizontal flip.
The second benchmark is the multi-class classification benchmark that makes use of the CIFAR-S dataset (Wang et al., 2020b). Following prior work (Wang et al., 2020b), we train a ResNet-18 model for a total of 200 epochs. We train the model with Stochastic Gradient Descent (SGD) with a learning rate of 0.1 and momentum of 0.9. We use an exponential decay scheduler at epochs 50, 100, and 200 with γ = 0.1. We augment the input images using a horizontal flip and a random crop.
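A minimal PyTorch sketch of this training setup is shown below, assuming the "exponential decay at fixed epochs" corresponds to a milestone-based scheduler; the crop padding is our assumption, since the paper only states "random crop".

```python
import torch
import torchvision
from torchvision import transforms

model = torchvision.models.resnet18(num_classes=10)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# "Exponential decay at epochs 50, 100, and 200 with gamma = 0.1",
# realized here as a milestone-based step scheduler.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[50, 100, 200], gamma=0.1)

train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),  # padding=4 is our assumption
    transforms.ToTensor(),
])

for epoch in range(200):
    # ... one standard cross-entropy pass over the CIFAR-S loader ...
    scheduler.step()
```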
Our method, as outlined in Section 3.2, trains a multi-class prediction head on top of the debiased feature space, where the gradients are stopped from flowing back into the main network. At each epoch of the main model's training, we train this head with the same hyper-parameters outlined for the main model and then reset it before the next epoch. | 1. What is the focus and contribution of the paper regarding data resampling and upweighting?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of training stability and model capacity?
3. Do you have any concerns or questions about the experimental results and ablation studies?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or requests for additional information or simulations to support the paper's claims? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
Data resampling and upweighting are widely adopted in bias mitigation tasks, but they suffer from training instability or reduced model capacity. To handle this issue, the authors propose bias mimicking, a class-conditioned sampling method. They first create a different version of the class target labels, where the bias of y is mimicked (via under-sampling) in the other classes. When mimicking, they train a binary classifier to predict y (y or not y), and this mimicking is conducted for all classes, which prevents dropping a large amount of data as in plain undersampling. The proposed framework is evaluated on binary-class and multi-class benchmarks.
Strengths And Weaknesses
Strengths
They propose bias mimicking, which conducts undersampling in an efficient way that minimizes sample drop.
The results show competitive performance compared to existing approaches.
Weaknesses
Average accuracy is missing. I encourage the authors to include the average accuracy (in addition to UA and BC) for all methods in all tables, for a comprehensive view.
In the cost analysis, why is (Wang et al.) O(B) while BM is O(1)? In my understanding, (Wang et al.) employs multi-way classifiers, but the cost of forwarding the classification layer is negligible compared to the feature extractor, so the overall cost will be almost the same. In a similar vein, I don't think 'additional model params' in the main tables is useful. Also, what do the 'additional hyperparameters' mean for each method?
During training, the proposed method requires two-stage training, 1) bias mimicking and 2) upsampling, but such a limitation is not addressed.
In Ch 4.3, comparing with OS/UW/US would provide a more accurate analysis. Could you provide the results obtained by changing the percentage value of sampling or weighting for OS/UW/US, as in Ch 4.3?
Experimental results are not strong, especially compared to the sampling strategies.
The ablation study is incomplete.
What are the results when the feature extractor is trained with multiple binary classifiers without bias mimicking, followed by OS/UW/US? (In other words, only bias mimicking is removed from the full framework.)
What will the results be if the stop gradient is not used to train the final model?
Clarity, Quality, Novelty And Reproducibility
This paper proposes a clear idea based on effective resampling, but the writing has some typos and inconsistent writing conventions and margins. The idea is simple yet novel, but its effectiveness needs to be verified thoroughly.
The paper provides source code for reproducibility. |
ICLR | Title
Bias Mimicking: A Simple Sampling Approach for Bias Mitigation
Abstract
Prior work has shown that Visual Recognition datasets frequently under-represent sensitive groups (e.g. Female) within a category (e.g. Programmers). This dataset bias can lead to models that learn spurious correlations between class labels and sensitive attributes such as age, gender, or race. Most of the recent methods that address this problem require significant architectural changes or expensive hyper-parameter tuning. Alternatively, data re-sampling baselines from the class imbalance literature (e.g. Undersampling, Upweighting), which can often be implemented in a single line of code and typically have no hyper-parameters, offer a cheaper and more efficient solution. However, we found that some of these baselines were missing from recent bias mitigation benchmarks. In this paper, we show that these simple methods are strikingly competitive with state-of-the-art bias mitigation methods on many datasets. Furthermore, we improve on these methods by introducing a new class-conditioned sampling method: Bias Mimicking. In cases where the baseline dataset re-sampling methods do not perform well, Bias Mimicking effectively bridges the performance gap and improves the total averaged accuracy of under-represented subgroups by over 3% compared to prior work.
1 INTRODUCTION
Spurious predictive correlations have been frequently documented within the deep learning literature (Wang et al., 2020a; Zhao et al., 2021). These correlations can arise when most samples in class y (e.g. blonde hair) belong to a sensitive group b (e.g. female). Thus, the model might learn to use the signal from b to predict y. Mitigating this spurious correlation (bias) involves decorrelating the model's predictions of y from the dataset's sensitive group b. Previous research efforts have primarily focused on model-based solutions. These efforts can be mainly categorized into two directions: 1) methods that require significantly more model parameters during inference (Wang et al., 2020b), which harms model scalability as the number of sensitive groups/target classes increases; 2) methods that introduce additional loss functions and require expensive hyper-parameter tuning (Hong & Yang, 2021; Kim et al., 2019a; Ryu et al., 2017; Tartaglione et al., 2021).
Dataset re-sampling methods, popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018), present a simpler and cheaper alternative. Moreover, as illustrated in Figure 1(a), they can be easily extended to bias mitigation by considering the imbalance within the dataset subgroups rather than the classes. The most common of these methods are Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Oversampling (Wang et al., 2020b). They mitigate class imbalance by altering the dataset distribution through dropping/repeating samples, respectively. Another similar solution is Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019), which levels each sample's contribution to the loss function by appropriately weighting its loss value. However, these methods suffer from significant shortcomings. For example, Undersampling drops a significant portion of the dataset, which could harm the models' predictive capacity. Moreover, Upweighting can be unstable when used with stochastic gradient descent (An et al., 2021). Finally, models trained with Oversampling, as shown by Wang et al. (2020b), are likely to overfit due to being exposed to repetitive sample copies.
To address these problems, we propose Bias Mimicking (BM): a class-conditioned sampling method that mitigates the shortcomings of prior work. As shown in Figure 1(b), for each y ∈ Y, BM maintains a different version of the dataset target labels, d_y. Each d_y preserves y samples while
Undersampling y′ ≠ y such that the bias within class y′ mimics that of y. Consequently, for each d_y, Y is statistically independent of B, i.e. P_{d_y}(Y = y | B = b) = P_{d_y}(Y = y). A naive way of using each d_y is to dedicate a different multi-class prediction head to each. However, this could present scalability issues as the number of classes/sensitive groups increases. Alternatively, we binarize each d_y and then train a different one-vs-all binary classifier on each d_y. BM addresses the shortcomings of prior sampling methods. For example, using every d_y to train the model exposes it to the entire dataset, whereas Undersampling can discard significant numbers of samples. Our approach also does not use weights to scale the loss function; thus, our method avoids the stability issues with stochastic gradient descent that Upweighting suffers from. Finally, since each binary predictor is not exposed to repetitive samples, our model is less prone to overfitting.
In addition to proposing Bias Mimicking, another contribution of our work is an extensive analysis of sampling methods for bias mitigation tasks. We found that many sampling-based methods were notably missing from the comparisons used in prior work (Tartaglione et al., 2021; Wang et al., 2020b; Hong & Yang, 2021). Despite their shortcomings, we show that Undersampling and Upweighting are surprisingly competitive on many bias mitigation benchmarks. This emphasizes these methods' importance as an inexpensive first choice for mitigating bias. However, in cases where these methods are not as effective, Bias Mimicking effectively bridges the performance gap and improves over prior model-based methods. Finally, we thoroughly analyze our approach's behavior through two experiments. First, it is unclear how sensitive our method is to the mimicking condition; therefore, in Section 4.3, we simulate various scenarios where the bias is not perfectly mimicked and note the model performance. Second, we verify the importance of each d_y to the method's predictive performance in Section 4.4. Both experiments showcase the importance of our design in mitigating bias.
Our contributions can be summarized as:
• We show that simple resampling methods can be competitive on many benchmarks when compared to expensive model-based state-of-the-art approaches.
• We introduce a novel resampling method, Bias Mimicking, that improves the average accuracy over under-represented subgroups by 3% across multiple datasets.
• We conduct an extensive empirical analysis of Bias Mimicking that details the method's sensitivity to the mimicking condition and uncovers various insights about its behavior.
2 RELATED WORK
Documenting Spurious Correlations: Bias in Machine Learning can manifest in many ways. Examples include class imbalance Japkowicz & Stephen (2002), historical human biases Suresh & Guttag (2019), evaluation bias et al. (2018), and more. For a full review, we refer the reader to Mehrabi et al. (2021). In our work, we are interested in model bias that arises from spurious correlations. A spurious correlation results from under-representing a certain group of samples (e.g. samples with color red) within a certain class (e.g. planes). This leads the model to learn the false relationship between the class and the over-represented group. Prior work has documented several occurrences of this bias. For example, Singh et al. (2020); Hendrycks et al. (2021); Xiao et al. (2020); Li et al. (2020) showed that state-of-the-art object recognition models are biased toward backgrounds or textures associated with the object class. Agrawal et al. (2018); Clark et al. (2019) showed similar spurious correlations in VQA. Zhao et al. (2021) noted concerning correlations between captions and attributes like skin color in Image-Captioning datasets. Beyond uncovering these correlations within datasets, prior work like Wang et al. (2020b); Hong & Yang (2021) introduced synthetic bias datasets where they systematically assessed bias effect on model performance.
Model-based solutions: In response to the documentation of dataset spurious correlations, several model-focused methods have been proposed (Ryu et al., 2017; Kim et al., 2019a; Wang et al., 2020b; Hong & Yang, 2021; Tartaglione et al., 2021). For example, Tartaglione et al. (2021) present a metric-learning-based method where the model feature space is regularized against learning harmful correlations. Wang et al. (2020b) survey several existing methods, such as adversarial training, that randomize the relationship between target classes and sensitive groups in the feature space. They also present a new method, domain independence, where different prediction heads are allocated for each sensitive group. Most recently, Hong & Yang (2021) extended contrastive learning frameworks from self-supervised learning (Chen et al., 2020; He et al., 2020; Khosla et al., 2020) to mitigate bias. Our work complements these efforts by introducing a hyper-parameter-free re-sampling algorithm that improves over the state-of-the-art performers.
Dataset-based solutions In addition to model-based approaches, we can mitigate spurious correlations by fixing the training dataset distribution. Examples include Oversampling minority classes and Undersampling majority ones, which are popular within the class imbalance literature (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018). However, as we note in the introduction, some of these methods have been missing from recent visual bias mitigation benchmarks (Wang et al., 2020b; Tartaglione et al., 2021; Hong & Yang, 2021). Thus, we review these methods and describe their shortcomings in Section 3.1. Alternatively, other efforts attempt to fix the dataset distribution by introducing completely new samples. Examples include Kärkkäinen & Joo (2021), who introduce a new dataset for face recognition that is balanced among several race groups, and Ramaswamy et al. (2020), who use GANs to generate training data that balance the sizes of dataset subgroups. While a dataset-based approach, our work differs from these efforts in that it does not generate or introduce new samples. Finally, also related to our work are sampling methods like REPAIR (Li & Vasconcelos, 2019), where a function is learned to prioritize specific samples and thus learn more robust representations. However, unlike our method, REPAIR does not make use of sensitive group labels and thus cannot target specific spurious correlations.
3 SAMPLING FOR BIAS MITIGATION
In bias mitigation, the goal is to remove the effect of spurious correlations so that a model does not correlate its predictions with sensitive groups of samples. More formally, assume we have a dataset of image/target/sensitive-group triplets (X, Y, B). Define g_{y,b} = {(x_i, y_i, b_i) s.t. y_i = y, b_i = b}, i.e. the subgroup of samples that share the target class y and sensitive group b. Spurious correlations occur when the samples of a target class y (e.g. blonde hair) are over-represented by one sensitive group b (e.g. female) rather than distributed equally among the dataset's sensitive groups. In other words, assuming P_D(X, Y, B) is the distribution of the training data, P_D(B = b, Y = y) >> 1/|B|, where |B| denotes the number of sensitive groups. Consequently, the model may use the signal from B to predict Y. For example, if most blonde hair samples were female, the model might learn to predict blonde hair every time it sees a female sample. Thus, the goal of this task is to train the model such that P(Ŷ | X, B) = P(Ŷ | X), where Ŷ denotes the model predictions.
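As an illustration (ours, not the paper's), this kind of skew can be surfaced by tabulating P(B = b | Y = y) directly from the labels:

```python
import numpy as np

def bias_table(labels, groups):
    """Rows: target classes; columns: sensitive groups; entry (y, b) is
    P(B = b | Y = y). A heavily skewed row signals a potential spurious
    correlation between that class and a sensitive group."""
    labels, groups = np.asarray(labels), np.asarray(groups)
    ys, bs = np.unique(labels), np.unique(groups)
    table = np.zeros((len(ys), len(bs)))
    for i, y in enumerate(ys):
        mask = labels == y
        for j, b in enumerate(bs):
            table[i, j] = (groups[mask] == b).mean()
    return table  # each row sums to 1
```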
Our work addresses methods that fix spurious correlations by ensuring statistical independence between Y and B at the dataset level, i.e. P_D(Y | B) = P_D(Y). As we note in the introduction, some of the solutions that fall under this approach are missing from prior work benchmarks. Therefore, we briefly review the missing methods in Section 3.1. Then, we introduce a new sampling method, Bias Mimicking, which addresses the shortcomings of prior sampling methods, in Section 3.2.
3.1 SIMPLE SAMPLING METHODS
As discussed in the introduction, the class imbalance literature is rich with dataset re-sampling solutions (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020; et al., 2018; Chawla et al., 2002). These solutions address class imbalance by balancing the distribution of classes; thus, they can be extended to bias mitigation by balancing subgroups instead of classes. Prior work in visual bias mitigation has explored one of these solutions: Oversampling (Wang et al., 2020b). This approach mitigates dataset bias by replicating the samples of the under-represented subgroups until all subgroups are balanced. However, Wang et al. (2020b) demonstrated that Oversampling is not effective at mitigating bias, likely because the model sees repetitive sample copies, resulting in overfitting. In addition to Oversampling, Undersampling (Japkowicz & Stephen, 2002; Buda et al., 2018; Sagawa et al., 2020) and Upweighting (Shimodaira, 2000; Byrd & Lipton, 2019) are other popular methods. Both methods, however, have not been benchmarked in recent visual bias mitigation work (Tartaglione et al., 2021; Hong & Yang, 2021). We review these solutions below.
Undersampling drops samples from the majority classes until all classes are balanced. We can extend this solution to bias mitigation by dropping samples to balance the dataset subgroups. More concretely, we subsample every subgroup in the dataset to match the size of the smallest subgroup, i.e. min_{y,b} |g_{y,b}|. However, a critical shortcoming of Undersampling is that it drops a significant portion of the dataset and thus could harm the model's predictive capacity. This might explain its absence from recent bias mitigation benchmarks. We address this shortcoming in our proposed method, Bias Mimicking, in Section 3.2.
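A minimal sketch of this subgroup-level Undersampling is shown below; the function name and random seed handling are our own illustration:

```python
import numpy as np

def undersample_indices(labels, groups, seed=0):
    """Keep min_{y,b} |g_{y,b}| randomly chosen samples from every
    (class, group) subgroup."""
    rng = np.random.default_rng(seed)
    subgroups = {}
    for i, key in enumerate(zip(labels, groups)):
        subgroups.setdefault(key, []).append(i)
    n_min = min(len(idx) for idx in subgroups.values())
    keep = [rng.choice(idx, size=n_min, replace=False)
            for idx in subgroups.values()]
    return np.concatenate(keep)
```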
Upweighting Upweighting levels the contribution of different samples to the loss function by multiplying each sample's loss value by the inverse of its class frequency. We can extend this process to bias mitigation by simply considering subgroups instead of classes. More concretely, assume model weights w, sample x, class y, and subgroup g_{y,b} where x ∈ g_{y,b}; the model then optimizes:
L = E_{x,y,g} [ (1 / p_{g_{y,b}}) ℓ(x, y; w) ],
where p_{g_{y,b}} = |g_{y,b}| / Σ_{y,b} |g_{y,b}| is
computed over the training dataset. A key shortcoming of Upweighting is its instability when used with stochastic gradient descent (An et al., 2021). Indeed, we demonstrate this problem in our experiments where Upweighting does not work well on some datasets.
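A minimal PyTorch sketch of this weighting scheme is shown below, assuming integer-encoded labels and groups; the helper names are ours:

```python
import torch
import torch.nn.functional as F

def subgroup_weights(labels, groups):
    """w_i = 1 / p_{g_{y,b}} for the subgroup sample i belongs to."""
    pairs = list(zip(labels, groups))
    counts = {g: pairs.count(g) for g in set(pairs)}
    n = float(len(pairs))
    return torch.tensor([n / counts[g] for g in pairs])

def upweighted_loss(logits, targets, weights):
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (weights * per_sample).mean()
```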
3.2 BIAS MIMICKING
The goal of our method is to decorrelate a model's predictions Ŷ from the sensitive attributes B. Our approach is inspired by sampling methods, which enforce this independence at the dataset level (i.e. P_D(Y | B) = P_D(Y)). However, simple sampling methods like Oversampling, Undersampling, and Upweighting suffer from critical shortcomings, as outlined in Section 3.1. We address these shortcomings in our proposed sampling algorithm, Bias Mimicking, below.
Algorithm The key principle of our algorithm is the observation that if the bias toward sensitive group b is proportionally equal among all the target classes, then for every y ∈ Y, P(Y = y | B = b) = P(Y = y), i.e. y is statistically independent of b. More concretely:
Proposition 1. Assume a target label set Y = {0, 1, 2, ..., C} and a sensitive group b ∈ B. If P(B = b | Y = i) = P(B = b | Y = j) ∀ i, j ∈ {0, ..., C}, then P(Y = y | B = b) = P(Y = y) ∀ y ∈ Y.
Refer to Appendix A for the proof. Note that ensuring this proposition holds for every b ∈ B implies that P(Y | B) = P(Y); in other words, Y is independent of B, which in turn means that the resulting model's predictions will not be strongly correlated with B. Refer to Section 4.3 for an empirical sensitivity analysis of this result. Note that Undersampling is a special case of this result, where Y is indeed independent of B.
We use this result to motivate a novel way of undersampling the input distribution that ensures the model is exposed to every sample in the dataset while preventing the spurious correlation. To that end, for every class y, we create a different version of the dataset target labels, denoted d_y. Each d_y preserves its respective class y samples while undersampling every y′ ≠ y such that
P_{d_y}(B = b | Y = y) = P_{d_y}(B = b | Y = y′) ∀ b ∈ B (1)
As a result, each d_y ensures the independence of Y from B. A naive way of using each d_y would be to dedicate a multi-class prediction head to each. However, this could present scalability issues as the number of classes/sensitive groups increases. Alternatively, we binarize each d_y and then use it to train a one-vs-all binary classifier BP_y for each y. Each head is trained on image-target pairs from its corresponding distribution d_y, as Figure 2 demonstrates. Consequently, since each d_y preserves y samples, the model backbone sees the entire dataset through the different signals backpropagating from each BP_y. Moreover, since the same bias is mimicked for every class in d_y, each incoming signal from BP_y does not backpropagate spurious correlations; thus, Y's correlation with B should be minimized within the model's learned feature representation.
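A sketch of one plausible construction of the index set for a single d_y is shown below; the paper's released code may implement the subsampling differently, and the rounding policy here is our assumption:

```python
import numpy as np

def mimic_indices(labels, groups, y, seed=0):
    """Indices realizing d_y: keep all class-y samples, undersample each
    other class y' so that P(B | Y = y') matches P(B | Y = y) (Eq. 1)."""
    rng = np.random.default_rng(seed)
    labels, groups = np.asarray(labels), np.asarray(groups)
    keep = list(np.flatnonzero(labels == y))
    # Target bias distribution: class y's proportions over groups.
    p_b = {b: (groups[labels == y] == b).mean() for b in np.unique(groups)}
    for y_other in np.unique(labels):
        if y_other == y:
            continue
        idx = {b: np.flatnonzero((labels == y_other) & (groups == b))
               for b in p_b}
        # Largest size of class y' whose group proportions can match p_b.
        n = int(min(len(idx[b]) / p_b[b] for b in p_b if p_b[b] > 0))
        for b in p_b:
            take = int(round(n * p_b[b]))
            keep += list(rng.choice(idx[b], size=take, replace=False))
    return np.array(keep)
```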
Using the binary classifiers during inference is challenging, since each was trained on a different distribution and they are therefore uncalibrated with respect to each other. However, we know that Bias Mimicking minimizes the correlation between Y and B within the feature space. Thus, we exploit this fact by training a multi-class prediction head over the feature space while preserving the original dataset distribution. Note that we stop the gradient from flowing into the model backbone to ensure that the bias is not learned again. In our experiments, we found that this achieves state-of-the-art performance on many benchmarks. However, on some, we found that we could obtain a slight improvement in performance by oversampling the distribution of the features used to train the multi-class prediction head. Oversampling, unlike when used to train the entire model, did not overfit in this setting because 1) the multi-class prediction head is a simple linear layer, which is less likely to overfit than an overparameterized model like a neural net, and 2) the features learned by our method do not take gradients from the oversampled distribution, helping to ensure that the network cannot overfit to the repetitive samples. Refer to Appendix B for further analysis. Given these results, we use Oversampling to train the multi-class prediction head throughout our experiments.
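The stop-gradient design can be sketched as follows in PyTorch (our own illustration of the idea, not the authors' code); detach() prevents the multi-class head's gradients from reaching the backbone:

```python
import torch.nn as nn

class BiasMimickingModel(nn.Module):
    """Shared backbone, one binary head per class (each trained on its
    d_y split), and a multi-class head trained on detached features."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.binary_heads = nn.ModuleList(
            nn.Linear(feat_dim, 1) for _ in range(num_classes))
        self.multiclass_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)
        binary_logits = [head(feats) for head in self.binary_heads]
        # detach(): the multi-class head's gradients never reach the
        # backbone, so the debiased features cannot be re-biased.
        multiclass_logits = self.multiclass_head(feats.detach())
        return binary_logits, multiclass_logits
```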
Cost Analysis Unlike prior work (Hong & Yang, 2021; Tartaglione et al., 2021), Bias Mimicking involves no extra hyper-parameters; the debiasing is automatic by definition. Moreover, unlike prior work (Wang et al., 2020b), where inference uses a prediction head for each protected attribute (i.e. O(B)), Bias Mimicking uses only a single prediction head during inference (i.e. O(1)), discarding the attribute-specific binary heads at test time. Therefore, our method is simpler and more efficient during inference. Finally, note that the additional target labels d_y do not result in longer epochs; we make only one pass over each sample x during an epoch and apply its contribution to the relevant BP_y(s) accordingly.
4 EXPERIMENTS
We report our method's performance on two benchmarks: a binary classification benchmark in Section 4.1 and a multi-class classification benchmark in Section 4.2. In addition to reporting our method's results, we expand both benchmarks, which mainly focus on model-based methods, by including the basic sampling methods outlined in Section 3, namely Undersampling, Upweighting, and Oversampling. We then follow up our results with two main experiments that analyze our method's behavior. The first, in Section 4.3, is a sensitivity analysis of our method with respect to the mimicking condition. The second, in Section 4.4, analyzes the contribution of each version of the target labels d_y to model performance.
4.1 BINARY CLASSIFICATION BENCHMARK
Datasets and Metrics We compare methods on the CelebA dataset (Liu et al., 2015) and the UTK-Face dataset (Zhang et al., 2017). Following prior work (Hong & Yang, 2021; Tartaglione et al., 2021), we train a binary classification model on CelebA where BlondHair is the target attribute and Gender is the bias attribute. Note that prior work (Hong & Yang, 2021; Tartaglione et al., 2021) used the HeavyMakeUp attribute in their CelebA benchmark. However, during our experiments, we found serious problems with that benchmark; refer to Appendix C for details. Therefore, we skip this benchmark in our work. For UTK-Face, we follow Hong & Yang (2021) and perform binary classification with Race/Age as the sensitive attribute and Gender as the target attribute. Methods are evaluated in terms of Unbiased Accuracy (Hong & Yang, 2021), which measures accuracy on a test set balanced among the subgroups, and Bias-Conflict (Hong & Yang, 2021), which measures accuracy on the minority subgroups. Refer to Appendix D for more details.
Baselines We use the baselines reported in the prior work (Hong & Yang, 2021) benchmark. These methods require either additional model parameters or additional hyper-parameters. "Bias-Contrastive and Bias-Balanced Learning" (BC + BB) (Hong & Yang, 2021) uses a contrastive learning framework to mitigate bias. "Learning Not to Learn" (LNL) (Kim et al., 2019a) uses an additional prediction branch to minimize the mutual information between the model feature and the bias label. Domain-Independent (DI) (Wang et al., 2020b) uses an additional prediction head for each sensitive subgroup. EnD (Tartaglione et al., 2021) uses a regularizer that disentangles the features of samples with the same bias class. Furthermore, we expand the benchmark by reporting the performance of the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results The binary classification benchmark in Table 1 indicates that our method (BM) maintains strong averaged predictive accuracy (UA) while significantly improving performance on the under-represented subgroups (BC) by > 3% when compared to prior model-based methods. Moreover, our method's strong performance does not require additional expensive hyper-parameter tuning or model parameters, unlike prior methods. Therefore, our method is simpler and more efficient. Furthermore, note that our method performs best among the sampling methods. More concretely, BM performs significantly better than Oversampling. This aligns with the observation of Wang et al. (2020b) that overparameterized models like neural nets tend to overfit when trained on oversampled distributions. Note that even though our method uses Oversampling to train part of the network, we do not see the same decline in performance. This is because, as discussed in Section 3.2, we only use the oversampled version of the dataset to train a simple linear layer over the learned debiased features. Undersampling demonstrates poor predictive performance on UTK-Face because it discards a significant portion of the dataset, which is reflected in the average performance. Surprisingly, however, the method performs quite well on CelebA. This is likely because the task (predicting hair color) is easy and does not require many samples. Thus, this emphasizes the method's importance as a reasonable first step when sufficient data is available. Overall, however, the average performance of our method is significantly better. Finally, note that while Upweighting demonstrates competitive performance on CelebA, it lags behind significantly on UTK-Face. This is likely due to the method's instability with respect to stochastic gradient descent (An et al., 2021). Thus, when comparing the method's averaged performance to ours, we note that our method improves performance by over 1%.
4.2 MULTI-CLASS CLASSIFICATION BENCHMARK
Datasets and Metrics We use the bias-controlled CIFAR-S benchmark (Wang et al., 2020b). The benchmark synthetically introduces bias into the CIFAR-10 dataset (Krizhevsky, 2012) by converting a sub-sample of images from each target class to gray-scale. We use the version of Wang et al. (2020b) where half of the classes are biased toward color and the other half toward gray, and the dominant sensitive group in each target class represents 95% of the samples. Methods are evaluated using the accuracy on two versions of the test set, color and gray, as well as the mean of both. In addition to accuracy, methods are evaluated with the bias amplification metric (Wang et al., 2020b).
Baselines We use the baselines reported in the prior work (Wang et al., 2020b) benchmark. These methods require either additional model parameters or additional hyper-parameters. Adversarial Learning (Adv) with uniform confusion (Hoffman et al., 2015) and reversal projection (Zhang et al., 2018) introduces an adversarial loss that seeks to randomize the model's feature representation with respect to Y and B. Domain Discriminative training (Wang et al., 2020b) and Domain Independence (DI) (Wang et al., 2020b) both introduce a prediction head for each dataset subgroup, but they use different methods to train and perform inference; refer to Wang et al. (2020b) for more information. Furthermore, we expand the benchmark by including the sampling methods outlined in Section 3. All reported methods are compared to a "vanilla" model trained on the original dataset distribution with a Cross-Entropy loss.
Results Observe the results in Table 2. Note that our method (BM) maintains competitive performance with the prior state of the art (DI) while requiring no additional hyper-parameters or model parameters at inference time. Moreover, considering DI's poor performance on the binary classification benchmark in Table 1, our method's performance averaged across both benchmarks is significantly better. More notably, it is the only sampling method that both maintains strong predictive performance and significantly reduces the bias. Upweighting's weaker predictive performance here can be attributed to its instability with respect to stochastic gradient descent (An et al., 2021). While Undersampling reduces bias significantly compared to prior work, it lags significantly in predictive performance.
4.3 HOW SENSITIVE IS THE MODEL PERFORMANCE TO THE MIMICKING CONDITION?
Bias Mimicking is effective at mitigating bias. However, it is not clear how sensitive the model's learned spurious predictive behavior is to the bias mimicking condition (Equation 1). Therefore, in this section, we seek to answer this question. To that end, we test multiple scenarios where the bias is mimicked by a percentage value x ∈ [0, 100], such that 0% corresponds to no modification of the distribution and 100% corresponds to complete bias mimicking. Following this definition,
we run the experiment on the datasets and attributes introduced by the binary classification benchmark in Section 4.1 and evaluate performance using the metrics introduced there, namely Bias-Conflict (BC) and Unbiased Accuracy (UA). Observe the results in Figure 3. Note that as the percentage of bias mimicked decreases, BC and UA decrease, as expected. This is because P(Y | B) ≠ P(Y) when the condition of Proposition 1 is not met. The best performance is achieved when x is 100% and thus P(Y | B) = P(Y). From this analysis, we conclude that the bias mimicking condition is critical for good performance.
4.4 HOW DOES EACH d_y AFFECT MODEL PERFORMANCE?
Bias Mimicking takes in the original dataset distribution of target labels and produces a different version for each class y, denoted d_y, where class y samples are preserved and the bias of y is mimicked in the other classes. By using every d_y, we expose the model to the entire dataset. In this section, we investigate the effect of each d_y on performance. To that end, we perform an experiment using the binary classification tasks outlined in Section 4.1. For each task, we thus have two versions of the dataset target labels, d_1 and d_2. We compare the performance of three different models: model (1) trained only on d_1, model (2) trained only on d_2, and model (3) trained on both (d_1, d_2); the last configuration is the one used by our method. We use the Unbiased Accuracy metric (UA). We also break the metric down into two versions: UA_1, where accuracy is averaged over class 1 subgroups, and UA_2, where accuracy is averaged over class 2 subgroups. Note that, overall, the model trained on d_1 performs better at predicting y_1 but worse at predicting y_2, as outlined by the results in Table 3. We note the same trend, in reverse, for the model trained on d_2. This disparity in performance harms the Unbiased Accuracy. However, a model trained on (d_1, d_2) balances both accuracies and achieves the best total Unbiased Accuracy (UA). These results emphasize the importance of each d_y for good performance.
5 CONCLUSION
We first recognized that simple and hyper-parameter-free methods for bias mitigation like Undersampling and Upweighting were missing from recent benchmarks. This observation was especially relevant considering that state-of-the-art solutions require either expensive hyper-parameter tuning or architectural change. Therefore, we benchmarked these methods and concluded that some are surprisingly effective on many datasets. However, on some others, their performance significantly lagged behind model-based methods. Motivated by this observation, we introduced a novel sampling method: Bias Mimicking. The method retained the simplicity of sampling methods while improving over state-of-the-art model-based methods. Furthermore, we extensively analyzed the behavior of Bias Mimicking, which emphasized the importance of our design.
Future Work We demonstrate that dataset re-sampling methods are simple and effective tools for mitigating bias. However, we recognize that the bias scenarios explored in prior work and ours are still limited. For example, current studies only consider mutually exclusive sensitive groups; thus, a sample can belong to only one sensitive group at a time. How does relaxing this assumption, i.e. intersectionality, impact bias? Finally, the dataset re-sampling methods presented in this work are effective at mitigating bias when enough samples from all dataset subgroups are available. However, it is unclear how these methods would handle bias scenarios where one class is completely dominated by one sensitive group. These are all questions that we hope future work will address.
Code of Ethics Statement Our work addresses a critical problem within the fairness literature: spurious predictive behavior by deep learning models. We measure this behavior by calculating the predictive performance (accuracy) on the dataset's subgroups. While this metric aligns with our goal of measuring and preventing spurious behavior, we emphasize that it is not exhaustive of other fairness concerns. We refer the reader to Kleinberg et al. (2017) for a broader discussion of fairness metrics. Furthermore, while our proposed method aims to learn robust representations for under-represented subgroups within a given dataset, we acknowledge that such representations could be misused in downstream applications that raise ethical concerns (e.g. surveillance). Additionally, two of the datasets used in our experiments involve facial attribute recognition tasks (e.g. hair color). Models trained on these datasets could be misused, yet they remain standard benchmarks for evaluating spurious correlations. We encourage future researchers and practitioners to use this technology with care and caution.
Reproducibility Statement We provide the source code for this work in the supplementary material. The training and hyper-parameter details needed to reproduce the results are outlined in Appendix D. Moreover, the results we report in this paper were averaged over 3 runs with different seeds, and standard deviations are reported to ensure statistical significance. Furthermore, all experiments were done on datasets that are publicly available online.
A PROPOSITIONS AND PROOFS
The central component of our proposed method is the insight that if the bias toward sensitive group b is proportionally equal among every y ∈ Y, then P(Y = y | B = b) = P(Y = y). We provide a proof of this proposition below:
Proposition 1. Assume a target label set Y = {0, 1, 2, ..., C} and a sensitive group b ∈ B. If P(B = b | Y = i) = P(B = b | Y = j) ∀ i, j ∈ {0, ..., C}, then P(Y = y | B = b) = P(Y = y) ∀ y ∈ Y.
Proof. First, by the law of total probability: P(B = b) = Σ_{y′ ∈ Y} P(B = b | Y = y′) P(Y = y′).
Given our assumption of bias mimicking, we can write, ∀ y ∈ Y: P(B = b) = P(B = b | Y = y) Σ_{y′ ∈ Y} P(Y = y′) = P(B = b | Y = y). (2)
From here, using Bayes' rule and the result from (2), we can write, ∀ y ∈ Y:
P(Y = y | B = b) = P(B = b | Y = y) P(Y = y) / P(B = b) = P(B = b) P(Y = y) / P(B = b) = P(Y = y).
B SAMPLING METHODS IMPACT ON MULTI-CLASS CLASSIFICATION HEAD
Bias Mimicking produces a binary version d_y of the labels for each class y. Each d_y preserves class y samples while undersampling each y′ such that the bias within y′ mimics that of y. A debiased feature representation is then learned by training a binary classifier on each d_y. When training is done, it is challenging to use the scores from each binary predictor for inference, because each predictor is trained on a different distribution of the data, so the predictors are uncalibrated with respect to each other. Therefore, to perform inference, we train a multi-class prediction head on top of the learned feature representations using the original dataset distribution, while preventing the gradients from flowing into the feature space. Observe that this approach, denoted BM + Vanilla in Table 4 and Table 5, is effective at mitigating the bias when compared to the vanilla model results. However, we note that we obtained a slight performance improvement when we used Oversampling, mostly on the CelebA dataset in Table 4 and on the CIFAR-S dataset in Table 5. Therefore, we use Oversampling to train the multi-class prediction head in our implementation.
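For the Oversampling used on the prediction head, a weighted sampler over subgroups is one straightforward implementation (a sketch under our assumptions, not necessarily the authors' code):

```python
import torch
from torch.utils.data import WeightedRandomSampler

def oversampling_sampler(labels, groups):
    """Draw minibatches in which every (y, b) subgroup appears equally
    often; used here only for the linear head, not the backbone."""
    pairs = list(zip(labels, groups))
    counts = {g: pairs.count(g) for g in set(pairs)}
    weights = torch.tensor([1.0 / counts[g] for g in pairs],
                           dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(pairs),
                                 replacement=True)
```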
Aside from Oversampling, note that the other sampling methods, namely Undersampling and Upweighting, could not demonstrate the same improvement in performance. Both were comparable to Oversampling on the binary benchmark in Table 4; however, both lagged significantly behind on the multi-class benchmark in Table 5.
C HEAVY MAKEUP BENCHMARK
Prior work (Hong & Yang, 2021) uses the Heavy Makeup binary attribute prediction task from CelebA (Liu et al., 2015) as a benchmark for bias mitigation, where Gender is the sensitive attribute. In this setting, the Heavy Makeup attribute is biased toward the sensitive group Female. In our experiments, we found that this benchmark contains significant noise. We believe this noise stems from the fact that Heavy Makeup is a subjective attribute: it is influenced by cultural elements, lighting conditions, and camera pose. Thus, we expect a fair amount of label noise, which we verify via the qualitative and quantitative analyses below.
Qualitative Analysis: We sample 5 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup (Fig 4). It is clear from the figure that there is no firm agreement about the definition of Heavy Makeup.
Quantitative Analysis: We sample 100 random images from each of the following subgroups: Female-Heavy Makeup, Female-Non-Heavy Makeup, Male-Heavy Makeup, and Male-Non-Heavy Makeup. We calculate the percentage of noisy images in each subgroup, i.e. images for which it is not clear whether they correctly belong to their subgroup. Observe the results in Table 6. Note that there is a significant amount of noise in each subgroup. Furthermore, the noise is amplified for the subgroups that involve the sensitive group Female. The noise is further amplified when the test set used in Hong & Yang (2021) is examined. The test set for Male-Heavy Makeup (an under-represented subgroup) contains only 9 samples. We could not visually determine whether 4 of these 9 images fall under Heavy Makeup. Of the 5 remaining images, 3 are images of the same person from different angles. Therefore, given the noise in the training set, and the small size and noise of the under-represented group in the test set, we conclude that results from this benchmark would be quite noisy. Therefore, we choose to skip it in our experiments.
D MODEL AND HYPER-PARAMETERS DETAILS
We test our method (Bias Mimicking) on two benchmarks. The first is the binary classification benchmark, which includes two datasets, CelebA (Liu et al., 2015) and UTK-Face (Zhang et al., 2017), as outlined in Section 4.1. We follow the same training procedure as prior work (Hong & Yang, 2021). For both datasets, we train a ResNet-18 (He et al., 2016) model, for a total of 10 epochs on CelebA and 20 epochs on UTK-Face. For both datasets, we use a learning rate of 1e-3 with the ADAM (Kingma & Ba, 2014) optimizer. We decay the learning rate following an exponential schedule at epochs 3 and 6 for CelebA and epochs 7 and 14 for UTK-Face, with γ = 0.1 for both datasets. During training, we augment the input images with a horizontal flip.
The second benchmark is the multi-class classification benchmark that makes use of the CIFAR-S dataset (Wang et al., 2020b). Following prior work (Wang et al., 2020b), we train a ResNet-18 model for a total of 200 epochs. We train the model with Stochastic Gradient Descent (SGD) with a learning rate of 0.1 and momentum of 0.9. We use an exponential decay scheduler at epochs 50, 100, and 200 with γ = 0.1. We augment the input images using a horizontal flip and a random crop.
Our method, as outlined in Section 3.2, trains a multi-class prediction head on top of the debiased feature space, where the gradients are stopped from flowing back into the main network. At each epoch of the main model's training, we train this head with the same hyper-parameters outlined for the main model and then reset it before the next epoch. | 1. What is the focus and contribution of the paper regarding sampling methods?
2. What are the strengths of the proposed approach, particularly in its simplicity and performance?
3. What are the weaknesses of the paper, especially regarding the missing details and inconsistent terminology?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new sampling method that mitigates spurious correlations. Rather than using expensive model-based approaches, the authors address the limitations of the dataset-based approaches. The experimental results show that the performance of the proposed method is similar to or better than that of the model-based approaches.
Strengths And Weaknesses
[Strength]
The proposed method is simple compared to its competitors.
The proposed method achieves similar or better performance compared to its competitors.
[Weaknesses]
Some important details are missing.
They do not show why expensive model-based approaches are not practical or how much the proposed model shortened learning/inference time.
In section 4.3, there is no detail of how the percentage value x is applied to the proposed model.
In table 3, there is no detail of exactly what class 1 and class 2 are.
The experiments and terminology lack uniformity.
The evaluation metrics and competitors of the binary classification benchmark and the multi-class classification benchmark are different, and there is no explanation of why they should differ.
The use of terminology is inconsistent. They use "Resnet-18" mixed with "Resnet18" and use "P_D(Y)" mixed with "P(Y)" on the same page. The aforementioned are only small examples, and many more are found in the paper. Each instance could be a typo, but they are too frequent.
Clarity, Quality, Novelty And Reproducibility
Because they propose a simple method, it seems reproducible.
The use of terminology is inconsistent and descriptions of each term are insufficient.
For example, the term B appears on page 2, but an explanation of what exactly it is comes much later.
Some important details are missing. |
ICLR | Title
Characterizing Missing Information in Deep Networks Using Backpropagated Gradients
Abstract
Deep networks face challenges of ensuring their robustness against inputs that cannot be effectively represented by information learned from training data. We attribute this vulnerability to the limitations inherent to activation-based representation. To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information. In addition, we propose a directional constraint on the gradients as an objective during training to improve the characterization of missing information. To validate the effectiveness of the proposed approach, we compare the anomaly detection performance of gradient-based and activation-based representations. We show that the gradient-based representation outperforms the activation-based representation by 0.093 in CIFAR-10 and 0.361 in CURE-TSR datasets in terms of AUROC averaged over all classes. Also, we propose an anomaly detection algorithm that uses the gradient-based representation, denoted as GradCon, and validate its performance on three benchmarking datasets. The proposed method outperforms the majority of the state-of-the-art algorithms in CIFAR-10, MNIST, and fMNIST datasets with an average AUROC of 0.664, 0.973, and 0.934, respectively.
1 INTRODUCTION
The generalizable representation of data from deep networks has largely contributed to the success of deep learning in diverse applications (Bengio et al., 2013). The representation from deep networks is often obtained in the form of activation. The activation is constructed by the weights, which contain specific knowledge learned from training samples. Recent studies reveal that deep networks still face robustness issues when input that cannot be properly represented by learned knowledge is given to the networks (Goodfellow et al., 2014; Hendrycks & Dietterich, 2018; Liang et al., 2017). One of the reasons for the vulnerability of deep networks is the limitation of the activation-based representation, which inherently focuses on the learned knowledge. However, the part of the input that causes problems in deep networks mainly comes from information that deep networks were not able to learn from the training data. Therefore, it is more appropriate to complement the representation of input data from the perspective of information that has not been learned, in order to enhance the robustness of machine learning algorithms.
The gradient is another fundamental element in deep networks that is utilized to learn new information from given inputs by updating model weights (Goodfellow et al., 2016). It is generated through backpropagation to train deep networks by minimizing designed loss functions (Rumelhart et al., 1986). During the training of the network, the gradient with respect to the weights provides directional information to update the deep network and learn a better representation for the inputs. In other words, gradients guide the network to learn new information that was not learned from the data it has seen so far but is present in the current input. Considering this role during training, gradients can provide a complementary perspective with respect to activation and characterize missing information that the network has not learned for each unseen image.
We demonstrate the role of gradients with an example in Fig. 1. Assume that a deep network has only learned curved edge features from training images of the digit ‘0’. During testing, the digit ‘6’ is given to the network. The digit ‘6’ consists of both learned information (curved edges) and missing information (straight edges on top). Since the activation-based representation is constructed
based on the information that the network has already learned, the curved part of the digit ‘6’ will be characterized effectively by the activation. However, the network still has to learn the straight edge features to perform successfully on the digit ‘6’. Therefore, the gradients, which guide updates in the deep network, can characterize the straight edge information that has not been learned.
We propose analyzing the representation capability of gradients in characterizing missing information for deep networks. Gradients have been utilized in diverse applications such as adversarial attack generation and visualization (Zeiler & Fergus, 2014; Goodfellow et al., 2014). However, using gradients with respect to the weights as a representation of data has not been actively explored yet. Through a comprehensive analysis against activation-based representations, we show the effectiveness of gradient representation in characterizing information that has not been learned by a deep network. Furthermore, we show that gradient representation can achieve state-of-the-art performance in detecting potentially invalid data for the network. The main contributions of this paper are threefold:
i We propose utilizing gradients as a representation to characterize information that has not been learned from the training data but is present in the input data.
ii We analyze the representation capability of gradients compared to activations for detecting samples that possess features that have not been learned by the network.
iii We propose a gradient-based anomaly detection algorithm that outperforms state-of-the-art algorithms based on activation representations.
2 RELATED WORKS
Existing works have focused on achieving reliable activation-based representations to properly handle inputs that can cause significant performance degradation to deep networks. One of the most intuitive ways is to enhance the representation capability of activation by finetuning trained models and learning more from augmented data. Goodfellow et al. (2014); Vasiljevic et al. (2016); Temel et al. (2017) utilize adversarial images, blurred images, and distorted virtual images, respectively, to finetune networks and improve the classification performance based on the activation features. Also, several pre-processing techniques have been developed to make the representation of data similar to that of training data to achieve the robustness of algorithms. The discrete cosine transform (DCT) has been explored as a simple pre-processing technique to eliminate the effect of distortion and adversarial attacks (Hossain et al., 2018; Das et al., 2018). On the other hand, methods developed for detecting and filtering out problematic samples have focused on making the representation of such samples as distinguishable as possible from that of training images in the activation space. Liang et al. (2017) propose a gradient-based pre-processing technique to make the activation-based representation of adversarial images and out-of-distribution images dissimilar to that of training images. Furthermore, constrained activation representations have been actively utilized to make normal and abnormal images statistically separable in the activation space for abnormal data detection (Pidhorskyi et al., 2018; Perera et al., 2019; Abati et al., 2018). The aforementioned works exclusively focus on activation-based representations, which are based on learned information, to handle information that has not been learned. We complement these by proposing a backpropagated-gradient-based representation of data which particularly focuses on what has not been learned by the trained network.
The backpropagated gradients have been utilized in diverse applications including but not limited to visualization, adversarial attacks, and image classification. The backpropagated gradients have been widely used for the visualization of deep networks. In Zeiler & Fergus (2014); Springenberg et al. (2014), information that networks have learned for a specific target class is mapped back to the pixel space through the backpropagation and visualized. Selvaraju et al. (2017) utilize the
gradient with respect to activation to weight the activation and visualize the reasoning for predictions that deep networks have made. Adversarial attacks are another application of gradients. Goodfellow et al. (2014); Kurakin et al. (2016) show that adversarial attacks can be generated by adding an imperceptibly small vector which is the signum of the input gradients. In Kwon et al. (2019), the backpropagated gradients are utilized to classify and objectively estimate the quality of distorted images. Several works have incorporated gradients with respect to the input in the form of regularization during the training of deep networks to improve robustness (Drucker & Le Cun, 1991; Ross & Doshi-Velez, 2018; Sokolić et al., 2017). Although existing works have shown that gradients with respect to the input or activation can be useful for diverse applications, gradients with respect to the weights remain almost unexplored aside from their role in training deep networks. In the following section, we present our interpretation of weight gradients to further elaborate on their role as a data representation.
3 GEOMETRIC INTERPRETATION OF GRADIENTS
We use an autoencoder, an unsupervised representation learning framework, to explain the geometric interpretation of gradients. An autoencoder consists of an encoder, fθ, and a decoder, gφ, as shown in Fig. 2 (a). From an input image, x, the latent variable, z, is generated as z = fθ(x) and the reconstructed image is obtained by feeding the latent variable into the decoder, gφ(fθ(x)). The training is performed by minimizing the loss function, J(x; θ, φ), defined as follows:
J(x; θ, φ) = L(x, gφ(fθ(x))) + Ω(z; θ, φ), (1)
where L is a reconstruction error which measures the dissimilarity between the input and the reconstructed image and Ω is a regularization term for the latent variable.
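As a concrete illustration, a minimal PyTorch sketch of such an autoencoder is given below; the layer widths and the use of the mean squared error for L are illustrative assumptions, and Ω is zero for a plain CAE:

```python
from torch import nn

class ConvAutoencoder(nn.Module):
    """z = f_theta(x); x_hat = g_phi(z). Widths are illustrative only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                     # f_theta
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                     # g_phi
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_loss(x, x_hat):
    # The reconstruction error L in (1); Omega(z) = 0 for a plain CAE.
    return nn.functional.mse_loss(x_hat, x)
```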
We visualize the geometric interpretation of backpropagated gradients in Fig. 2 (b). The autoencoder is trained to accurately reconstruct training images, and the reconstructed training images form a manifold. For simplicity of explanation, we assume that the structure of the manifold is a linear plane spanned by the weights of the encoder and the decoder, as shown in the figure. We explain the generalization of this concept in the following section. During test time, any given input to the autoencoder is projected onto the reconstructed image manifold through the projection, gφ(fθ(·)). The closer test images are to the reconstructed image manifold, the more accurately they are reconstructed. Let us assume that the test data distribution lies outside of the reconstructed image manifold. When a test image, xout, sampled from the test data distribution is given to the autoencoder, it will be reconstructed as x̂out by the projection, gφ(fθ(xout)). Since test images from the test data distribution have not been utilized for training, they will be poorly reconstructed. Given that the reconstructed image manifold contains data reconstructed from learned knowledge, the orthogonal gap between xout and x̂out measures the missing knowledge that the autoencoder has not learned and cannot reconstruct. Using the error defined in (1), the gradients of the weights, ∂J/∂θ and ∂J/∂φ, can be calculated through backpropagation. These gradients represent the changes in the reconstructed image manifold required to incorporate the test images and reconstruct them accurately. In other words, these gradients characterize orthogonal variations of the test distribution with respect to the reconstructed image manifold, which is the missing information in the trained networks.
The gradients complement the activation-based representation in characterizing missing information. In this setup of the autoencoder, missing information is characterized by the reconstruction error using activation. It provides distance information between test images and reconstructed images and is utilized as a primal loss to generate gradients. However, it does not provide any directional information about the deviation of the test data distribution. The gradients focus on the directional information to characterize information that has not been learned. Considering that the weights contain knowledge learned from training samples, the weight gradients indicate the directional changes required to obtain new knowledge with respect to the current knowledge of the network. Therefore, the gradients can provide a comprehensive perspective to represent missing knowledge in trained networks by complementing the distance information from activation-based representations with directional information.
4 GENERALIZATION OF THE GEOMETRIC INTERPRETATION FOR GRADIENTS
The gradients require a discriminant capability in high-dimensional space to be an effective representation for missing information in practical scenarios of deep networks. As a representation for information that has not been learned, the representation is expected to clearly differentiate information that has been learned from information that has not. Therefore, gradients from images with learned information should be distinguishable in terms of direction from those generated by images with new information. For instance, the gradient generated from xout in Fig. 2 is orthogonal to the reconstructed image manifold. However, test images with learned information do not result in high reconstruction errors and do not require significant changes in the reconstructed image manifold. In this case, gradients from these test images will be more tangential to the manifold and distinguishable from the gradient of xout. This separation between gradients from learned and not-learned information is required to effectively characterize what is missing and what is not in deep networks.
In addition, gradients from the learned information should be constrained to make the direction of gradients for missing information more distinctive. In Fig. 2, the reconstructed image manifold is a two-dimensional plane in three-dimensional space. The gradients generated by test images with learned information are tangential to the manifold and constrained to the two-dimensional space. These constrained gradients for test images with learned information allow gradients from xout to be more distinctive and to characterize missing information in the network.
We design two experiments to analyze the separation between gradients from learned and missing information in a deep network. We train a convolutional autoencoder (CAE) which consists of a 4-layer encoder and a 4-layer decoder. We use training images from the Airplane class in the CIFAR-10 dataset (Krizhevsky & Hinton, 2009) to train the CAE. The test set contains 1000 images from the Airplane class and the same number of images randomly sampled from all other 9 classes in the test split. The test Airplane class images contain learned knowledge, and images from other classes contain information that has not been learned. In the first experiment, we extract the backpropagated gradients for all test images and measure the average alignment of gradients using cosine similarity (cosSIM). In Fig. 3 (a), we visualize the distribution of the average cosine similarity calculated between the gradient from each Airplane class image and those from all other remaining Airplane class images (Learned), and between each Airplane class image and test images from all other classes (Missing). We calculate the average cosine similarity over all 4 layers of the decoder. Ideally, for gradients to be an effective representation, the cosine similarity between images with learned information should be clearly separable from that calculated between images with learned and missing information. We also calculate the number of samples in the overlapped region of the histograms. A large overlapped region (793 samples out of 1000) between the cosine similarities from ‘Learned’ and ‘Missing’ indicates that gradients in general are not distinguishable in high-dimensional space. However, with our proposed method, which is described in the following section, the number of overlapped samples significantly decreases from 793 to 296, as shown in
Fig. 3 (b). In addition, gradients become more aligned for learned information by achieving high cosine similarity and become orthogonal for missing information. This shows that gradients become more separable for learned and missing information with the proposed approach.
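The per-layer gradient alignment used in this experiment can be sketched as follows; this is our reading of the protocol (flatten each decoder weight gradient and average the cosine similarities over the 4 decoder layers), with hypothetical helper names rather than the authors' code:

```python
import torch
from torch import nn

def decoder_gradients(model, x):
    # Backpropagated weight gradients of the decoder layers for one input x.
    recon = nn.functional.mse_loss(model(x), x)
    weights = [p for p in model.decoder.parameters() if p.dim() > 1]
    grads = torch.autograd.grad(recon, weights)
    return [g.flatten() for g in grads]

def avg_cosine_similarity(grads_a, grads_b):
    # Cosine similarity per decoder layer, averaged over layers.
    sims = [nn.functional.cosine_similarity(a, b, dim=0)
            for a, b in zip(grads_a, grads_b)]
    return torch.stack(sims).mean().item()
```

The ‘Learned’ histogram then averages this similarity between each Airplane image and the other Airplane images, while ‘Missing’ averages it against images from the other nine classes.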
In the second experiment, we analyze the constraint on gradients by measuring the alignment of the gradients generated during the training of the CAE. We train the same CAE architecture described in the first experiment using Airplane class images. To measure the alignment of the training gradients, we calculate the cosine similarity between the gradient of a certain layer $i$ at the $k$-th iteration of training, $\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k}$, and the average of the training gradients of layer $i$ obtained up to the $(k-1)$-th iteration, $\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k-1}_{\mathrm{avg}}$, defined as follows:

$$\mathrm{cosSIM}\!\left(\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k-1}_{\mathrm{avg}},\ \left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k}\right) \quad \text{where} \quad \left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k-1}_{\mathrm{avg}} = \frac{1}{k-1}\sum_{t=1}^{k-1}\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{t}, \qquad (2)$$
where W denotes the weights of the autoencoder, i is the layer index, and superscripts indicate the iteration number. We visualize the progress of the cosine similarity over 1000 epochs of training in Fig. 3 (c). We particularly focus on the cosine similarity in the 4 layers of the decoder. To enhance the distinction between gradients from learned information and missing information, the training gradients are expected to be constrained and aligned at the end of training by achieving high cosine similarity. However, we observe that the cosine similarity converges to 0 at the end of training. This indicates that, even when most of the information in the training images has been learned, the gradients are orthogonal to the average training gradients and remain unconstrained. On the other hand, we show that our proposed method effectively constrains the gradients during training and obtains around 0.55 ∼ 0.85 cosine similarity in Fig. 3 (d). This constraint leads to the clear separability observed in Fig. 3 (b). In the following section, we explain the details of our proposed method.
5 DIRECTIONAL CONSTRAINT FOR GRADIENTS
We propose a directional constraint that enables gradients to characterize missing information in deep networks. In particular, we use the cosine similarity as a measure of the gradient alignment and constrain the direction of gradients in the form of a regularization term in the loss function. At the $k$-th iteration of training, the entire loss function is defined as follows:
$$J(x; W) = \mathcal{L}(x; W) - \alpha\, \mathbb{E}_i\!\left[\mathrm{cosSIM}\!\left(\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k-1}_{\mathrm{avg}},\ \left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k}\right)\right], \qquad (3)$$
where $\mathcal{L}(x; W)$ is the primal loss for the given task and α is the weight for the gradient constraint, i.e., the cosine similarity averaged over the constrained layers. We apply this constraint to a subset of all layers. Also, we set a sufficiently small α value to ensure that the gradients actively explore the optimal weights until the primal loss becomes small enough. During training, $\mathcal{L}(x; W)$ is first calculated from the forward propagation. Through backpropagation, $\left.\frac{\partial \mathcal{L}}{\partial W_i}\right|^{k}$ is obtained without updating the weights. Based on the obtained gradients, the entire loss J is calculated, and finally the weights are updated using the backpropagated gradients from the loss J.
We apply this gradient constraint to the autoencoders to evaluate its performance in characterizing information that has not been learned. We only constrain the gradients from the 4 layers of the decoder, and the training objective becomes:

$$J(x; \theta, \phi) = \mathcal{L}(x, g_\phi(f_\theta(x))) + \Omega(z; \theta, \phi) - \alpha\, \mathbb{E}_{i \in \mathrm{decoder}}\!\left[\mathrm{cosSIM}\!\left(\left.\frac{\partial \mathcal{L}}{\partial \phi_i}\right|^{k-1}_{\mathrm{avg}},\ \left.\frac{\partial \mathcal{L}}{\partial \phi_i}\right|^{k}\right)\right]. \qquad (4)$$
The first and the second terms are the reconstruction error and the latent loss, respectively, and the last term is the gradient constraint.
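To make the procedure concrete, a minimal PyTorch sketch of one training iteration under (4) follows. This is a sketch under our assumptions: the restriction to decoder weight tensors, the running-average bookkeeping, and the use of create_graph to backpropagate through the cosine term are our interpretation, not the authors' released code:

```python
import torch
from torch import nn

def gradcon_step(model, x, optimizer, grad_avgs, k, alpha=0.03):
    # One iteration of training with the directional constraint (Omega = 0).
    weights = [p for p in model.decoder.parameters() if p.dim() > 1]
    recon = nn.functional.mse_loss(model(x), x)         # primal loss L

    # dL/dphi_i at iteration k, obtained without updating the weights;
    # create_graph=True makes the cosine term itself differentiable.
    grads = torch.autograd.grad(recon, weights, create_graph=True)

    if grad_avgs is None:                               # k = 1: nothing to align to yet
        grad_loss = torch.zeros((), device=x.device)
        grad_avgs = [g.detach().clone() for g in grads]
    else:
        sims = [nn.functional.cosine_similarity(g.flatten(), a.flatten(), dim=0)
                for g, a in zip(grads, grad_avgs)]
        grad_loss = torch.stack(sims).mean()            # E_i[cosSIM(...)] over decoder layers
        # Update the running average over iterations 1..k, cf. equation (2).
        grad_avgs = [(a * (k - 1) + g.detach()) / k
                     for a, g in zip(grad_avgs, grads)]

    loss = recon - alpha * grad_loss                    # J in (4)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return grad_avgs
```

The negative cosine term rewards alignment with the running average of past training gradients, which is what produces the constrained training gradients shown in Fig. 3 (d).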
We utilize anomaly detection as a validation framework to both qualitatively and quantitatively evaluate the performance of activation-based and gradient-based representations. In machine learning, abnormal data is defined as data whose classes or attributes have not been learned during training. In the anomaly detection framework, images from one class of a dataset are considered as inliers and used for training. Images from other classes are considered as outliers, and both inliers and outliers are given to the network during test time. Anomaly detection algorithms are expected to correctly classify both inliers and outliers by effectively distinguishing learned knowledge from not-learned knowledge. Most existing works for anomaly detection are based on activation-based representations. In particular, they utilize the reconstruction error or the latent loss as measures for missing information and train probabilistic models on the reconstructed image space or the latent space to characterize anomalies (Zong et al., 2018; Pidhorskyi et al., 2018; Abati et al., 2018).
We perform two sets of experiments to thoroughly validate the effectiveness of the gradient-based representation. In the first experiment, we analyze the capability of activation-based and gradient-based representations in characterizing missing information. To be specific, we evaluate the anomaly detection performance of the proposed gradient loss defined by the gradient constraint along with the reconstruction error and the latent loss, to show the potential of each representation for characterizing missing information. We use an autoencoder with a 4-layer encoder and decoder throughout all experiments. We train CAEs using the mean squared error as the reconstruction error. Also, we train variational autoencoders (VAEs) using the binary cross entropy as the reconstruction error and the Kullback-Leibler (KL) divergence as the latent loss (Kingma & Welling, 2013). For the gradient loss, we train both CAEs and VAEs with the gradient constraint defined as the last term in (4). Finally, we utilize these losses as abnormality scores and report the anomaly detection performance. In the second experiment, we develop an anomaly detection algorithm using the Gradient Constraint (GradCon), which detects anomalies by using the combination of the reconstruction error and the gradient loss as an abnormality score. We compare the performance of the proposed method with other benchmarking and state-of-the-art algorithms and show that GradCon outperforms all compared algorithms in at least one dataset.
We utilize four benchmarking datasets, namely CIFAR-10 (Krizhevsky & Hinton, 2009), CURE-TSR (Temel et al., 2017), MNIST (LeCun et al., 1998), and fashion MNIST (fMNIST) (Xiao et al., 2017), to evaluate the performance of the proposed algorithm. The CIFAR-10 dataset consists of 60,000 color images with 10 classes. The CURE-TSR dataset has 637,560 color traffic sign images which consist of 14 traffic sign types under 5 levels of 12 different challenging conditions. The MNIST dataset contains 70,000 handwritten digit images from 0 to 9, and the fMNIST dataset also has 10 classes of fashion products with 7,000 images per class. For CIFAR-10, CURE-TSR, and MNIST, we follow the protocol described in Perera et al. (2019). We utilize the training and testing split of each dataset to conduct experiments, and 10% of the training images are held out for validation. For fMNIST, we follow the protocol described in Pidhorskyi et al. (2018). The dataset is split into 5 folds; 60% of each class is used for training, 20% for validation, and the remaining 20% for testing. In experiments with CIFAR-10, MNIST, and fMNIST, we use images from
one class as inliers to train the network. During testing, inlier images and the same number of outlier images randomly sampled from the other classes are utilized. For CURE-TSR, challenge-free images are utilized as inliers to train the network. During testing, challenge-free images are utilized as inliers and challenging versions of these images are utilized as outliers. All results are reported using the area under the receiver operating characteristic curve (AUROC), and we also report the F1 score on the fMNIST dataset for a fair comparison with the state-of-the-art method (Pidhorskyi et al., 2018).
6 RESULTS
Comparison between activation-based and gradient-based representations We report the anomaly detection performance based on the gradient loss (Grad) along with the reconstruction error (Recon) and the latent loss (Latent) using CIFAR-10 in Table 1. From this table, we can analyze three aspects of activation and gradient representations. First, by comparing the performance of the CAE and the CAE trained with the gradient constraint (CAE + Grad), we analyze the effect of the gradient constraint in characterizing missing information. The gradient constraint leads to a comparable average AUROC from the reconstruction error and achieves the best performance from the gradient loss with an average AUROC of 0.661. Second, we evaluate the effect of the latent constraint by comparing the CAE and the VAE. The latent loss achieves improved performance compared to the reconstruction error of the CAE. However, the latent constraint imposed in the VAE sacrifices the average AUROC of the reconstruction error by 0.042. Finally, the comparison between the VAE and the VAE with the gradient constraint (VAE + Grad) analyzes the effect of the gradient constraint combined with the activation constraint. The gradient loss in VAE + Grad achieves the second best performance while marginally sacrificing the average AUROC of the reconstruction error and the latent loss. From this experiment, we observe that imposing constraints on the activation can degrade the performance of other activation-based representations in characterizing missing information. However, since gradients are obtained in parallel with activations, the gradient constraint achieves better performance in anomaly characterization by complementing the activation-based representations.
We perform anomaly detection on CURE-TSR to highlight the discriminant capability of the gradient representation for diverse challenging conditions and levels. We compare the performance of CAE and CAE + Grad using the reconstruction error and the gradient loss. The goal of this task is to successfully detect whether a given test image is affected by a challenging condition or not. We report the performance for 8 challenging conditions and 5 levels in Fig. 4. These challenge conditions are decolorization, lens blur, dirty lens, exposure, Gaussian blur, rain, snow, and haze, as visualized in Fig. 4. For all challenging conditions and levels, CAE + Grad achieves the best performance. In particular, except for snow level 1∼3, the gradient loss achieves the best performance, and for snow
level 1∼3, the reconstruction error achieves the best performance. In terms of the average AUROC over challenge levels, the gradient loss of CAE + Grad outperforms the reconstruction error of CAE by the largest margin of 0.612 in rain and the smallest margin of 0.089 in snow. Overall, the gradient representation effectively characterizes challenge information that has not been learned and outperforms the activation-based representation.
Comparison with the state-of-the-art methods GradCon is the combination of the reconstruction error and the gradient loss from a CAE trained with the gradient constraint. We train the CAEs by setting α = 0.03 and Ω(z; θ, φ) = 0 in (4). The same loss used for training is utilized as an anomaly score during testing, but with an increased α of 0.12. We compare the proposed method with other algorithms including OCSVM (Schölkopf et al., 2001), KDE (Bishop, 2006), DAE (Hadsell et al., 2006), VAE (Kingma & Welling, 2013), PixelCNN (Van den Oord et al., 2016), GAN (Schlegl et al., 2017), AND (Abati et al., 2018), AnoGAN (Schlegl et al., 2017), DSVDD (Ruff et al., 2018), and OCGAN (Perera et al., 2019). The AUROC results on CIFAR-10 and MNIST are reported in Table 2 and Table 3, respectively. GradCon achieves the best average AUROC performance in CIFAR-10 while achieving comparable performance to the
best algorithms in MNIST. We note that while other state-of-the-art methods train additional deep networks on top of the latent space or the reconstructed image space to detect anomalies, GradCon is solely based on the gradient loss without training additional networks for anomaly detection. Therefore, the proposed method is computationally efficient compared to other methods. In Fig. 5, we visualize the distribution of the reconstruction error, the latent loss, and the gradient loss for inliers and outliers to further explain the state-of-the-art performance of the proposed method. We visualize each loss for all inliers and outliers from the 10 classes in MNIST and CIFAR-10. In CIFAR-10, which contains color images with a more complicated structure than MNIST, the activation-based losses fail to effectively separate inliers and outliers. On the other hand, the gradient loss still maintains the separation in CIFAR-10. The gradient loss achieves the smallest number of overlapped samples in both MNIST and CIFAR-10, which explains the state-of-the-art performance achieved by GradCon in both datasets. We also evaluate the performance of GradCon in comparison with another state-of-the-art algorithm, denoted as GPND (Pidhorskyi et al., 2018), on fMNIST. In this fMNIST experiment, we change the ratio of outliers in the test set from 10% to 50% and analyze the performance in terms of AUROC and F1 scores. The results on fMNIST are reported in Table 4. The proposed method outperforms GPND for all outlier ratios in terms of AUROC. Except for the test set with a 10% outlier ratio, the proposed method achieves higher F1 scores than GPND.
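For completeness, the corresponding test-time abnormality score can be sketched as follows. The sign convention follows the training objective (high reconstruction error and low gradient alignment give a high score); grad_avgs is assumed to be saved from training, and the helper name is hypothetical:

```python
import torch
from torch import nn

def gradcon_score(model, x, grad_avgs, alpha=0.12):
    # Abnormality score: reconstruction error minus weighted gradient alignment.
    weights = [p for p in model.decoder.parameters() if p.dim() > 1]
    recon = nn.functional.mse_loss(model(x), x)
    grads = torch.autograd.grad(recon, weights)
    sims = [nn.functional.cosine_similarity(g.flatten(), a.flatten(), dim=0)
            for g, a in zip(grads, grad_avgs)]
    return (recon - alpha * torch.stack(sims).mean()).item()

# AUROC can then be computed over a labeled test set, e.g. with scikit-learn:
# from sklearn.metrics import roc_auc_score
# auroc = roc_auc_score(is_outlier, scores)
```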
7 CONCLUSION
We propose a gradient-based representation for characterizing information that deep networks have not learned. We introduce our geometric interpretation of gradients and generalize it to high-dimensional scenarios of deep learning through the proposed directional constraint on gradients. We also thoroughly evaluate the representation capability of gradients compared to that of activations. We validate the effectiveness of gradients in the context of anomaly detection and show that the proposed method based on the gradient representation achieves state-of-the-art performance on four benchmarking datasets. The experimental results show that the directional information of gradients effectively characterizes diverse missing information by complementing the distance information from activations. Also, the gradient-based representation can provide a comprehensive perspective for handling data that cannot be represented by the training data in diverse applications aiming to ensure the robustness of deep networks.
2. What are the strengths and weaknesses of the proposed approach, particularly in its explanation and experimentation?
3. How could the paper improve in terms of clarity and rigor in its presentation?
4. Are there any questions or concerns regarding the implementation and results of the proposed method?
5. How does the reviewer assess the novelty and potential impact of the paper's ideas, especially in anomaly detection tasks? | Review | Review
The authors present an interesting idea: creating representations based on gradients
with respect to the weights to supplement information missing from the training dataset.
The idea is very well motivated and gets the reader excited.
Their explanation is clear at a high level; however, the paper as a whole
lacks rigorous justification. A lot of space is spent reviewing the role of gradients
in training, what reconstruction loss is, and the basics of back-propagation --- however,
many of their own propositions (e.g. why constrain gradients?) are given without reasoning.
The paper could be made clearer by giving details and by changing the flow. For instance,
Section 4 first gives a high-level explanation, then begins discussing specific experiments,
and then returns to the method and provides some details. Additionally, it isn't until page 6
(Section 5) that the authors introduce GradCon, one of the more promising ideas shared, this should
be a focal point in the paper and introduced early (and given more than two sentences for explanation).
The main proposal appears to be a modification of the loss function, but the paper may benefit from
discussing implementation details (for example, during training vs. testing).
Finally, the Figures (2 and 3 in particular) are not clear and need more explanation given in
the captions. Figure 3a and 3b tell an interesting story, but they're not easily digestable, nor is
the key take-away clear to the reader just by looking at the figure and caption.
The experimental results look reasonable and thorough, however the methods are sold on the
idea of better representations for data missing from the training set, whereas the results
are focused on anomaly detection. The method looks particularly promising at anomaly detection
tasks --- the authors may have a more clear paper if they focus on this aspect.
Overall, the paper presents a promising idea but it needs a more clear and rigorous presentation. |
ICLR | Title
Characterizing Missing Information in Deep Networks Using Backpropagated Gradients
Abstract
Deep networks face challenges of ensuring their robustness against inputs that cannot be effectively represented by information learned from training data. We attribute this vulnerability to the limitations inherent to activation-based representation. To complement the learned information from activation-based representation, we propose utilizing a gradient-based representation that explicitly focuses on missing information. In addition, we propose a directional constraint on the gradients as an objective during training to improve the characterization of missing information. To validate the effectiveness of the proposed approach, we compare the anomaly detection performance of gradient-based and activationbased representations. We show that the gradient-based representation outperforms the activation-based representation by 0.093 in CIFAR-10 and 0.361 in CURE-TSR datasets in terms of AUROC averaged over all classes. Also, we propose an anomaly detection algorithm that uses the gradient-based representation, denoted as GradCon, and validate its performance on three benchmarking datasets. The proposed method outperforms the majority of the state-of-the-art algorithms in CIFAR-10, MNIST, and fMNIST datasets with an average AUROC of 0.664, 0.973, and 0.934, respectively.
1 INTRODUCTION
The generalizable representation of data from deep network has largely contributed to the success of deep learning in diverse applications (Bengio et al., 2013). The representation from deep networks is often obtained in the form of activation. The activation is constructed by the weights which contain specific knowledge learned from training samples. Recent studies reveal that deep networks still face robustness issues when input that cannot be properly represented by learned knowledge is given to the networks (Goodfellow et al., 2014; Hendrycks & Dietterich, 2018; Liang et al., 2017). One of the reasons for the vulnerability of deep networks is the limitation in the activation-based representation, which inherently focused on the learned knowledge. However, the part of the input that causes problems in deep networks is mainly from the information that deep networks were not able to learn from the training data. Therefore, it is more appropriate to complement the representation of input data from the perspective of information that has not been learned for enhancing the robustness of machine learning algorithms.
The gradient is another fundamental element in deep networks that is utilized to learn new information from given inputs by updating model weights (Goodfellow et al., 2016). It is generated through backpropagation to train deep networks by minimizing designed loss functions (Rumelhart et al., 1986). During the training of network, the gradient with respect to the weights provides directional information to update the deep network and learn a better representation for the inputs. In other words, gradients guide the network to learn new information that was not learned from data that it has seen so far but is presented in the current input. Considering this role during training, gradients can provide a complementary perspective with respect to activation and characterize missing information that the network has not learned for each unseen image.
We demonstrate the role of gradients with an example in Fig. 1. Assume that a deep network has only learned curved edge features from training images of the digit ‘0’. During testing, the digit ‘6’ is given to the network. The digit ‘6’ consists of both learned information (curved edges) and missing information (straight edges on top). Since the activation-based representation is constructed
based on the information that the network has already learned, the curved part of the digit ‘6’ will be characterized effectively by the activation. However, the network still has to learn the straight edge features to perform successfully on the digit ‘6’. Therefore, the gradients which guide updates in the deep network can characterize straight edge information that has not been learned.
We propose analyzing the representation capability of gradients in characterizing missing information for deep networks. Gradients have been utilized in diverse applications such as adversarial attack generation and visualization (Zeiler & Fergus, 2014; Goodfellow et al., 2014). However, using gradients with respect to weight as the representation of data has not been actively explored yet. Through the comprehensive analysis with activation-based representations, we show the effectiveness of
gradient representation in characterizing the information that has not been learned for deep network. Furthermore, we show that gradient representation can achieve state-of-the-art performance in detecting potentially invalid data for the network. The main contributions of this paper are three folds:
i We propose utilizing gradients as a representation to characterize information that has not been learned from the training data but is currently presented in the input data.
ii We analyze the representation capability of gradient compared to activation for detecting samples which possess features that have not been learned for the network.
iii We propose a gradient-based anomaly detection algorithm that outperforms state-of-the-art algorithms based on activation representations.
2 RELATED WORKS
Existing works have focused on achieving reliable activation-based representations to properly handle inputs that can cause significant performance degradation to deep networks. One of the most intuitive ways is to enhance the representation capability of activation by finetuning trained models and learning more from augmented data. Goodfellow et al. (2014); Vasiljevic et al. (2016); Temel et al. (2017) utilize adversarial images, blurred images, and distorted virtual images, respectively, to finetune networks to improve the classification performance based on the activation features. Also, several pre-processing techniques have been developed to make the representation of data similar to that of training data to achieve the robustness of algorithms. Discrete cosine transform (DCT) has been explored as a simple pre-processing technique to eliminate the effect of distortion and adversarial attacks (Hossain et al., 2018; Das et al., 2018). On the other hand, methods developed for detecting and filtering out problematic samples have focused on making the representation of such samples as distinguishable as possible from that of training images in the activation space. Liang et al. (2017) propose a gradient-based pre-processing technique to make the activation-based representation of adversarial images and out-of-distribution images dissimilar to that of training images. Furthermore, constrained activation representations have been actively utilized to make normal and abnormal images statistically separable in the activation space for abnormal data detection (Pidhorskyi et al., 2018; Perera et al., 2019; Abati et al., 2018). Aforementioned works exclusively focus on activation-based representations which are based on learned information to handle the information that has not been learned. We complement these by proposing backpropagated gradients-based representation of data which particularly focuses on what has not been learned in the trained network.
The backpropagated gradients have been utilized in diverse applications including but not limited to visualization, adversarial attacks, and image classification. The backpropagated gradients have been widely used for the visualization of deep networks. In Zeiler & Fergus (2014); Springenberg et al. (2014), information that networks have learned for a specific target class is mapped back to the pixel space through the backpropagation and visualized. Selvaraju et al. (2017) utilize the
gradient with respect to activation to weight the activation and visualize the reasoning for prediction that deep networks have made. Adversarial attack is another application of gradients. Goodfellow et al. (2014); Kurakin et al. (2016) show that adversarial attacks can be generated by adding an imperceptibly small vector which is the signum of input gradients. In Kwon et al. (2019), the backpropagated gradients are utilized to classify and objectively estimate the quality of distorted images. Several works have incorporated gradients with respect to input in the form of regularization during the training of deep networks to improve the robustness (Drucker & Le Cun, 1991; Ross & Doshi-Velez, 2018; Sokolić et al., 2017). Although existing works have shown that gradients with respect to input or activation can be useful for diverse applications, gradients with respect to weight remain almost unexplored aside from its role in training deep networks. In the following section, we demonstrate the interpretation of weight gradients to further elaborate the role of weight gradients as data representation.
3 GEOMETRIC INTERPRETATION OF GRADIENTS
We use an autoencoder which is an unsupervised representation learning framework to explain the geometric interpretation of gradients. An autoencoder consists of an encoder, fθ, and a decoder, gφ as shown in Fig. 2 (a). From an input image, x, the latent variable, z, is generated as z = fθ(x) and the reconstructed image is obtained by feeding the latent variable into the decoder, gφ(fθ(x)). The training is performed by minimizing the loss function, J(x; θ, φ), defined as follows:
J(x; θ, φ) = L(x, gφ(fθ(x))) + Ω(z; θ, φ), (1)
where L is a reconstruction error which measures the dissimilarity between the input and the reconstructed image and Ω is a regularization term for the latent variable.
We visualize the geometric interpretation of backpropagated gradients in Fig. 2 (b). The autoencoder is trained to accurately reconstruct training images and the reconstructed training images form a manifold. We assume that the structure of the manifold is a linear plane which is spanned by the weights of the encoder and the decoder as shown in the figure for the simplicity of explanation. Also, we explains the generalization of this concept in the following section. During test time, any given input to the autoencoder is projected onto the reconstructed image manifold through the projection, gφ(fθ(·)). The closer test images are to the reconstructed image manifold, the more accurately reconstructed they are. Let us assume that test data distribution is outside of the reconstructed image manifold. When the test image, xout, sampled from the test data distribution is given to the autoencoder, it will be reconstructed as x̂out by the projection, gφ(fθ(xout)). Since test images from the test data distribution have not been utilized for training, they will be poorly reconstructed. Given that the reconstructed image manifold contains data reconstructed by learned knowledge, the orthogonal gap between xout and x̂out measures the missing knowledge that the autoencoder has not learned and cannot reconstruct. Using the error defined in (1), the gradient of weights, ∂J∂θ , ∂J ∂φ , can be calculated through the backpropagation. These gradients represent required changes in the reconstructed image manifold to incorporate the test images and reconstruct them accurately. In other words, these gradients characterize orthogonal variations of the test distribution with respect to the reconstructed image manifold, which is missing information in the trained networks.
The gradients complement activation-based representation in characterizing missing information. In this setup of the autoencoder, missing information is characterized by the reconstruction error using activation. It provides distance information between test images and reconstructed images and is utilized as a primal loss to generate gradients. However, it does not provide any directional informa-
tion about the deviation of test data distribution. The gradients focus on the directional information to characterize information that has not be learned. Considering that weights contain learned knowledge from training samples, the weight gradients indicate the directional changes required to obtain new knowledge with respect to current knowledge of networks. Therefore, the gradients can provide a comprehensive perspective to represent missing knowledge in trained networks by complementing distance information from activation-based representations with the directional information.
4 GENERALIZATION OF THE GEOMETRIC INTERPRETATION FOR GRADIENTS
The gradients require a discriminant capability in high dimensional space to be an effective representation for missing information in practical scenarios of deep networks. As a representation for the information that has not been learned, the representation is expected to clearly differentiate information that has been learned and has not been learned from the input. Therefore, gradients from images with learned information should be distinguishable in terms of direction from those generated by images with new information. For instance, the gradient generated from xout in Fig. 2 is orthogonal to the reconstructed image manifold. However, test images with learned information do not result in high reconstruction errors and do not require significant changes in the reconstructed image manifold. In this case, gradients from these test images will be more tangential to the manifold and distinguishable from the gradient of xout. This separation between gradients from learned and not learned information is required to effectively characterize what is missing and not in deep networks.
In addition, gradients from the learned information should be constrained to make the direction of gradients for missing information more distinctive. In Fig. 2, the reconstructed image manifold is a two dimensional plane in three dimensional space. The gradients generated by test images with learned information are tangential to the manifold and constrained to the two dimensional space. These constrained gradients for test images with learned information allow gradients from xout to be more distinctive and characterize missing information in the network.
We design two experiments to analyze the separation between gradients from learned and missing information in the deep network. We train a convolutional autoencoder (CAE) which consists of 4-layer encoder and decoder, respectively. We use training images from Airplane class in CIFAR-10 dataset (Krizhevsky & Hinton, 2009) to train the CAE. A test set contains 1000 images from Airplane class and the same number of images randomly sampled from all other 9 classes in test split. The test Airplane class images contain learned knowledge and images from other classes contain information that has not been learned. In the first experiment, we extract the backpropagated gradients for all test images and measure the average alignment of gradients using cosine similarity (cosSIM). In Fig. 3 (a), we visualize the distribution of the average cosine similarity calculated between the gradient from each Airplane class image and that from all other remaining Airplane class images (Learned), and between each Airplane class image and test images from all other classes (Missing). We calculate the average cosine similarity over all 4 layers in decoder. Ideally for gradients to be an effective representation, cosine similarity between images with learned information should be clearly separable to that calculated between images with learned and missing information. We also calculated the number of samples in the overlapped region of histograms. A large overlapped region (793 samples out of 1000 samples) between cosine similarity from ‘Learned’ and ’Missing’ indicates that gradients in general cannot be distinguishable in high dimensional space. However, with our proposed method which will be described in the following section, the number of overlapped samples significantly decreases from 793 to 296 as shown in
Fig. 3 (b). In addition, gradients become more aligned for learned information by achieving high cosine similarity and become orthogonal for missing information. This shows that gradients become more separable for learned and missing information with the proposed approach.
In the second experiment, we analyze the constraint on gradients by measuring the alignment of them generated while the training of CAE. We train the same architecture of the CAE described in the first experiment using Airplane class images. To measure the alignment of training gradients, we calculate the cosine similarity between the gradients of a certain layer i at the k th iteration of training, ∂L∂Wi k , and the average of training gradients of layer i obtained until the k − 1 th iteration,
∂L ∂Wi k−1 avg , defined as follows:
cosSIM ( ∂L ∂Wi k−1 avg , ∂L ∂Wi k ) where ∂L ∂Wi k−1 avg = 1 (k − 1) k−1∑ t=1 ∂L ∂Wi t , (2)
W is the weight of autoencoder, i is the layer index, and superscripts are used to indicate the iteration number. We visualize the progress of the cosine similarity while training over 1000 epochs in Fig. 3 (c). We particularly focus on the cosine similarity in the 4 layers of decoder. To enhance the distinction between gradients from learned information and missing information, training gradients are expected to be constrained and aligned at the end of training by achieving high cosine similarity. However, we observe that the cosine similarity converges to 0 at the end of training. This indicates that even when most of information in training images is learned, the gradients is orthogonal to the average training gradients and remain unconstrained. On the other hand, we show our proposed method effectively constrains the gradients while training and obtain around 0.55 ∼ 0.85 cosine similarity in Fig. 3 (d). This constraint leads to the clear separability observed in Fig. 3 (b). In following section, we explain the details of our proposed method.
5 DIRECTIONAL CONSTRAINT FOR GRADIENTS
We propose a directional constraint that enables gradients to characterize missing information in deep networks. In particular, we use the cosine similarity as a measure of the gradient alignment and constrain the direction of gradients in the form of regularization term in a loss function. At k th iteration of training, the entire loss function is defined as follows:
J(x;W ) = L (x;W )− αE i
[ cosSIM ( ∂L ∂Wi k−1 avg , ∂L ∂Wi k )] , (3)
where L (x;W ) is the primal loss for a given task and α is the weight for the gradient constraint, which is the average of cosine similarity calculated for each layer. We apply this constraint on a subset of all layers. Also, we set sufficiently small α value to ensure that gradients actively explore the optimal weights until the primal loss becomes small enough. During the training, L (x;W ) is first calculated from the forward propagation. Through the backpropagation, ∂L∂Wi k is obtained without updating the weights. Based on the obtained gradient, the entire loss J is calculated and finally the weights are updated using backpropagated gradients from the loss J .
We apply this gradient constraint to the autoencoders to evaluate its performance in characterizing information that has not been learned. We only constrain the gradients from 4 layers in decoder and
J(x; θ, φ) = L(x, gφ(fθ(x))) + Ω(z; θ, φ)− α E i∈decoder
[
cosSIM
(
∂L ∂φi k−1
avg
, ∂L ∂φi
k
)]
. (4)
The first and the second terms are the reconstruction error and the latent loss, respectively, and the last term is the gradient constraint.
We utilize anomaly detection as a validation framework to both qualitatively and quantitatively evaluate the performance of activation-based and gradient-based representations. In machine learning, the abnormal data is defined as data whose classes or attributes have not been learned during training. In the anomaly detection framework, images from one class of a dataset is considered as inliers and used for the training. Images from other classes are considered as outliers and both inliers and outliers are given to the network during the test time. The anomaly detection algorithms are expected to correctly classify both inliers and outliers by effectively distinguishing learned knowledge and not learned knowledge. Most of existing works for anomaly detection are based on the activationbased representations. In particular, they utilize reconstruction error or latent loss as measures for missing information and train probabilistic models on reconstructed image space or latent space to characterize anomalies (Zong et al., 2018; Pidhorskyi et al., 2018; Abati et al., 2018).
We perform two sets of experiments to thoroughly validate the effectiveness of the gradient-based representation. In the first experiment, we analyze the capability of activation-based and gradientbased representations in characterizing missing information. To be specific, we evaluate the anomaly detection performance of the proposed gradient loss defined by the gradient constraint along with the reconstruction error and the latent loss to show the potential in each representations for characterizing missing information. We use an autoencoder with 4-layer encoder and decoder throughout whole experiments. We train CAEs using mean squared error as the reconstruction error. Also, we train variational autoencoders (VAEs) using binary cross entropy as the reconstruction error and Kullback Leibler (KL) divergence as the latent loss (Kingma & Welling, 2013). For the gradient loss, we train both CAEs and VAEs with the gradient constraint defined as the last term in (4). Finally, we utilize these losses as abnormality scores and report the anomaly detection performance. In the second experiment, we develop an anomaly detection algorithm using Gradient Constraint (GradCon) which detects anomalies by using the combination of reconstruction error and gradient loss as an abnormality score. We compare the performance of the proposed method with other benchmarking and state-of-the-art algorithms and show that GradCon outperforms all compared algorithms at least in one dataset.
We utilize four benchmarking datasets, CIFAR-10 (Krizhevsky & Hinton, 2009), CURE-TSR (Temel et al., 2017), MNIST (LeCun et al., 1998), and fashion MNIST (fMNIST) (Xiao et al., 2017), to evaluate the performance of the proposed algorithm. The CIFAR-10 dataset consists of 60,000 color images with 10 classes. The CURE-TSR dataset has 637,560 color traffic sign images, which consist of 14 traffic sign types under 5 levels of 12 different challenging conditions. The MNIST dataset contains 70,000 handwritten digit images from 0 to 9, and the fMNIST dataset also has 10 classes of fashion products with 7,000 images per class. For CIFAR-10, CURE-TSR, and MNIST, we follow the protocol described in Perera et al. (2019). We utilize the training and testing split of each dataset to conduct experiments, and 10% of the training images are held out for validation. For fMNIST, we follow the protocol described in Pidhorskyi et al. (2018): the dataset is split into 5 folds, and 60% of each class is used for training, 20% for validation, and the remaining 20% for testing. In experiments with CIFAR-10, MNIST, and fMNIST, we use images from one class as inliers to train the network. During testing, inlier images and the same number of outlier images randomly sampled from other classes are utilized. For CURE-TSR, challenge-free images are utilized as inliers to train the network. During testing, challenge-free images are utilized as inliers and the challenging versions of these images are utilized as outliers. All results are reported using the area under the receiver operating characteristic curve (AUROC), and we also report the F1 score on the fMNIST dataset for a fair comparison with the state-of-the-art method (Pidhorskyi et al., 2018).
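For reference, the evaluation reduces to computing AUROC over abnormality scores, as in the following sketch (assuming outliers receive higher scores):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate_auroc(inlier_scores, outlier_scores):
    # Inliers are labeled 0 and outliers 1; any abnormality score can be used
    # (reconstruction error, latent loss, or gradient loss).
    labels = np.concatenate([np.zeros(len(inlier_scores)),
                             np.ones(len(outlier_scores))])
    scores = np.concatenate([inlier_scores, outlier_scores])
    return roc_auc_score(labels, scores)
```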
6 RESULTS
Comparison between activation-based and gradient-based representations. We report the anomaly detection performance based on the gradient loss (Grad), along with the reconstruction error (Recon) and the latent loss (Latent), on CIFAR-10 in Table 1. From this table, we can analyze three aspects of activation and gradient representations. First, by comparing the performance of CAE and CAE trained with the gradient constraint (CAE + Grad), we analyze the effect of the gradient constraint in characterizing missing information. The gradient constraint leads to a comparable average AUROC for the reconstruction error and achieves the best performance with the gradient loss, with an average AUROC of 0.661. Second, we evaluate the effect of the latent constraint by comparing CAE and VAE. The latent loss achieves improved performance compared to the reconstruction error of the CAE. However, the latent constraint imposed in the VAE reduces the average AUROC of the reconstruction error by 0.042. Finally, the comparison between VAE and VAE with the gradient constraint (VAE + Grad) analyzes the effect of the gradient constraint combined with the activation constraint. The gradient loss in VAE + Grad achieves the second best performance while only marginally sacrificing the average AUROC of the reconstruction error and the latent loss. From this experiment, we observe that imposing constraints on activations can degrade the performance of other activation-based representations in characterizing missing information. However, since gradients are obtained in parallel with activations, the gradient constraint achieves better performance in anomaly characterization by complementing the activation-based representations.
We perform anomaly detection on CURE-TSR to highlight the discriminative capability of the gradient representation across diverse challenging conditions and levels. We compare the performance of CAE and CAE + Grad using the reconstruction error and the gradient loss. The goal of this task is to successfully detect whether a given test image is affected by a challenging condition or not. We report the performance for 8 challenging conditions and 5 levels in Fig. 4. These challenge conditions are decolorization, lens blur, dirty lens, exposure, Gaussian blur, rain, snow, and haze, as visualized in Fig. 4. For all challenging conditions and levels, CAE + Grad achieves the best performance. In particular, the gradient loss achieves the best performance everywhere except for snow levels 1∼3, where the reconstruction error achieves the best performance. In terms of the average AUROC over challenge levels, the gradient loss of CAE + Grad outperforms the reconstruction error of CAE by the largest margin of 0.612 in rain and the smallest margin of 0.089 in snow. Overall, the gradient representation effectively characterizes challenge information that has not been learned and outperforms the activation-based representation.
Comparison with the state-of-the-art methods. GradCon is the combination of the reconstruction error and the gradient loss from a CAE trained with the gradient constraint. We train CAEs by setting α = 0.03 and Ω(z; θ, φ) = 0 in (4). The loss used for training also serves as the anomaly score during testing, but with α increased to 0.12. We compare the proposed method with other algorithms, including OCSVM (Schölkopf et al., 2001), KDE (Bishop, 2006), DAE (Hadsell et al., 2006), VAE (Kingma & Welling, 2013), PixelCNN (Van den Oord et al., 2016), GAN (Schlegl et al., 2017), AND (Abati et al., 2018), AnoGAN (Schlegl et al., 2017), DSVDD (Ruff et al., 2018), and OCGAN (Perera et al., 2019). The AUROC results on CIFAR-10 and MNIST are reported in Table 2 and Table 3, respectively. GradCon achieves the best average AUROC performance on CIFAR-10 while achieving performance comparable to the best algorithms on MNIST. We note that while other state-of-the-art methods train additional deep networks on top of the latent space or the reconstructed image space to detect anomalies, GradCon is solely based on the gradient loss without training additional networks for anomaly detection. Therefore, the proposed method is computationally efficient compared to other methods. In Fig. 5, we visualize the distributions of the reconstruction error, the latent loss, and the gradient loss for inliers and outliers to further explain the state-of-the-art performance of the proposed method. We visualize each loss for all inliers and outliers from the 10 classes of MNIST and CIFAR-10. On CIFAR-10, which contains color images with more complicated structure than MNIST, the activation-based losses fail to effectively separate inliers and outliers. On the other hand, the gradient loss still maintains the separation on CIFAR-10. The gradient loss achieves the smallest number of overlapped samples on both MNIST and CIFAR-10, which explains the state-of-the-art performance achieved by GradCon on both datasets. We also evaluate the performance of GradCon in comparison with another state-of-the-art algorithm, denoted as GPND (Pidhorskyi et al., 2018), on fMNIST. In this fMNIST experiment, we change the ratio of outliers in the test set from 10% to 50% and analyze the performance in terms of AUROC and F1 scores. The results on fMNIST are reported in Table 4. The proposed method outperforms GPND at all outlier ratios in terms of AUROC. Except for the test set with a 10% outlier ratio, the proposed method also achieves higher F1 scores than GPND.
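For concreteness, a sketch of the GradCon abnormality score at test time is given below; `cae.decoder` is a hypothetical attribute holding the constrained layers, and combining the two terms as a plain weighted sum with α = 0.12 is our reading of the description above.

```python
import torch
import torch.nn.functional as F

def gradcon_score(x, cae, grad_avg, alpha=0.12):
    # Score = reconstruction error + alpha * gradient loss, where the gradient
    # loss is the negative cosine similarity between the test-time decoder
    # gradients and the averaged training gradients.
    recon = F.mse_loss(cae(x), x)
    grads = torch.autograd.grad(recon, list(cae.decoder.parameters()))
    cos = torch.stack([F.cosine_similarity(g.flatten(), a.flatten(), dim=0)
                       for g, a in zip(grads, grad_avg)])
    return (recon - alpha * cos.mean()).item()  # higher means more anomalous
```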
7 CONCLUSION
We propose a gradient-based representation for characterizing information that deep networks have not learned. We introduce our geometric interpretation of gradients and generalize it to the high-dimensional scenarios of deep learning through the proposed directional constraint on gradients. We also thoroughly evaluate the representation capability of gradients compared to that of activations. We validate the effectiveness of gradients in the context of anomaly detection and show that the proposed method based on the gradient representation achieves state-of-the-art performance on four benchmarking datasets. The experimental results show that the directional information of gradients effectively characterizes diverse missing information by complementing the distance information from activations. Also, the gradient-based representation can provide a comprehensive perspective for handling data that cannot be represented by the training data in diverse applications aiming to ensure the robustness of deep networks.
| 1. What is the specific contribution of the paper, and how does it differ from prior works?
2. What is the missing information that the paper claims cannot be encoded during training, and how does taking the gradient of an example with respect to model parameters help capture this information?
3. How does the proposed regularization term in (3) compare to other existing strategies like gradients with momentum or Adam?
4. How does one efficiently compute the second-order derivatives required in the objective function of (4)?
5. Why is only the decoder of the AE used for constraint gradient, and what is the reason behind this choice?
6. Can the proposed method be applied to more general tasks like image classification, and how would its performance compare to baselines?
7. Are the differences in performance between the proposed approach and baselines statistically significant, especially in Tables 2 and 3?
| Review | Review
[Summary]
The paper proposes to use gradient-based representations in deep neural networks to capture missing information unavailable in the activation-based network representation. It is claimed that the missing information that could not be encoded during learning from the training set can be revealed by taking the gradient of an example with respect to the model parameters. Based on this idea, a new learning algorithm is proposed that combines the conventional loss for the activation-based representation with a gradient-based regularization. As an illustration, the method is evaluated on anomaly detection benchmarks with the gradient-based representation as part of the anomaly inference.
[Comments]
I’m not sure I fully understand the claimed contribution and how it is justified, either theoretically or empirically. It seems to me that the paper claims that the current activation-based network does not fully encode information from the training set during learning, and that taking the gradient over a novel example (either training or test) with respect to (w.r.t.) the model parameters can encode the missing information. I don't quite follow a few things, and had a difficult time interpreting the novelty here.
- What is the missing information (which a model should learn but fails to capture from the training set) exactly? The example in the introduction uses digit 0 for training and digit 6 for testing and claims the vertical edge in 6 is missing. But isn't that expected if the model is trained with digit 0s only? How would one expect the model to learn the vertical edge if there is none in the training data?
- More critically, the idea of taking the gradient of a model w.r.t. its parameters over an example to encode the geometric relationship of the example in the data manifold is by no means novel at the conceptual level. There have been a number of similar classical methods derived from the perspective of information geometry. For instance, Fisher kernels and Fisher vectors have been well studied for more than a decade, with both theoretical treatment (e.g., "Exploiting Generative Models in Discriminative Classifiers" in NIPS 1998) and applications across multiple areas (e.g., "Fisher Kernels on Visual Vocabularies for Image Categorization" in CVPR 2007). I don't see any novelty here in doing this with a new model (deep neural networks), and it is frustrating to see classical methods being "invented" over and over again by decorating them with fashionable wrappers without any reference to their origin in the literature.
- Besides, to use gradients to capture missing information, the paper proposes to append a regularization term that enforces the gradient at a particular learning iteration to be close to the mean of those of previous iterations (via cosine similarity), as in (3). This does not seem very different from many existing popular inertia-based strategies, such as gradients with momentum, Adam, etc. It needs to be elucidated how the proposed optimizer compares to these benchmark methods.
In terms of implementation, I also found some details missing. For instance, the objective function of (4) requires the evaluation of second-order derivatives (since the second term in the cosSIM involves dL/dPhi(x, Phi)). How to efficiently compute this term is not clear to me. It is also mentioned that only the decoder of the AE is used for the gradient constraint; the reason for this is also not explained.
For evaluation, only anomaly detection examples are provided. It would be great if more general tasks like image classification could be studied. After all, the proposed method is claimed to be a fairly general framework (the introduction actually uses image classification as the example), and evaluation on the most popular benchmarks provides the most convincing justification. Even on the anomaly detection tasks, the proposed approach does not seem to have solid advantages over the baselines. I'm not sure how significant the differences of 0.007 between GradCon and OCGAN (Tables 2 and 3) are, nor those of less than 1% in Table 4.
Overall, the paper does not seem to have a clear and well-defined motivation, the contribution is vague and trivial compared to the literature, and the empirical justification is not very convincing. Thus I do not think it is ready for publication at ICLR.
ICLR | Title
Characterizing Missing Information in Deep Networks Using Backpropagated Gradients
| 1. What is the main contribution of the paper regarding feature representation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its presentation and development phase?
3. Do you have any questions or concerns about the paper's idea, such as the use of gradients, Jacobians, or the choice of regularizer?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any analogies or connections between the proposed method and other related works in parametric statistics, such as Fisher score vectors?
| Review | Review
The authors consider gradients of some loss as a feature representation. The intro starts by describing a form of classification loss, but this is switched to an autoencoder (AE) unsupervised loss in Eq. 1. In that section the authors somewhat show that a naive gradient representation does not work well, but at the same time show that it /may/ work with their method, introduced right after, which consists of an "orthogonality" regularizer, introduced with an arbitrary loss later in Eq. 3, and back to the AE in Eq. 4. The authors evaluate whether this regularizer hurts the reconstruction error.
I have found this paper interesting, but confusingly presented and still in a development phase (i.e., a good idea, but not properly exploited yet). The premise on which the paper builds is that gradient information is more relevant than activations. Yet, this is only shown to be true if that gradient information is "bent" to follow a given orthogonality constraint in a fairly heuristic way (Eq. 4). Many choices are made under the hood:
"We train CAEs by setting α = 0.03 and Ω(z; θ, φ) = 0 in (4). This loss utilized for training is utilized as an anomaly score during testing but with the increased α of 0.12"
The sentence above is quite mysterious. The framework is tested on an anomaly detection task, which is a bit underwhelming, and, to be frank, the results in Tables 3 or 4 (MNIST / CIFAR-10) are not particularly convincing, especially when one factors in the fact that α has been strangely changed.
Because of this, I feel that overall the paper starts with a nice idea but has not yet reached the state in which it deserves to be published.
other comments:
- I think it would help to formalize from the introduction what kind of gradient you are specifically referring to. Is it the gradient of the XE loss of the classifier w.r.t. an arbitrary label? Although this is clarified later in the paper, the introduction is not informative enough (Fig. 1 is nice, but a proper statement is needed). In particular, gradients are presented as an alternative to Jacobians, but shouldn't it be some form of Jacobian (or a collection of gradients)?
- in Eq. 1, why use *both* z and f_theta(x) in the same line, since they are equal?
- Around Eq. 1, "The training is performed by minimizing the loss function, J(x; θ, φ), defined as follows". This is inaccurate; J is not minimized to compute theta and phi: the sum over J(x_i; ...) is minimized. J is just a per-example loss. You minimize the expectation of J.
- "can be calculated through the backpropagation" --> remove "the"
- Your narrative at the bottom of p. 3, starting from "We visualize the", is a bit flawed: one could imagine an example where x_out ~= \hat{x}_out, i.e., for which the loss L is small but the regularizer is large, or conversely one in which J is dominated by a small regularizer loss but the loss L is large. So associating the size of the gradient of J with the departure from the manifold is misleading. It seems, however, that this is corrected later on p. 5, where you allude only to d \mathcal{L}.
- aren't there analogies with these gradients and Fisher score vectors in parametric stats? i.e. d log p_theta(x) / d theta ?
- Add a more detailed caption to Fig. 3 describing what is going on, or at the very least give a direct reference to the section/location in which the experiments are described.
- "with our proposed method which will be described in the following section" : I would avoid this kind of "movie" narrative which tries to create a "cinematic buildup". Stick to scientific order, i.e. describe first method, test and experiment with it next.
- "is the primal loss for a given task" --> supervised or unsupervised task? |
ICLR | Title
Equivariant Energy-Guided SDE for Inverse Molecular Design
Abstract
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties. In this paper, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. Empirically, under the guidance of designed energy functions, EEGSDE significantly improves the baseline on QM9, in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
1 INTRODUCTION
The discovery of new molecules with desired properties is critical in many fields, such as drug and material design (Hajduk & Greer, 2007; Mandal et al., 2009; Kang et al., 2006; Pyzer-Knapp et al., 2015). However, brute-force search in the overwhelmingly large molecular space is extremely challenging. Recently, inverse molecular design (Zunger, 2018) has provided an efficient way to explore the molecular space by directly predicting promising molecules that exhibit desired properties.
A natural approach to inverse molecular design is to train a conditional generative model (Sanchez-Lengeling & Aspuru-Guzik, 2018). Formally, it learns a distribution of molecules conditioned on certain properties from data, and new molecules are predicted by sampling from the distribution with the condition set to the desired properties. Among such models, equivariant diffusion models (EDM) (Hoogeboom et al., 2022) leverage the current state-of-the-art diffusion models (Ho et al., 2020), which involve a forward process to perturb data and a reverse process to generate 3D molecules conditionally or unconditionally. While EDM generates stable and valid 3D molecules, we argue that a single conditional generative model is insufficient for generating accurate molecules that exhibit desired properties (see Table 1 and Table 3 for an empirical verification).
In this work, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE formalizes the generation process as an equivariant stochastic differential equation, and plugs in energy functions to improve the controllability of generation. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations.
We apply EEGSDE to various applications by carefully designing task-specific energy functions. When targeted to quantum properties, EEGSDE is able to generate more accurate molecules than EDM, e.g., reducing the mean absolute error by more than 30% on the dipole moment property. When targeted to specific molecular structures, EEGSDE captures the structure information in molecules better than EDM, e.g., improving the similarity to target structures by more than 10%. Furthermore, EEGSDE is able to generate molecules targeted to multiple properties by combining the corresponding energy functions linearly. This demonstrates that EEGSDE enables flexible and controllable generation of molecules, providing a smart way to explore the chemical space.
∗Equal contribution. †Correspondence to: C. Li and J. Zhu.
2 RELATED WORK
Diffusion models were initially proposed by Sohl-Dickstein et al. (2015). Recently, they have been better understood in theory by connecting them to score matching and stochastic differential equations (SDEs) (Ho et al., 2020; Song et al., 2020). Since then, diffusion models have shown strong empirical performance in many applications (Dhariwal & Nichol, 2021; Ramesh et al., 2022; Chen et al., 2020; Kong et al., 2020). Variants have also been proposed to improve or accelerate diffusion models (Nichol & Dhariwal, 2021; Vahdat et al., 2021; Dockhorn et al., 2021; Bao et al., 2022b;a; Salimans & Ho, 2022; Lu et al., 2022).
Guidance is a technique to control the generation process of diffusion models. Initially, Song et al. (2020) and Dhariwal & Nichol (2021) used classifier guidance to generate samples belonging to a class. The guidance was then extended to CLIP (Radford et al., 2021) for text-to-image generation, and to semantic-aware energy (Zhao et al., 2022) for image-to-image translation. Prior guidance methods focus on image data and are nontrivial to apply to molecules, since they do not consider the geometric symmetry. In contrast, our work proposes a general guidance framework for 3D molecules, where an invariant energy function is employed to leverage the geometric symmetry of molecules.
Molecule generation. Several works attempt to model molecules as 3D objects via deep generative models (Nesterov et al., 2020; Gebauer et al., 2019; Satorras et al., 2021a; Hoffmann & Noé, 2019; Hoogeboom et al., 2022). Among them, the most relevant one is the equivariant diffusion model (EDM) (Hoogeboom et al., 2022), which generates molecules in an iterative denoising manner. Benefiting from recent advances in diffusion models, EDM is stable to train and able to generate high-quality molecules. We provide a formal description of EDM in Section 3. Other methods generate simplified representations of molecules, such as 1D SMILES strings (Weininger, 1988) and 2D molecular graphs. These include variational autoencoders (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018; Simonovsky & Komodakis, 2018; Liu et al., 2018), normalizing flows (Madhawa et al., 2019; Zang & Wang, 2020; Luo et al., 2021), generative adversarial networks (Bian et al., 2019; Assouel et al., 2018), and autoregressive models (Popova et al., 2019; Flam-Shepherd et al., 2021). There are also methods for generating torsion angles in molecules. For instance, Torsional Diffusion (Jing et al., 2022) employs the SDE formulation of diffusion models to model the torsion angles of a given 2D molecular graph for the conformation generation task.
Inverse molecular design. Generative models have been applied to inverse molecular design. For example, conditional autoregressive models (Gebauer et al., 2022) and EDM (Hoogeboom et al., 2022) directly generate 3D molecules with desired quantum properties. Gebauer et al. (2019) also finetune pretrained generative models on a biased subset to generate 3D molecules with small HOMO-LUMO gaps. In contrast to these conditional generative models, our work further proposes a guidance method, a flexible way to control the generation process of molecules. Other approaches apply optimization techniques to search for molecules with desired properties, such as reinforcement learning (Zhou et al., 2019; You et al., 2018) and genetic algorithms (Jensen, 2019; Nigam et al., 2019). These optimization methods generally consider the 1D SMILES strings or 2D graphs of molecules, and the 3D information is not provided.
3 BACKGROUND
3D representation of molecules. Suppose a molecule has $M$ atoms and let $x^i \in \mathbb{R}^n$ ($n = 3$ in general) be the coordinate of the $i$-th atom. The collection of coordinates $x = (x^1, \ldots, x^M) \in \mathbb{R}^{Mn}$ determines the conformation of the molecule. In addition to the coordinate, each atom is also associated with an atom feature, e.g., the atom type. We use $h^i \in \mathbb{R}^d$ to represent the atom feature of the $i$-th atom, and use $h = (h^1, \ldots, h^M) \in \mathbb{R}^{Md}$ to represent the collection of atom features in a molecule. We use a tuple $z = (x, h)$ to represent a molecule, which contains both the 3D geometry information and the atom feature information.
Equivariance and invariance. Suppose $R$ is a transformation. A distribution $p(x, h)$ is said to be invariant to $R$ if $p(x, h) = p(Rx, h)$ holds for all $x$ and $h$. Here $Rx = (Rx^1, \ldots, Rx^M)$ is applied to each coordinate. A function $(a^x, a^h) = f(x, h)$ that has two components $a^x, a^h$ in its output is said to be equivariant to $R$ if $f(Rx, h) = (Ra^x, a^h)$ holds for all $x$ and $h$. A function $f(x, h)$ is said to be invariant to $R$ if $f(Rx, h) = f(x, h)$ holds for all $x$ and $h$.
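These definitions can be checked numerically; the sketch below (our own) draws a random orthogonal transform via a QR decomposition and tests the equivariance property of a function $f(x, h)$.

```python
import torch

def check_equivariance(f, x, h, atol=1e-5):
    # Random orthogonal Q (possibly including a reflection); equivariance
    # requires f(Qx, h) == (Q a_x, a_h) for (a_x, a_h) = f(x, h).
    Q, _ = torch.linalg.qr(torch.randn(x.size(-1), x.size(-1)))
    ax, ah = f(x, h)
    ax_r, ah_r = f(x @ Q.T, h)
    return (torch.allclose(ax_r, ax @ Q.T, atol=atol)
            and torch.allclose(ah_r, ah, atol=atol))
```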
Zero CoM subspace. It has been shown that invariance to translational and rotational transformations is an important factor for the success of 3D molecule modeling (Köhler et al., 2020; Xu et al., 2022). However, translational invariance is impossible for a distribution in the full space $\mathbb{R}^{Mn}$ (Satorras et al., 2021a). Nevertheless, we can view two collections of coordinates $x$ and $y$ as equivalent if $x$ can be translated from $y$, since translation does not change the identity of a molecule. Such an equivalence relation partitions the whole space $\mathbb{R}^{Mn}$ into disjoint equivalence classes. Indeed, all elements in the same equivalence class represent the same conformation, and we can use the element with zero center of mass (CoM), i.e., $\frac{1}{M}\sum_{i=1}^{M} x^i = 0$, as the specific representation. These elements collectively form the zero CoM linear subspace $X$ (Xu et al., 2022; Hoogeboom et al., 2022), and the rest of the paper always uses elements in $X$ to represent conformations.
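In code, mapping a conformation to its zero-CoM representative is a one-line projection, as in this sketch:

```python
import torch

def to_zero_com(x):
    # x: (M, n) atom coordinates. Subtracting the center of mass maps every
    # translated copy of a conformation to the same representative in X.
    return x - x.mean(dim=0, keepdim=True)
```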
Equivariant graph neural network. Satorras et al. (2021b) propose equivariant graph neural networks (EGNNs), which incorporate the equivariance inductive bias into neural networks. Specifically, $(a^x, a^h) = \text{EGNN}(x, h)$ is a composition of $L$ equivariant convolutional layers. The $l$-th layer takes the tuple $(x_l, h_l)$ as input and outputs an updated version $(x_{l+1}, h_{l+1})$, as follows:
$$m^{ij} = \Phi_m(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_m), \quad w^{ij} = \Phi_w(m^{ij}; \theta_w), \quad h_{l+1}^i = \Phi_h\Big(h_l^i, \sum_{j\neq i} w^{ij} m^{ij}; \theta_h\Big),$$
$$x_{l+1}^i = x_l^i + \sum_{j\neq i} \frac{x_l^i - x_l^j}{\|x_l^i - x_l^j\|_2 + 1}\,\Phi_x(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_x),$$
where $\Phi_m, \Phi_w, \Phi_h, \Phi_x$ are parameterized by fully connected neural networks with parameters $\theta_m, \theta_w, \theta_h, \theta_x$ respectively, and $e^{ij}$ are optional feature attributes. We can verify that these layers are equivariant to orthogonal transformations, which include rotational transformations as special cases. As their composition, the EGNN is also equivariant to orthogonal transformations. Furthermore, let $\text{EGNN}^h(x, h) = a^h$, i.e., the second component in the output of the EGNN. Then $\text{EGNN}^h(x, h)$ is invariant to orthogonal transformations.
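To make these update rules concrete, below is a minimal PyTorch sketch of a single EGNN layer on a fully connected molecular graph. The hidden sizes, the activation choices and the omission of the optional attributes $e^{ij}$ are our own illustrative assumptions, not the implementation used in the paper.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One equivariant convolutional layer (a sketch of the update rules above)."""
    def __init__(self, h_dim, m_dim=16):
        super().__init__()
        # Phi_m takes (h_i, h_j, squared distance); edge attributes e_ij are omitted.
        self.phi_m = nn.Sequential(nn.Linear(2 * h_dim + 1, m_dim), nn.SiLU())
        self.phi_w = nn.Sequential(nn.Linear(m_dim, 1), nn.Sigmoid())
        self.phi_h = nn.Sequential(nn.Linear(h_dim + m_dim, h_dim), nn.SiLU())
        self.phi_x = nn.Linear(m_dim, 1, bias=False)

    def forward(self, x, h):
        # x: (M, n) coordinates, h: (M, d) atom features, fully connected graph.
        M = x.shape[0]
        diff = x[:, None, :] - x[None, :, :]              # (M, M, n): x_i - x_j
        d2 = (diff ** 2).sum(-1, keepdim=True)            # squared pairwise distances
        hi = h[:, None, :].expand(M, M, -1)
        hj = h[None, :, :].expand(M, M, -1)
        m = self.phi_m(torch.cat([hi, hj, d2], dim=-1))   # messages m_ij
        w = self.phi_w(m)                                  # soft edge weights w_ij
        mask = 1.0 - torch.eye(M).unsqueeze(-1)            # exclude j == i terms
        h_new = self.phi_h(torch.cat([h, (w * m * mask).sum(1)], dim=-1))
        coef = self.phi_x(m) * mask / (d2.sqrt() + 1.0)    # (x_i - x_j)/(||.|| + 1) scaling
        x_new = x + (coef * diff).sum(1)
        return x_new, h_new
```

Rotating the input coordinates rotates the output coordinates by the same matrix while leaving the updated features unchanged, which is the equivariance property used throughout the paper.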
Equivariant diffusion models (EDM) (Hoogeboom et al., 2022) are a variant of diffusion models for molecular data. EDMs gradually inject noise into the molecule $z = (x, h)$ via a forward process
$$q(z_{1:N}|z_0) = \prod_{n=1}^N q(z_n|z_{n-1}), \quad q(z_n|z_{n-1}) = \mathcal{N}_X(x_n|\sqrt{\alpha_n}\,x_{n-1}, \beta_n)\,\mathcal{N}(h_n|\sqrt{\alpha_n}\,h_{n-1}, \beta_n), \qquad (1)$$
where $\alpha_n$ and $\beta_n$ represent the noise schedule and satisfy $\alpha_n + \beta_n = 1$, and $\mathcal{N}_X$ represents the Gaussian distribution in the zero CoM subspace $X$ (see its formal definition in Appendix A.2). Let $\bar{\alpha}_n = \alpha_1\alpha_2\cdots\alpha_n$, $\bar{\beta}_n = 1 - \bar{\alpha}_n$ and $\tilde{\beta}_n = \beta_n\bar{\beta}_{n-1}/\bar{\beta}_n$. To generate samples, the forward process is reversed using a Markov chain:
$$p(z_{0:N}) = p(z_N)\prod_{n=1}^N p(z_{n-1}|z_n), \quad p(z_{n-1}|z_n) = \mathcal{N}_X(x_{n-1}|\mu_n^x(z_n), \tilde{\beta}_n)\,\mathcal{N}(h_{n-1}|\mu_n^h(z_n), \tilde{\beta}_n). \qquad (2)$$
Here $p(z_N) = \mathcal{N}_X(x_N|0, 1)\,\mathcal{N}(h_N|0, 1)$. The mean $\mu_n(z_n) = (\mu_n^x(z_n), \mu_n^h(z_n))$ is parameterized by a noise prediction network $\epsilon_\theta(z_n, n)$ and is trained using an MSE loss, as follows:
$$\mu_n(z_n) = \frac{1}{\sqrt{\alpha_n}}\Big(z_n - \frac{\beta_n}{\sqrt{\bar{\beta}_n}}\,\epsilon_\theta(z_n, n)\Big), \quad \min_\theta \mathbb{E}_n\mathbb{E}_{q(z_0, z_n)} w(n)\,\|\epsilon_\theta(z_n, n) - \epsilon_n\|^2,$$
where $\epsilon_n = \frac{z_n - \sqrt{\bar{\alpha}_n}\,z_0}{\sqrt{\bar{\beta}_n}}$ is the standard Gaussian noise injected into $z_0$ and $w(n)$ is a weighting term. Hoogeboom et al. (2022) show that the distribution of generated samples $p(z_0)$ is invariant to rotational transformations if the noise prediction network is equivariant to orthogonal transformations. In Section 4.2, we extend this proposition to the SDE formulation of molecular diffusion modeling.
Hoogeboom et al. (2022) also present a conditional version of EDM for inverse molecular design by adding an extra input of the condition c to the noise prediction network as ϵθ(zn, c, n).
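To illustrate the forward process in Eq. (1), the sketch below draws $z_n$ directly from $q(z_n|z_0)$ using the closed form $z_n = \sqrt{\bar{\alpha}_n}\,z_0 + \sqrt{\bar{\beta}_n}\,\epsilon$ implied by Eq. (1), with the coordinate noise projected onto the zero CoM subspace. The linear schedule and the tensor shapes are illustrative assumptions, not the settings used in the paper.

```python
import torch

def remove_com(x):
    """Project coordinates onto the zero-CoM subspace X (subtract the column mean)."""
    return x - x.mean(dim=0, keepdim=True)

def sample_forward(x0, h0, n, alphas):
    """Draw (x_n, h_n) ~ q(z_n | z_0) for the discrete forward process in Eq. (1)."""
    alpha_bar = alphas[:n].prod()
    beta_bar = 1.0 - alpha_bar
    eps_x = remove_com(torch.randn_like(x0))   # noise for x lives in X
    eps_h = torch.randn_like(h0)               # noise for h lives in R^{Md}
    xn = alpha_bar.sqrt() * x0 + beta_bar.sqrt() * eps_x
    hn = alpha_bar.sqrt() * h0 + beta_bar.sqrt() * eps_h
    return xn, hn

# Toy usage with an assumed linear schedule (illustrative only).
M, n_dim, d = 5, 3, 8
x0 = remove_com(torch.randn(M, n_dim))
h0 = torch.randn(M, d)
alphas = 1.0 - torch.linspace(1e-4, 2e-2, 1000)
xn, hn = sample_forward(x0, h0, n=500, alphas=alphas)
assert xn.mean(dim=0).abs().max() < 1e-5       # x_n stays in the zero CoM subspace
```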
4 EQUIVARIANT ENERGY-GUIDED SDE
In this part, we introduce our equivariant energy-guided SDE (EEGSDE), as illustrated in Figure 1. EEGSDE is based on the SDE formulation of molecular diffusion modeling, which is described in Section 4.1 and Section 4.2. Then, we formally present our EEGSDE that incorporates an energy function to guide the molecular generation in Section 4.3. We provide derivations in Appendix A.
4.1 SDE IN THE PRODUCT SPACE
Recall that a molecule is represented as a tuple $z = (x, h)$, where $x = (x^1, \ldots, x^M) \in X$ represents the conformation and $h = (h^1, \ldots, h^M) \in \mathbb{R}^{Md}$ represents the atom features. Here $X = \{x \in \mathbb{R}^{Mn} : \frac{1}{M}\sum_{i=1}^M x^i = 0\}$ is the zero CoM subspace mentioned in Section 3, and $d$ is the feature dimension. We first introduce a continuous-time diffusion process $\{z_t\}_{0\le t\le T}$ in the product space $X \times \mathbb{R}^{Md}$, which gradually adds noise to $x$ and $h$. This can be described by the forward SDE:
dz = f(t)zdt+ g(t)d(wx,wh), z0 ∼ q(z0), (3)
where $f(t)$ and $g(t)$ are two scalar functions, $w^x$ and $w^h$ are independent standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and the SDE starts from the data distribution $q(z_0)$. Note that $w^x$ can be constructed by subtracting the CoM of a standard Wiener process $w$ in $\mathbb{R}^{Mn}$, i.e., $w^x = w - \bar{w}$, where $\bar{w} = \frac{1}{M}\sum_{i=1}^M w^i$ is the CoM of $w = (w^1, \ldots, w^M)$. It can be shown that the SDE has a linear Gaussian transition kernel $q(z_t|z_s) = q(x_t|x_s)q(h_t|h_s)$ from $z_s$ to $z_t$, where $0 \le s < t \le T$. Specifically, there exist two scalars $\alpha_{t|s}$ and $\beta_{t|s}$ such that $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\mathcal{N}_X$ denotes the Gaussian distribution in the subspace $X$; see Appendix A.2 for its formal definition. Indeed, the forward process of EDM in Eq. (1) is a discretization of the forward SDE in Eq. (3).
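As a numerical sanity check on these coefficients, one can evaluate $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,d\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,d\tau$ (derived in Proposition 4 of Appendix A.2) by quadrature. The variance-preserving choice $f(t) = -\tfrac{1}{2}\beta(t)$, $g(t) = \sqrt{\beta(t)}$ with a linear $\beta(t)$ is our illustrative assumption; for this choice $\alpha_{t|s} + \beta_{t|s} = 1$.

```python
import numpy as np

def beta_fn(t, b0=0.1, b1=20.0):
    """Assumed linear noise scale (a common VP-SDE choice, illustrative only)."""
    return b0 + (b1 - b0) * t

def transition_coeffs(s, t, n_grid=10_000):
    """Numerically evaluate alpha_{t|s} = exp(2 int f) and beta_{t|s} = alpha int g^2/alpha."""
    tau = np.linspace(s, t, n_grid)
    f = -0.5 * beta_fn(tau)                     # f(t) = -beta(t)/2, g(t)^2 = beta(t)
    dtau = (t - s) / (n_grid - 1)
    cum_f = np.concatenate([[0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dtau)])
    alpha_tau_s = np.exp(2.0 * cum_f)           # alpha_{tau|s} on the grid
    alpha_ts = alpha_tau_s[-1]
    beta_ts = alpha_ts * np.trapz(beta_fn(tau) / alpha_tau_s, tau)
    return alpha_ts, beta_ts

a, b = transition_coeffs(0.0, 0.5)
print(a, b, a + b)  # for this VP choice, a + b should be numerically close to 1
```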
To generate molecules, we reverse Eq. (3) from $T$ to $0$. Such a time reversal forms another SDE, which can be represented in both a score function form and a noise prediction form:
$$\mathrm{d}z = \Big[f(t)z - g(t)^2\underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T). \qquad (4)$$
Here $q_t(z)$ is the marginal distribution of $z_t$, $\nabla_x \log q_t(z)$ is the gradient of $\log q_t(z)$ w.r.t. $x$,¹ $\overline{\nabla_x \log q_t(z)} = \frac{1}{M}\sum_{i=1}^M \nabla_{x^i} \log q_t(z)$ is the CoM of $\nabla_x \log q_t(z)$, $\mathrm{d}t$ is the infinitesimal negative timestep, $\tilde{w}^x$ and $\tilde{w}^h$ are independent reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected into $z_0$. Compared to the original SDE introduced by Song et al. (2020), our reverse SDE in Eq. (4) additionally subtracts the CoM of $\nabla_x \log q_t(z)$. This ensures $x_t$ always stays in the zero CoM subspace as time flows back.
To sample from the reverse SDE in Eq. (4), we use a noise prediction network $\epsilon_\theta(z_t, t)$ to estimate $\mathbb{E}_{q(z_0|z_t)}\epsilon_t$, by minimizing the MSE loss $\min_\theta \mathbb{E}_t\mathbb{E}_{q(z_0, z_t)} w(t)\,\|\epsilon_\theta(z_t, t) - \epsilon_t\|^2$, where $t$ is uniformly sampled from $[0, T]$ and $w(t)$ controls the weight of the loss term at time $t$. Note that the noise $\epsilon_t$ lives in the product space $X \times \mathbb{R}^{Md}$, so we subtract the CoM of the predicted noise of $x_t$ to ensure $\epsilon_\theta(z_t, t)$ is also in the product space.
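The projection that keeps the predicted noise in the product space is a one-line operation. The sketch below wraps a hypothetical backbone `net` (an assumed interface, not the released code) so that its prediction lies in $X \times \mathbb{R}^{Md}$:

```python
import torch

def remove_com(v):
    """Subtract the CoM so a coordinate-shaped tensor lies in the subspace X."""
    return v - v.mean(dim=0, keepdim=True)

def eps_in_product_space(net, x_t, h_t, t):
    """Wrap a coordinate/feature network so its noise prediction lies in X x R^{Md}.
    `net` is a hypothetical stand-in for the trained noise prediction backbone."""
    eps_x, eps_h = net(x_t, h_t, t)
    return remove_com(eps_x), eps_h
```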
Substituting ϵθ(zt, t) into Eq. (4), we get an approximate reverse-time SDE parameterized by θ:
$$\mathrm{d}z = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t)\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T), \qquad (5)$$
where pT (zT ) = NX(xT |0, 1)N (hT |0, 1) is a Gaussian prior in the product space that approximates qT (zT ). We define pθ(z0) as the marginal distribution of Eq. (5) at time t = 0, which is the distribution of our generated samples. Similarly to the forward process, the reverse process of EDM in Eq. (2) is a discretization of the reverse SDE in Eq. (5).
4.2 EQUIVARIANT SDE
To leverage the geometric symmetry in 3D molecular conformation, $p_\theta(z_0)$ should be invariant to translational and rotational transformations. As mentioned in Section 3, the translational invariance of $p_\theta(z_0)$ is already satisfied by considering the zero CoM subspace. The rotational invariance can be satisfied if the noise prediction network is equivariant to orthogonal transformations, as summarized in the following theorem:

Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n\times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
As mentioned in Section 3, the EGNN satisfies the equivariance constraint, and we parameterize ϵθ(zt, t) using an EGNN following Hoogeboom et al. (2022). See details in Appendix D.
4.3 EQUIVARIANT ENERGY-GUIDED SDE
Now we describe equivariant energy-guided SDE (EEGSDE), which guides the generated molecules of Eq. (5) towards desired properties c by leveraging a time-dependent energy function E(z, c, t):
$$\mathrm{d}z = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t) + \underbrace{(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))}_{\text{energy gradient taken in the product space}}\Big)\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T), \qquad (6)$$
¹ While $q_t(z)$ is defined in $X \times \mathbb{R}^{Md}$, its domain can be extended to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$ and the gradient is valid. See Remark 1 in Appendix A.2 for details.
which defines a distribution $p_\theta(z_0|c)$ conditioned on the property $c$. Here the CoM $\overline{\nabla_x E(z, c, t)}$ of the gradient is subtracted to keep the SDE in the product space, which ensures the translational invariance of $p_\theta(z_0|c)$. Besides, the rotational invariance is satisfied by using an energy function invariant to orthogonal transformations, as summarized in the following theorem:
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Note that we can also use a conditional model $\epsilon_\theta(z, c, t)$ in Eq. (6); see Appendix C for details. To sample from $p_\theta(z_0|c)$, various solvers can be used for Eq. (6), such as the Euler-Maruyama method (Song et al., 2020) and the Analytic-DPM sampler (Bao et al., 2022b;a). We present the Euler-Maruyama method as an example in Algorithm 1 in Appendix B.
4.4 HOW TO DESIGN THE ENERGY FUNCTION
Our EEGSDE is a general framework, which can be applied to various applications by specifying different energy functions $E(z, c, t)$. For example, we can design the energy function according to the consistency between the molecule $z$ and the property $c$, where a low energy corresponds to good consistency. As the generation process in Eq. (6) proceeds, the gradient of the energy function encourages generated molecules to have low energy, and consequently good consistency. Thus, we can expect the generated molecule $z$ to align well with the property $c$. In the rest of the paper, we specify the choice of energy functions, and show that these energies improve controllable molecule generation targeted to quantum properties, molecular structures, and even a combination of them.
Remark. The term "energy" in this paper refers to a general notion in statistical machine learning: a scalar function that captures dependencies between input variables (LeCun et al., 2006). Thus, the "energy" in this paper can be set to an MSE loss when we want to capture how well the molecule aligns with the property (as done in Section 5). Also, the "energy" in this paper does not exclude potential energy or free energy in chemistry; these might be applicable when we want to generate molecules with small potential energy or free energy.
5 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
Let $c \in \mathbb{R}$ be a certain quantum property. To generate molecules with the desired property, we set the energy function to the squared error between the predicted and the desired property, $E(z_t, c, t) = s\,|g(z_t, t) - c|^2$, where $g(z_t, t)$ is a time-dependent property prediction model and $s$ is a scaling factor controlling the strength of the guidance. Specifically, $g(z_t, t)$ can be parameterized by equivariant models such as EGNN (Satorras et al., 2021b), SE(3)-Transformer (Fuchs et al., 2020) and DimeNet (Klicpera et al., 2020) to ensure the invariance of $E(z_t, c, t)$, as long as they perform well in the task of property prediction. In this paper we consider EGNN. We provide details on the parameterization and the training objective in Appendix E. We can also generate molecules targeted to multiple quantum properties by combining energy functions linearly (see details in Appendix F.1).
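In practice the guidance gradients $\nabla_x E$ and $\nabla_h E$ can be obtained by automatic differentiation through the property predictor. Below is a sketch where `g_model(x, h, t)` is a stand-in for the time-dependent EGNN-based predictor (an assumed interface, not the released code):

```python
import torch

def property_energy_grad(g_model, x, h, t, c, s=1.0):
    """Gradients of E(z, c, t) = s * |g(z, t) - c|^2 w.r.t. coordinates and features."""
    x = x.detach().requires_grad_(True)
    h = h.detach().requires_grad_(True)
    energy = s * (g_model(x, h, t) - c).pow(2).sum()
    grad_x, grad_h = torch.autograd.grad(energy, (x, h))
    grad_x = grad_x - grad_x.mean(dim=0, keepdim=True)  # subtract the CoM, as in Eq. (6)
    return grad_x, grad_h
```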
5.1 SETUP
We evaluate on QM9 (Ramakrishnan et al., 2014), which contains quantum properties and coordinates of ∼130k molecules with up to nine heavy atoms from (C, N, O, F). Following EDM, we split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples respectively. The training set is further divided into two non-overlapping halves $D_a$ and $D_b$ of equal size. The noise prediction network and the time-dependent property prediction model of the energy function are trained on $D_b$ separately. By default, EEGSDE uses a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6), since we find it generates more accurate molecules than an unconditional one (see Appendix G.1 for an ablation study). See more details in Appendix F.3.
Evaluation metric: Following EDM, we use the mean absolute error (MAE) to evaluate how well generated molecules align with the desired property. Specifically, we train another property prediction model $\phi_p$ (Satorras et al., 2021b) on $D_a$, and the MAE is calculated as $\frac{1}{K}\sum_{i=1}^K |\phi_p(z_i) - c_i|$, where $z_i$ is a generated molecule and $c_i$ is its desired property. We generate K = 10,000 samples for evaluation with $\phi_p$. For fairness, the property prediction model $\phi_p$ is different from $g(z_t, t)$ used in the energy function, and they are trained on the two non-overlapping training subsets $D_a$ and $D_b$ respectively. This ensures no information leak occurs when evaluating our EEGSDE. To further verify the effectiveness of EEGSDE, we also calculate the MAE without relying on a neural network. Specifically, we use the Gaussian software (which calculates properties according to theories of quantum chemistry without using neural networks) to calculate the properties of 100 generated molecules. For completeness, we also report the novelty, the atom stability and the molecule stability following Hoogeboom et al. (2022) in Appendix G.2, although they are not our main focus.
Baseline: The most direct baseline is conditional EDM, which only adopts a conditional noise prediction network. We also compare two additional baselines “U-bound” and “#Atoms” from Hoogeboom et al. (2022) (see Appendix F.2 for details).
5.2 RESULTS
Following Hoogeboom et al. (2022), we consider six quantum properties in QM9: polarizability α, highest occupied molecular orbital energy εHOMO, lowest unoccupied molecular orbital energy εLUMO, HOMO-LUMO gap ∆ε, dipole moment µ and heat capacity Cv. First, we generate molecules targeted to one of these six properties. As shown in Table 1, with the energy guidance, our EEGSDE has a significantly better MAE than the conditional EDM on all properties. Remarkably, with a proper scaling factor s, the MAE of EEGSDE is reduced by more than 25% compared to conditional EDM on the properties ∆ε and εLUMO, and by more than 30% on µ. Moreover, as shown in Table 2, our EEGSDE still achieves a better MAE under evaluation by the Gaussian software, which further verifies its effectiveness. We further generate molecules targeted to multiple quantum properties by combining energy functions linearly. As shown in Table 3, our EEGSDE again has a significantly better MAE than the conditional EDM.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the desired properties than molecules generated by the conditional EDM baseline (in both the single-property and multiple-property cases). As a consequence, EEGSDE is able to explore the chemical space in a guided way to generate promising molecules for downstream applications such as virtual screening, which may benefit drug and material discovery.
6 GENERATING MOLECULES WITH TARGET STRUCTURES
Following Gebauer et al. (2022), we use the molecular fingerprint to encode the structure information of a molecule. The molecular fingerprint $c = (c_1, \ldots, c_L)$ is a series of bits that capture the presence or absence of substructures in the molecule. Specifically, a substructure is mapped to a specific position $l$ in the bitmap, and the corresponding bit $c_l$ is 1 if the substructure exists in the molecule and 0 otherwise. To generate molecules with a specific structure (encoded by the fingerprint $c$), we set the energy function to the squared error $E(z_t, c, t) = s\,\|m(z_t, t) - c\|^2$ between a time-dependent multi-label classifier $m(z_t, t)$ and $c$. Here $s$ is the scaling factor, and $m(z_t, t)$ is trained with the binary cross entropy loss to predict the fingerprint, as detailed in Appendix E.2. Note that the choice of the energy function is flexible and can differ from the training loss of $m(z_t, t)$. In initial experiments, we also tried the binary cross entropy loss for the energy function, but we found it made the generation process unstable. The multi-label classifier $m(z_t, t)$ is parameterized in a similar way to the property prediction model $g(z_t, t)$ in Section 5, and we present details in Appendix E.1.
6.1 SETUP
We evaluate on QM9 and GEOM-Drug. We train our method (including the noise prediction network and the multi-label classifier) on the whole training set. By default, we use a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6) for better performance. See more details in Appendix F.4.
Evaluation metric: To measure how the structure of a generated molecule aligns with the target one, we use the Tanimoto similarity (Gebauer et al., 2022), which captures similarity between structures by comparing their fingerprints.
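The Tanimoto similarity itself is simple to compute from two fingerprint bit vectors; the set-based definition is given in Appendix F.4, and a small self-contained sketch is:

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity |A ∩ B| / |A ∪ B| between two binary fingerprints."""
    on_a = {i for i, bit in enumerate(fp_a) if bit}
    on_b = {i for i, bit in enumerate(fp_b) if bit}
    union = on_a | on_b
    return len(on_a & on_b) / len(union) if union else 1.0

print(tanimoto([1, 0, 1, 1], [1, 1, 0, 1]))  # 2 shared bits / 4 set bits = 0.5
```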
Baseline: The most direct baseline is conditional EDM (Hoogeboom et al., 2022), which only adopts a conditional noise prediction network without the guidance of an energy model. We also consider cG-SchNet (Gebauer et al., 2022), which generates molecules in an autoregressive manner.
6.2 RESULTS
As shown in Table 4, EEGSDE significantly improves the similarity between target structures and generated structures compared to conditional EDM and cG-SchNet on QM9. Also note that, within a proper range, a larger scaling factor results in a better similarity, and EEGSDE with s=1 improves the similarity by more than 10% compared to conditional EDM. In Figure 2, we plot generated molecules of conditional EDM and EEGSDE (s=1) targeted to specific structures, where our EEGSDE aligns better with them. We further visualize the effect of the scaling factor in Appendix G.3, where the generated structures align better as the scaling factor grows. These results demonstrate that our EEGSDE captures the structure information in molecules well.
We also perform experiments on the more challenging GEOM-Drug (Axelrod & Gomez-Bombarelli, 2022) dataset, and we train the conditional EDM baseline following the default setting of Hoogeboom et al. (2022). As shown in Table 4, we find the conditional EDM baseline has a similarity of 0.165, which is much lower than the value on QM9. We hypothesize this is because molecules in GEOM-Drug have many more atoms than those in QM9 and a more complex structure, so the default setting in Hoogeboom et al. (2022) is suboptimal. For example, the conditional EDM on GEOM-Drug has fewer parameters than the conditional EDM on QM9 (15M vs. 26M), which is insufficient to capture the structure information. Nevertheless, our EEGSDE still improves the similarity by ∼17%. We provide generated molecules on GEOM-Drug in Appendix G.5.
Finally, we demonstrate that our EEGSDE is a flexible framework to generate molecules targeted to multiple properties, which is often the practical case. We additionally target the quantum property α (polarizability) on QM9 by linearly combining the energy function for structures in this section and the energy function for quantum properties in Section 5. Here we choose α = 100 Bohr³, a relatively large value, which we expect to encourage less isometrically shaped structures. As shown in Figure 3, the generated molecule aligns better with the target structure as the scaling factor s1 grows; meanwhile, a ring substructure in the generated molecule vanishes as the scaling factor for polarizability s2 grows, leading to a less isometrically shaped structure, as expected.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the target structure than molecules generated by the conditional EDM baseline. Besides, EEGSDE can generate molecules targeted to both specific structures and desired quantum properties by combining energy functions linearly. As a result, EEGSDE may benefit practical cases in molecular design when multiple properties should be considered at the same time.
7 CONCLUSION
This work presents equivariant energy-guided SDE (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. EEGSDE significantly improves the conditional EDM baseline in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
ACKNOWLEDGMENTS
We thank Han Guo and Zhen Jia for their help with their expertise in chemistry. This work was supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); Beijing Outstanding Young Scientist Program No. BJJWZYJH012019100020098; a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University; the Fundamental Research Funds for the Central Universities; and the Research Funds of Renmin University of China (22XNKJ13). J.Z. was also supported by the XPlorer Prize. C. Li was also sponsored by Beijing Nova Program.
ETHICS STATEMENT
Inverse molecular design is critical in fields like material science and drug discovery. Our EEGSDE is a flexible framework for inverse molecular design and thus might benefit these fields. Currently the negative consequences are not obvious.
REPRODUCIBILITY STATEMENT
Our code is included in the supplementary material. The implementation of our experiments is described in Section 5 and Section 6. Further details, such as the training hyperparameters and the model backbones, are provided in Appendix F. We provide complete proofs and derivations of all theoretical results in Appendix A.
A DERIVATIONS
A.1 ZERO COM SUBSPACE
The zero CoM subspace $X = \{x \in \mathbb{R}^{Mn} : \sum_{i=1}^M x^i = 0\}$ is an $(M-1)n$-dimensional subspace of $\mathbb{R}^{Mn}$. Therefore, there exists an isometric isomorphism $\phi$ from $\mathbb{R}^{(M-1)n}$ to $X$, i.e., $\phi$ is a linear bijection from $\mathbb{R}^{(M-1)n}$ to $X$, and $\|\phi(\hat{x})\|_2 = \|\hat{x}\|_2$ for all $\hat{x} \in \mathbb{R}^{(M-1)n}$. We use $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ to represent the matrix corresponding to $\phi$, so we have $\phi(\hat{x}) = A_\phi \hat{x}$. An important property of the isometric isomorphism is that $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$. We give the proof as follows.

Proposition 1. Suppose $\phi$ is an isometric isomorphism from $\mathbb{R}^{(M-1)n}$ to $X$, and let $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ be the matrix corresponding to $\phi$. Then we have $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$, where $\bar{x} = \frac{1}{M}\sum_{i=1}^M x^i$.
Proof. We consider the subspace $X^\perp = \{x \in \mathbb{R}^{Mn} : x \perp X\}$, i.e., the orthogonal complement of $X$. We can verify that $X^\perp = \{x \in \mathbb{R}^{Mn} : x^1 = x^2 = \cdots = x^M\}$, and $X^\perp$ is $n$-dimensional. Thus, there exists an isometric isomorphism $\psi$ from $\mathbb{R}^n$ to $X^\perp$. Let $A_\psi \in \mathbb{R}^{Mn \times n}$ represent the matrix corresponding to $\psi$. Then we define $\lambda(\hat{x}, \hat{y}) = \phi(\hat{x}) + \psi(\hat{y})$, where $\hat{x} \in \mathbb{R}^{(M-1)n}$ and $\hat{y} \in \mathbb{R}^n$. The image of $\lambda$ is $\{x + y : x \in X, y \in X^\perp\} = \mathbb{R}^{Mn}$. Therefore, $\lambda$ is a linear bijection from $\mathbb{R}^{Mn}$ to $\mathbb{R}^{Mn}$, and the matrix corresponding to $\lambda$ is $A_\lambda = [A_\phi, A_\psi]$. Furthermore, $\lambda$ is an isometric isomorphism, since $\|\lambda(\hat{x}, \hat{y})\|^2 = \|\phi(\hat{x})\|^2 + \|\psi(\hat{y})\|^2 + 2\langle \phi(\hat{x}), \psi(\hat{y})\rangle = \|\hat{x}\|^2 + \|\hat{y}\|^2 = \|(\hat{x}^\top, \hat{y}^\top)^\top\|^2$. This means $A_\lambda$ is an orthogonal matrix. Therefore, $A_\lambda A_\lambda^\top = A_\phi A_\phi^\top + A_\psi A_\psi^\top = I$, and hence $A_\phi A_\phi^\top x + A_\psi A_\psi^\top x = x$. Since $A_\phi A_\phi^\top x \in X$ and $A_\psi A_\psi^\top x \in X^\perp$, we conclude that $A_\phi A_\phi^\top x$ is the orthogonal projection of $x$ onto $X$, which is exactly $x - \bar{x}$.
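Proposition 1 can also be verified numerically: constructing an orthonormal basis of $X$ for a toy molecule size gives $A_\phi$, and $A_\phi A_\phi^\top$ acts as the projection $x \mapsto x - \bar{x}$. The basis construction below (projected random vectors followed by a QR decomposition) is our own illustrative choice:

```python
import numpy as np

M, n = 4, 3
rng = np.random.default_rng(0)

def proj(v):
    """Orthogonal projection of v in R^{Mn} onto X: subtract the per-molecule CoM."""
    return (v.reshape(M, n) - v.reshape(M, n).mean(axis=0)).reshape(-1)

# Columns of A_phi: an orthonormal basis of the (M-1)n-dimensional subspace X.
raw = rng.standard_normal((M * n, (M - 1) * n))
A_phi, _ = np.linalg.qr(np.stack([proj(col) for col in raw.T], axis=1))

x = rng.standard_normal(M * n)
assert np.allclose(A_phi @ A_phi.T @ x, proj(x), atol=1e-10)  # A_phi A_phi^T x = x - x_bar
```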
Since $\mathbb{R}^{(M-1)n}$ and $X$ are two intrinsically equivalent spaces, an equivalence of distributions in these two spaces can also be established, as shown in the following propositions.

Proposition 2. Suppose $x$ is a random vector distributed in $X$, and $\hat{x} = \phi^{-1}(x)$ is its equivalent representation in $\mathbb{R}^{(M-1)n}$. If $x \sim q(x)$, then $\hat{x} \sim \hat{q}(\hat{x})$, where $\hat{q}(\hat{x}) = q(\phi(\hat{x}))$.

Proposition 3. Suppose $x$ and $y$ are two random vectors distributed in $X$, and $\hat{x} = \phi^{-1}(x)$, $\hat{y} = \phi^{-1}(y)$ are their equivalent representations in $\mathbb{R}^{(M-1)n}$. If $x|y \sim q(x|y)$, then $\hat{x}|\hat{y} \sim \hat{q}(\hat{x}|\hat{y})$, where $\hat{q}(\hat{x}|\hat{y}) = q(\phi(\hat{x})|\phi(\hat{y}))$.
A.2 SDE IN THE PRODUCT SPACE
Definition 1 (Gaussian distributions in the zero CoM subspace). Suppose $\mu \in X$. Let $\mathcal{N}_X(x|\mu, \sigma^2) := (2\pi\sigma^2)^{-(M-1)n/2}\exp\big(-\frac{1}{2\sigma^2}\|x - \mu\|_2^2\big)$, which is the isotropic Gaussian distribution with mean $\mu$ and variance $\sigma^2$ in the zero CoM subspace $X$.
Proposition 4 (Transition kernels of an SDE in the zero CoM subspace). Suppose $\mathrm{d}x = f(t)x\,\mathrm{d}t + g(t)\,\mathrm{d}w^x$, $x_0 \sim q(x_0)$ is an SDE in the zero CoM subspace. Then the transition kernel from $x_s$ to $x_t$ ($0 \le s < t \le T$) can be expressed as $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,\mathrm{d}\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,\mathrm{d}\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$.
Proof. First, we map the process $\{x_t\}_{t=0}^T$ in the zero CoM subspace to the equivalent space $\mathbb{R}^{(M-1)n}$ through the isometric isomorphism $\phi$ introduced in Appendix A.1. This produces a new process $\{\hat{x}_t\}_{t=0}^T$, where $\hat{x}_t = \phi^{-1}(x_t)$. By applying $\phi^{-1}$ to the SDE in the zero CoM subspace, we know $\hat{x}_t = \phi^{-1}(x_t)$ satisfies the following SDE in $\mathbb{R}^{(M-1)n}$:
$$\mathrm{d}\hat{x} = f(t)\hat{x}\,\mathrm{d}t + g(t)\,\mathrm{d}\hat{w}, \quad \hat{x}_0 \sim \hat{q}(\hat{x}_0), \qquad (7)$$
where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n}$ and $\hat{q}(\hat{x}_0) = q(\phi(\hat{x}_0))$. According to Song et al. (2020), the transition of Eq. (7) from $\hat{x}_s$ to $\hat{x}_t$ ($s < t$) can be expressed as $\hat{q}(\hat{x}_t|\hat{x}_s) = \mathcal{N}(\hat{x}_t|\sqrt{\alpha_{t|s}}\,\hat{x}_s, \beta_{t|s}I)$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,\mathrm{d}\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,\mathrm{d}\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$. According to Proposition 3, the transition kernel $q(x_t|x_s)$ from $x_s$ to $x_t$ satisfies
$$q(x_t|x_s) = \hat{q}(\phi^{-1}(x_t)|\phi^{-1}(x_s)) = (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\|\phi^{-1}(x_t) - \sqrt{\alpha_{t|s}}\,\phi^{-1}(x_s)\|_2^2\Big)$$
$$= (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\|\phi^{-1}(x_t - \sqrt{\alpha_{t|s}}\,x_s)\|_2^2\Big) \quad \text{// linearity of } \phi^{-1}$$
$$= (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\|x_t - \sqrt{\alpha_{t|s}}\,x_s\|_2^2\Big) \quad \text{// norm preservation of } \phi^{-1}$$
$$= \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s}).$$
Proposition 5 (Transition kernels of the SDE in the product space). Suppose $\mathrm{d}z = f(t)z\,\mathrm{d}t + g(t)\,\mathrm{d}(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then the transition kernel from $z_s$ to $z_t$ ($0 \le s < t \le T$) can be expressed as $q(z_t|z_s) = q(x_t|x_s)q(h_t|h_s)$, where $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\alpha_{t|s}$ and $\beta_{t|s}$ are defined as in Proposition 4.

Proof. Since $w^x$ and $w^h$ are independent of each other, the transition kernel from $z_s$ to $z_t$ factorizes as $q(z_t|z_s) = q(x_t|x_s)q(h_t|h_s)$, where $q(x_t|x_s)$ is the transition kernel of $\mathrm{d}x = f(t)x\,\mathrm{d}t + g(t)\,\mathrm{d}w^x$ and $q(h_t|h_s)$ is the transition kernel of $\mathrm{d}h = f(t)h\,\mathrm{d}t + g(t)\,\mathrm{d}w^h$. According to Proposition 4 and Song et al. (2020), we have $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$.
Remark 1. The marginal distribution of $z_t$ is
$$q_t(z_t) = \int_X \int_{\mathbb{R}^{Md}} q(z_0)\, q(x_t|x_0)\, q(h_t|h_0)\, \mathrm{d}h_0\, \lambda(\mathrm{d}x_0),$$
where $\lambda$ is the Lebesgue measure in the zero CoM subspace $X$. While $q_t(z_t)$ is a distribution in $X \times \mathbb{R}^{Md}$, it has a natural differentiable extension to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$, since $q(x_t|x_0) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|0}}\,x_0, \beta_{t|0})$ has a differentiable extension to $\mathbb{R}^{Mn}$ according to Definition 1. Thus, we can take the gradient of $q_t(z_t)$ w.r.t. $x_t$ in the whole space $\mathbb{R}^{Mn}$.

Proposition 6 (Time reversal of the SDE in the product space). Suppose $\mathrm{d}z = f(t)z\,\mathrm{d}t + g(t)\,\mathrm{d}(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then its time reversal satisfies the following reverse-time SDE, which can be represented in both a score function form and a noise prediction form:
$$\mathrm{d}z = \Big[f(t)z - g(t)^2\underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T),$$
where $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected into $z_0$.
Furthermore, we have $\nabla_x \log q_t(x) - \overline{\nabla_x \log q_t(x)} = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(x_0|x_t)}\epsilon_t$, where $\epsilon_t = \frac{x_t - \sqrt{\alpha_{t|0}}\,x_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in the zero CoM subspace injected into $x_0$.
Proof. Let $\hat{z}_t = (\hat{x}_t, h_t)$, where $\hat{x}_t = \phi^{-1}(x_t)$ as introduced in the proof of Proposition 4. Then $\{\hat{z}_t\}_{t=0}^T$ is a process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ determined by the following SDE:
$$\mathrm{d}\hat{z} = f(t)\hat{z}\,\mathrm{d}t + g(t)\,\mathrm{d}\hat{w}, \quad \hat{z}_0 \sim \hat{q}(\hat{z}_0), \qquad (8)$$
where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ and $\hat{q}(\hat{z}_0) = q(\phi(\hat{x}_0), h_0)$. According to Song et al. (2020), Eq. (8) has a time reversal:
$$\mathrm{d}\hat{z} = [f(t)\hat{z} - g(t)^2\,\nabla_{\hat{z}} \log \hat{q}_t(\hat{z})]\,\mathrm{d}t + g(t)\,\mathrm{d}\tilde{w}, \quad \hat{z}_T \sim \hat{q}_T(\hat{z}_T), \qquad (9)$$
where $\hat{q}_t(\hat{z})$ is the marginal distribution of $\hat{z}_t$, which satisfies $\hat{q}_t(\hat{z}_t) = q_t(\phi(\hat{x}_t), h_t)$ according to Proposition 2, and $\tilde{w}$ is the reverse-time standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$. We then apply the linear transformation $T\hat{z} = (\phi(\hat{x}), h)$ to Eq. (9), which maps $\hat{z}_t$ back to $z_t$. This yields
$$\mathrm{d}z = [f(t)z - g(t)^2\, T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))]\,\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T). \qquad (10)$$
Here $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))$ can be expressed as $(\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})), \nabla_h \log \hat{q}_t(\hat{z}))$. Since $\nabla_{\hat{x}} \log \hat{q}_t(\hat{z}) = A_\phi^\top \nabla_x \log q_t(z)$, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = A_\phi A_\phi^\top \nabla_x \log q_t(z)$, where $A_\phi$ is the matrix corresponding to $\phi$. According to Proposition 1, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = \nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)}$. Besides, $\nabla_h \log \hat{q}_t(\hat{z}) = \nabla_h \log q_t(z)$. Thus, Eq. (10) can be written as
$$\mathrm{d}z = [f(t)z - g(t)^2(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z))]\,\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h),$$
which is the score function form of the reverse-time SDE.
We can also write the score function in Eq. (9) as $\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}) = \mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} \nabla_{\hat{z}} \log \hat{q}(\hat{z}_t|\hat{z}_0) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)}\hat{\epsilon}_t$, where $\hat{\epsilon}_t = \frac{\hat{z}_t - \sqrt{\alpha_{t|0}}\,\hat{z}_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected into $\hat{z}_0$. With this expression, we have $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z})) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} T(\hat{\epsilon}_t) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(z_0|z_t)}\epsilon_t$, where $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected into $z_0$. Thus, Eq. (10) can also be written as
$$\mathrm{d}z = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(z_0|z_t)}\epsilon_t\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h),$$
which is the noise prediction form of the reverse-time SDE.
A.3 EQUIVARIANCE
Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n\times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.

Proof. Suppose $R \in \mathbb{R}^{n\times n}$ is an orthogonal transformation. Let $z_t^\theta = (x_t^\theta, h_t^\theta)$ ($0 \le t \le T$) be the process determined by Eq. (5). Let $y_t^\theta = Rx_t^\theta$ and $u_t^\theta = (y_t^\theta, h_t^\theta)$. We use $p_t^u(u_t)$ and $p_t^z(z_t)$ to denote the distributions of $u_t^\theta$ and $z_t^\theta$ respectively; they satisfy $p_t^u(y_t, h_t) = p_t^z(R^{-1}y_t, h_t)$.

By applying the transformation $Tz = (Rx, h)$ to Eq. (5), we know the new process $\{u_t^\theta\}_{t=0}^T$ satisfies the following SDE:
$$\mathrm{d}u = T\mathrm{d}z = \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(R\epsilon_\theta^x(z, t), \epsilon_\theta^h(z, t))\Big]\mathrm{d}t + g(t)\,\mathrm{d}(R\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(\epsilon_\theta^x(Rx, h, t), \epsilon_\theta^h(Rx, h, t))\Big]\mathrm{d}t + g(t)\,\mathrm{d}(R\tilde{w}^x, \tilde{w}^h) \quad \text{// by equivariance}$$
$$= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\epsilon_\theta(u, t)\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad \text{// the Wiener process is isotropic}$$
and its initial distribution is $p_T^u(u_T) = p_T^u(y_T, h_T) = p_T^z(R^{-1}y_T, h_T) = p_T(R^{-1}y_T, h_T) = p_T(y_T, h_T) = p_T(u_T)$. Thus, the SDE of $\{u_t^\theta\}_{t=0}^T$ is exactly the same as that of $\{z_t^\theta\}_{t=0}^T$. This indicates that the distribution of $u_0^\theta$ is the same as the distribution of $z_0^\theta$, i.e., $p_0^u(u_0) = p_0^z(u_0) = p_\theta(u_0)$. Also note that $p_0^u(u_0) = p_0^z(R^{-1}y_0, h_0) = p_\theta(R^{-1}y_0, h_0)$. Thus, $p_\theta(u_0) = p_\theta(R^{-1}y_0, h_0)$, and consequently $p_\theta(Rx_0, h_0) = p_\theta(x_0, h_0)$. This means $p_\theta(z_0)$ is invariant to any orthogonal transformation, which includes rotational transformations as special cases.
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n\times n}$ is an orthogonal transformation. Taking the gradient of both sides of $E(Rx, h, c, t) = E(x, h, c, t)$ w.r.t. $x$, we get
$$R^\top \nabla_y E(y, h, c, t)|_{y=Rx} = \nabla_x E(x, h, c, t).$$
Multiplying both sides by $R$, we get
$$\nabla_y E(y, h, c, t)|_{y=Rx} = R\,\nabla_x E(x, h, c, t).$$
Let $\phi(z, c, t) = (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))$. Then we have
$$\phi(Rx, h, c, t) = (\nabla_y E(y, h, c, t) - \overline{\nabla_y E(y, h, c, t)},\; \nabla_h E(y, h, c, t))|_{y=Rx}$$
$$= (R\,\nabla_x E(x, h, c, t) - R\,\overline{\nabla_x E(x, h, c, t)},\; \nabla_h E(Rx, h, c, t))$$
$$= (R(\nabla_x E(x, h, c, t) - \overline{\nabla_x E(x, h, c, t)}),\; \nabla_h E(x, h, c, t)).$$
Thus, $\phi(z, c, t)$ is equivariant to $R$. Let $\hat{\epsilon}_\theta(z, c, t) = \epsilon_\theta(z, t) + \sqrt{\beta_{t|0}}\,\phi(z, c, t)$, which is a linear combination of two equivariant functions and is therefore also equivariant to $R$.

Then, Eq. (6) can be written as
$$\mathrm{d}z = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\hat{\epsilon}_\theta(z, c, t)\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T).$$
According to Theorem 1, its marginal distribution at time $t = 0$, i.e., $p_\theta(z_0|c)$, is invariant to any rotational transformation.
B SAMPLING
In Algorithm 1, we present the Euler-Maruyama method for sampling from EEGSDE in Eq. (6).

Algorithm 1 Sample from EEGSDE using the Euler-Maruyama method
Require: number of steps N
  ∆t = T/N
  z ← (x − x̄, h), where x ∼ N(0, 1), h ∼ N(0, 1)   {sample from the prior pT(zT)}
  for i = N to 1 do
    t ← i∆t
    g_x ← ∇_x E(z, c, t), g_h ← ∇_h E(z, c, t)   {calculate the gradient of the energy function}
    g ← (g_x − ḡ_x, g_h)   {subtract the CoM of the gradient}
    F ← f(t)z + g(t)²((1/√β_{t|0}) ε_θ(z, t) + g)
    ε ← (ε_x − ε̄_x, ε_h), where ε_x ∼ N(0, 1), ε_h ∼ N(0, 1)
    z ← z − F∆t + g(t)√∆t ε   {update z according to Eq. (6)}
  end for
  return z
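A Python transcription of Algorithm 1 is given below; `eps_theta`, `energy_grad`, `f`, `g` and `beta_bar` are stand-ins for the trained network, the energy gradient, and the SDE coefficients $f(t)$, $g(t)$, $\beta_{t|0}$ (interfaces assumed for illustration):

```python
import torch

def remove_com(v):
    return v - v.mean(dim=0, keepdim=True)

def eegsde_sample(eps_theta, energy_grad, f, g, beta_bar, M, n, d, c, T=1.0, N=1000):
    """Euler-Maruyama sampler for Eq. (6); a sketch of Algorithm 1."""
    dt = T / N
    x = remove_com(torch.randn(M, n))             # x_T ~ N_X(0, 1)
    h = torch.randn(M, d)                         # h_T ~ N(0, 1)
    for i in range(N, 0, -1):
        t = i * dt
        gx, gh = energy_grad(x, h, c, t)          # gradients of the energy E(z, c, t)
        gx = remove_com(gx)                       # subtract the CoM of the gradient
        eps_x, eps_h = eps_theta(x, h, t)
        drift_x = f(t) * x + g(t) ** 2 * (eps_x / beta_bar(t) ** 0.5 + gx)
        drift_h = f(t) * h + g(t) ** 2 * (eps_h / beta_bar(t) ** 0.5 + gh)
        x = x - drift_x * dt + g(t) * dt ** 0.5 * remove_com(torch.randn_like(x))
        h = h - drift_h * dt + g(t) * dt ** 0.5 * torch.randn_like(h)
    return x, h
```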
C CONDITIONAL NOISE PREDICTION NETWORKS
In Eq. (6), we can alternatively use a conditional noise prediction network $\epsilon_\theta(z, c, t)$ for stronger guidance, as follows:
$$\mathrm{d}z = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, c, t) + (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))\Big)\Big]\mathrm{d}t + g(t)\,\mathrm{d}(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T).$$
The conditional noise prediction network is trained similarly to the unconditional one, using the following MSE loss:
$$\min_\theta \mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} w(t)\,\|\epsilon_\theta(z_t, c, t) - \epsilon_t\|^2.$$
D PARAMETERIZATION OF NOISE PREDICTION NETWORKS
We parameterize the noise prediction network following Hoogeboom et al. (2022), and we provide the specific parameterization for completeness. For the unconditional model $\epsilon_\theta(z, t)$, we first concatenate each atom feature $h^i$ and $t$, which gives $h^{i\prime} = (h^i, t)$. Then we input $x$ and $h' = (h^{1\prime}, \ldots, h^{M\prime})$ to the EGNN as follows:
$$(a^x, a^{h'}) = \text{EGNN}(x, h') - (x, 0).$$
Finally, we subtract the CoM of $a^x$, which gives the parameterization of $\epsilon_\theta(z, t)$:
$$\epsilon_\theta(z, t) = (a^x - \bar{a}^x, a^h),$$
where $a^h$ comes from discarding the last component of $a^{h'}$, which corresponds to the time.
For the conditional model $\epsilon_\theta(z, c, t)$, we additionally concatenate $c$ to the atom feature $h^i$, i.e., $h^{i\prime} = (h^i, t, c)$, and the other parts of the parameterization remain the same.
E DETAILS OF ENERGY FUNCTIONS
E.1 PARAMETERIZATION OF TIME-DEPENDENT MODELS
The time-dependent property prediction model g(zt, t) is parameterized using the second component in the output of EGNN (see Section 3) followed by a decoder (Dec):
$$g(z_t, t) = \text{Dec}(\text{EGNN}^h(x_t, h_t')), \quad h_t' = \text{concatenate}(h_t, t), \qquad (11)$$
where the concatenation is performed on each atom feature, and the decoder is a small neural network based on Satorras et al. (2021b). This parameterization ensures that the energy function $E(z_t, c, t)$ is invariant to orthogonal transformations, and thus the distribution of generated samples is also invariant according to Theorem 2.
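A sketch of this parameterization is given below, where `egnn_encoder` stands in for the invariant $\text{EGNN}^h$ backbone and the mean pooling over atoms is our own illustrative choice:

```python
import torch
import torch.nn as nn

class TimeConditionedPredictor(nn.Module):
    """g(z_t, t) = Dec(EGNN^h(x_t, concatenate(h_t, t))), as in Eq. (11)."""
    def __init__(self, egnn_encoder, hidden_dim):
        super().__init__()
        self.encoder = egnn_encoder               # invariant EGNN^h backbone (assumed given)
        self.dec = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, x_t, h_t, t):
        t_col = torch.full_like(h_t[:, :1], float(t))   # append t to every atom feature
        a_h = self.encoder(x_t, torch.cat([h_t, t_col], dim=-1))
        return self.dec(a_h.mean(dim=0))                # mean-pool atoms, predict a scalar
```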
Similarly, the time-dependent multi-label classifier is parameterized by EGNN as
$$m(z_t, t) = \sigma(\text{Dec}(\text{EGNN}^h(x_t, h_t'))), \quad h_t' = \text{concatenate}(h_t, t).$$
The multi-label classifier has the same backbone as the property prediction model in Eq. (11), except that the decoder outputs a vector of dimension $L$ and the sigmoid function $\sigma$ is adopted for multi-label classification. Similarly to Eq. (11), the EGNN in the multi-label classifier guarantees the invariance of the distribution of generated samples according to Theorem 2.
E.2 TRAINING OBJECTIVES OF ENERGY FUNCTIONS
Time-dependent property prediction model. Since the quantum property is a scalar, we train the time-dependent property prediction model g(zt, t) using the ℓ1 loss
$$\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} |g(z_t, t) - c|, \quad \text{where } t \text{ is uniformly sampled from } [0, T].$$
Time-dependent multi-label classifier. Since the fingerprint is a bitmap, predicting it can be viewed as a multi-label classification task. Thus, we use a time-dependent multi-label classifier $m(z_t, t)$ and train it using the binary cross entropy loss
$$-\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} \sum_{l=1}^L c_l \log m_l(z_t, t) + (1 - c_l)\log(1 - m_l(z_t, t)),$$
where $t$ is uniformly sampled from $[0, T]$, and $m_l(z_t, t)$ is the $l$-th component of $m(z_t, t)$.
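Both objectives are straightforward to implement; the sketch below assumes `g_model` and a logits-producing classifier as stand-ins for the two time-dependent networks, with $(x_t, h_t)$ obtained from the forward process:

```python
import torch
import torch.nn.functional as F

def property_loss(g_model, x_t, h_t, t, c):
    """ell_1 regression loss for the time-dependent property model g(z_t, t)."""
    return (g_model(x_t, h_t, t) - c).abs().mean()

def fingerprint_loss(m_model_logits, x_t, h_t, t, c_bits):
    """Binary cross entropy for the time-dependent multi-label classifier m(z_t, t).
    Uses the logits variant for numerical stability; the sigmoid of the paper's
    parameterization is folded into the loss (an implementation choice of ours)."""
    logits = m_model_logits(x_t, h_t, t)          # pre-sigmoid scores of length L
    return F.binary_cross_entropy_with_logits(logits, c_bits)
```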
F EXPERIMENTAL DETAILS
F.1 HOW TO GENERATE MOLECULES TARGETED TO MULTIPLE QUANTUM PROPERTIES
When we want to generate molecules with $K$ quantum properties $c = (c_1, c_2, \ldots, c_K)$, we combine the energy functions for single properties linearly: $E(z_t, c, t) = \sum_{k=1}^K E_k(z_t, c_k, t)$, where $E_k(z_t, c_k, t) = s_k|g_k(z_t, t) - c_k|^2$ is the energy function for the $k$-th property, $s_k$ is its scaling factor and $g_k(z_t, t)$ is the time-dependent property prediction model for the $k$-th property. Then we use the gradient of $E(z_t, c, t)$ to guide the reverse SDE as described in Eq. (6).
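Combining energies then amounts to accumulating the scaled per-property gradients; a sketch reusing the hypothetical single-property helper `property_energy_grad` from the Section 5 sketch:

```python
import torch

def multi_property_energy_grad(models, scales, targets, x, h, t):
    """Gradient of E = sum_k s_k |g_k(z, t) - c_k|^2, accumulated over the K properties."""
    grad_x, grad_h = torch.zeros_like(x), torch.zeros_like(h)
    for g_k, s_k, c_k in zip(models, scales, targets):
        gx, gh = property_energy_grad(g_k, x, h, t, c_k, s=s_k)  # single-property helper above
        grad_x, grad_h = grad_x + gx, grad_h + gh
    return grad_x, grad_h
```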
F.2 THE “U-BOUND” AND “#ATOMS” BASELINES
The “U-bound” and “#Atoms” baselines are from Hoogeboom et al. (2022). The “U-bound” baseline shuffles the labels in $D_b$ and then calculates the loss of $\phi_p$ on the shuffled data, which can be regarded as an upper bound of the MAE. The “#Atoms” baseline predicts a quantum property $c$ using only the number of atoms in a molecule.
F.3 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
For the noise prediction network, we use the same setting as EDM (Hoogeboom et al., 2022) for a fair comparison: the model is trained for ∼2000 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999.
The EGNN used in the energy function has 192 hidden features and 7 layers. We train it for 2000 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an EMA with a rate of 0.9999.
During evaluation, we need to generate a set of molecules. Following EDM (Hoogeboom et al., 2022), we first sample the number of atoms in a molecule M ∼ p(M) and the property value c ∼ p(c|M) (or c1 ∼ p(c1|M), c2 ∼ p(c2|M), . . . , cK ∼ p(cK|M) for multiple properties). Here p(M) is the distribution of molecule sizes on the training data, and p(c|M) is the distribution of the property on the training data. Then we generate a molecule given M and c.
F.4 GENERATING MOLECULES WITH TARGET STRUCTURES
Computation of Tanimoto similarity. Let $S_g$ be the set of bits that are set to 1 in the fingerprint of a generated molecule, and $S_t$ be the set of bits that are set to 1 in the target structure. The Tanimoto similarity is defined as $|S_g \cap S_t| / |S_g \cup S_t|$, where $|\cdot|$ denotes the number of elements in a set.

Experimental details on QM9. For the backbone of the noise prediction network, we use a three-layer MLP with 768, 512 and 192 hidden nodes as an embedding to encode the fingerprint, and add its output to the embedding of the atom features h, which is then fed into the following EGNN. The EGNN has 256 hidden features and 9 layers. We train it for 1500 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999. The energy function is trained for 1750 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an EMA with a rate of 0.9999. Its EGNN has 192 hidden features and 7 layers. For the baseline cG-SchNet, we reproduce it using the public code. Since the default data split in cG-SchNet is different from ours, we train cG-SchNet under the same data split as ours for a fair comparison. We report results at 200 epochs (there is no gain on the similarity metric after 150 epochs). We evaluate the Tanimoto similarity on the whole test set.
Experimental details on GEOM-Drug. We use the same data split of GEOM-Drug as Hoogeboom et al. (2022), where the training, validation and test sets include 554K, 70K and 70K samples respectively. We train the noise prediction network and the energy function on the training set. For the noise prediction network, we use the recommended hyperparameters of EDM (Hoogeboom et al., 2022), where the EGNN has 256 hidden features and 4 layers, and the other parts are the same as in QM9. We train for 10 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999. The backbone of the energy function is the same as in QM9, and we train the energy function for 14 epochs. We evaluate the Tanimoto similarity on 10K molecules randomly selected from the test set.
G ADDITIONAL RESULTS
G.1 ABLATION STUDY ON NOISE PREDICTION NETWORKS AND ENERGY GUIDANCE
We perform an ablation study on the conditioning, i.e., using conditional or unconditional noise prediction networks, and on the energy guidance. When neither the conditioning nor the energy guidance is adopted, a single unconditional model cannot perform conditional generation, and therefore we only report its atom stability and molecule stability. As shown in Table 5, Table 6 and Table 7, the conditional noise prediction network improves the MAE compared to the unconditional one, and the energy guidance improves the MAE compared to a sole conditional model. Neither the conditioning nor the energy guidance affects the atom stability and the molecule stability much.
G.2 RESULTS ON NOVELTY, ATOM STABILITY AND MOLECULE STABILITY
For completeness, we also report novelty, atom stability and molecule stability on 10K generated molecules. Below we briefly introduce these metrics.
• Novelty (Simonovsky & Komodakis, 2018) is the proportion of generated molecules that do not appear in the training set. Specifically, let $G$ be the set of generated molecules; the novelty is calculated as $1 - \frac{|G \cap D_b|}{|G|}$. Note that the novelty is evaluated on $D_b$, since the reproduced conditional EDM and our method are trained on $D_b$. This leads to an inflated value compared to the one evaluated on the whole dataset (Hoogeboom et al., 2022).
• Atom stability (AS) (Hoogeboom et al., 2022) is the proportion of atoms that have the right valency. Molecule stability (MS) (Hoogeboom et al., 2022) is the proportion of generated molecules where all atoms are stable. We use the official implementation for the two metrics from the EDM paper (Hoogeboom et al., 2022).
As shown in Table 8 and Table 9, conditional EDM and EEGSDE with a small scaling factor have a slightly better stability, and EEGSDE with a large scaling factor has a slightly better novelty in general. The additional energy changes the distribution of generated molecules, which improves the novelty of generated molecules in general. Since there is a tradeoff between novelty and stability (see the caption of Table 5 in the EDM paper (Hoogeboom et al., 2022)), a slight decrease on the stability is possible when the scaling factor is large.
G.3 VISUALIZATION OF THE EFFECT OF THE SCALING FACTOR
We visualize the effect of the scaling factor in Figure 4, where the generated structures align better as the scaling factor grows.
G.4 REPRODUCTION
We compare our reproduced results and the original results of conditional EDM (Hoogeboom et al., 2022) in Table 10. The results are consistent.
G.5 GENERATING MOLECULES WITH TARGET STRUCTURES ON GEOM-DRUG
We plot generated molecules on GEOM-Drug in Figure 5, and it can be observed that the atom types of generated molecules with EEGSDE often match the target better than conditional EDM.
G.6 THE DISTRIBUTIONS OF PROPERTIES
We plot the distribution of the properties of the training set, as well as the distribution of the properties of generated molecules (calculated by the Gaussian software). As shown in Figure 6, these distributions match well.

1. What is the focus and contribution of the paper regarding conditional molecular generation?
2. What are the strengths and weaknesses of the proposed EEGSDE method, particularly in comparison to prior works like EDM?
3. How does the property prediction network differ from traditional energy models or potential energy guidance?
4. What are the limitations of using MAE as an evaluation metric, and how representative are the novelty scores?
5. How do the choice of metrics, such as atom stability and molecular stability, affect the assessment of the method's effectiveness?
6. Are there any concerns regarding the reproducibility of the results due to missing training details or other factors?

Summary Of The Paper
In this work, the authors present an extension of equivariant Diffusion Models (as presented by Hoogeboom et al.) for conditional molecular generation. The method, named “Equivariant Energy-Guided SDE” (EEGSDE), introduces an additional property prediction network that acts as a guiding force in the generation process. The property prediction network is trained in parallel with the reverse diffusion process, and the network's gradient is taken to be the additional force.
The authors evaluate their conditional generation on multiple metrics—generation novelty, atom stability, molecule stability and the mean absolute error given by the trained property classifier. Under these metrics, the main improvement of the work is in terms of MAE and novelty, with the note that it is unclear how representative the novelty metrics are due to the training procedure of the prediction network.
Strengths And Weaknesses
With their proposed method the authors tackle the currently relevant and exciting problem of conditional molecular generation. The explicit focus on conditional generation is an important contribution to the molecular design space. The paper is very well structured and sufficiently places their contribution in the context of prior work on diffusion models, although this connection could be strengthened by making it explicit in the name of the method (e.g. Equivariant Energy-Guided Diffusion Models).
However, there are a number of issues with the paper that I would like to see addressed:
Conceptually, how does the conditional generation using the prediction method differ from the conditional generation as done by Hoogeboom et al.? Most importantly, what are the differences between conditional samples generated by the presented method and the EDM method?
I believe it would be interesting/needed to include a note as to why the loaded term “Energy Model” is used to refer to the property prediction model. Aside from the gradient of the property prediction network being used as a guiding force, I do not see any direct relation between the MSE loss and the molecular energy. Using the term “Energy” in this context might give the reader the impression some form of potential energy or free energy guidance is used.
It is unclear to me how representative the MAE as a metric for evaluation is. Am I correct in assuming that the trained property prediction model of the presented method is used to evaluate the others?
Related, it would be good if the authors could clarify how representative the current novelty scores are and which subsets of the dataset are used for calculating the novelty score for each model. As I understand it, only a subset of the data is used and as such the novelty score should be inflated compared to when using the entire dataset. Contrary to this expectation, the novelty score for the EDM method seems lower than presented in the original paper (Table 2 of Hoogeboom et al.).
Also related, I’m unsure about how representative atom stability and molecular stability are as a metric. Especially in the context of this work where the bonds have to be estimated heuristically.
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is well-structured and very readable.
Quality: The paper is of high quality in the sense of how the algorithms, tables and figures are organised and presented.
Novelty: The authors primarily build their work on Equivariant Diffusion models. The introduction of the property prediction process as a guiding force seems however relatively novel. The extent to which this is an effective addition is however unclear at this moment in time.
Reproducibility: Clear training details are missing in the main body of the paper. Code is however made available.
ICLR | Title
Equivariant Energy-Guided SDE for Inverse Molecular Design
Abstract
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties. In this paper, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. Empirically, under the guidance of designed energy functions, EEGSDE significantly improves the baseline on QM9, in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
1 INTRODUCTION
The discovery of new molecules with desired properties is critical in many fields, such as the drug and material design (Hajduk & Greer, 2007; Mandal et al., 2009; Kang et al., 2006; Pyzer-Knapp et al., 2015). However, brute-force search in the overwhelming molecular space is extremely challenging. Recently, inverse molecular design (Zunger, 2018) provides an efficient way to explore the molecular space, which directly predicts promising molecules that exhibit desired properties.
A natural way of inverse molecular design is to train a conditional generative model (SanchezLengeling & Aspuru-Guzik, 2018). Formally, it learns a distribution of molecules conditioned on certain properties from data, and new molecules are predicted by sampling from the distribution with the condition set to desired properties. Among them, equivariant diffusion models (EDM) (Hoogeboom et al., 2022) leverage the current state-of-art diffusion models (Ho et al., 2020), which involves a forward process to perturb data and a reverse process to generate 3D molecules conditionally or unconditionally. While EDM generates stable and valid 3D molecules, we argue that a single conditional generative model is insufficient for generating accurate molecules that exhibit desired properties (see Table 1 and Table 3 for an empirical verification).
In this work, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE formalizes the generation process as an equivariant stochastic differential equation, and plugs in energy functions to improve the controllability of generation. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations.
We apply EEGSDE to various applications by carefully designing task-specific energy functions. When targeted to quantum properties, EEGSDE is able to generate more accurate molecules than
∗Equal contribution. †Correspondence to: C. Li and J. Zhu.
EDM, e.g., reducing the mean absolute error by more than 30% on the dipole moment property. When targeted to specific molecular structures, EEGSDE better capture the structure information in molecules than EDM, e.g, improving the similarity to target structures by more than 10%. Furthermore, EEGSDE is able to generate molecules targeted to multiple properties by combining the corresponding energy functions linearly. These demonstrate that our EEGSDE enables a flexible and controllable generation of molecules, providing a smart way to explore the chemical space.
2 RELATED WORK
Diffusion models are initially proposed by Sohl-Dickstein et al. (2015). Recently, they are better understood in theory by connecting it to score matching and stochastic differential equations (SDE) (Ho et al., 2020; Song et al., 2020). After that, diffusion models have shown strong empirical performance in many applications Dhariwal & Nichol (2021); Ramesh et al. (2022); Chen et al. (2020); Kong et al. (2020). There are also variants proposed to improve or accelerate diffusion models (Nichol & Dhariwal, 2021; Vahdat et al., 2021; Dockhorn et al., 2021; Bao et al., 2022b;a; Salimans & Ho, 2022; Lu et al., 2022).
Guidance is a technique to control the generation process of diffusion models. Initially, Song et al. (2020); Dhariwal & Nichol (2021) use classifier guidance to generate samples belonging to a class. Then, the guidance is extended to CLIP (Radford et al., 2021) for text to image generation, and semantic-aware energy (Zhao et al., 2022) for image-to-image translation. Prior guidance methods focus on image data, and are nontrivial to apply to molecules, since they do not consider the geometric symmetry. In contrast, our work proposes a general guidance framework for 3D molecules, where an invariant energy function is employed to leverage the geometric symmetry of molecules.
Molecule generation. Several works attempt to model molecules as 3D objects via deep generative models Nesterov et al. (2020); Gebauer et al. (2019); Satorras et al. (2021a); Hoffmann & Noé (2019); Hoogeboom et al. (2022). Among them, the most relevant one is the equivariant diffusion model (EDM) (Hoogeboom et al., 2022), which generates molecules in an iterative denoising manner. Benefiting from recent advances of diffusion models, EDM is stable to train and is able to generate high quality molecules. We provide a formal description of EDM in Section 3. Some other methods generate simplified representations of molecules, such as 1D SMILES strings (Weininger, 1988) and 2D graphs of molecules. These include variational autoencoders (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018; Simonovsky & Komodakis, 2018; Liu et al., 2018), normalizing flows (Madhawa et al., 2019; Zang & Wang, 2020; Luo et al., 2021), generative adversarial networks (Bian et al., 2019; Assouel et al., 2018), and autoregressive models (Popova et al., 2019; Flam-Shepherd et al., 2021). There are also methods on generating torsion angles in molecules. For
instance, Torsional Diffusion (Jing et al., 2022) employs the SDE formulation of diffusion models to model torsion angles in a given 2D molecular graph for the conformation generation task.
Inverse molecular design. Generative models have been applied to inverse molecular design. For example, conditional autoregressive models (Gebauer et al., 2022) and EDM (Hoogeboom et al., 2022) directly generate 3D molecules with desired quantum properties. Gebauer et al. (2019) also finetune pretrained generative models on a biased subset to generate 3D molecules with small HOMO-LUMO gaps. In contrast to these conditional generative models, our work further proposes a guidance method, a flexible way to control the generation process of molecules. Some other methods apply optimization methods to search molecules with desired properties, such as reinforcement learning (Zhou et al., 2019; You et al., 2018) and genetic algorithms (Jensen, 2019; Nigam et al., 2019). These optimization methods generally consider the 1D SMILES strings or 2D graphs of molecules, and the 3D information is not provided.
3 BACKGROUND
3D representation of molecules. Suppose a molecule has M atoms and let xi ∈ Rn (n = 3 in general) be the coordinate of the ith atom. The collection of coordinates x = (x1, . . . ,xM ) ∈ RMn determines the conformation of the molecule. In addition to the coordinate, each atom is also associated with an atom feature, e.g., the atom type. We use hi ∈ Rd to represent the atom feature of the ith atom, and use h = (h1, . . . ,hM ) ∈ RMd to represent the collection of atom features in a molecule. We use a tuple z = (x,h) to represent a molecule, which contains both the 3D geometry information and the atom feature information.
Equivariance and invariance. Suppose R is a transformation. A distribution p(x,h) is said to be invariant to R, if p(x,h) = p(Rx,h) holds for all x and h. Here Rx = (Rx1, . . . ,RxM ) is applied to each coordinate. A function (ax,ah) = f(x,h) that have two components ax,ah in its output is said to be equivariant to R, if f(Rx,h) = (Rax,ah) holds for all x and h. A function f(x,h) is said to be invariant to R, if f(Rx,h) = f(x,h) holds for all x and h.
Zero CoM subspace. It has been shown that the invariance to translational and rotational transformations is an important factor for the success of 3D molecule modeling (Köhler et al., 2020; Xu et al., 2022). However, the translational invariance is impossible for a distribution in the full space RMn (Satorras et al., 2021a). Nevertheless, we can view two collections of coordinates x and y as equivalent if x can be translated from y, since the translation doesn’t change the identity of a molecule. Such an equivalence relation partitions the whole space RMn into disjoint equivalence classes. Indeed, all elements in the same equivalence classes represent the same conformation, and we can use the element with zero center of mass (CoM), i.e., 1M ∑M i=1 x
i = 0, as the specific representation. These elements collectively form the zero CoM linear subspace X (Xu et al., 2022; Hoogeboom et al., 2022), and the rest of the paper always uses elements in X to represent conformations.
Equivariant graph neural network. Satorras et al. (2021b) propose equivariant graph neural networks (EGNNs), which incorporate the equivariance inductive bias into neural networks. Specifically, (ax,ah) = EGNN(x,h) is a composition of L equivariant convolutional layers. The l-th layer takes the tuple (xl,hl) as the input and outputs an updated version (xl+1,hl+1), as follows:
$$m^{ij} = \Phi_m(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_m), \quad w^{ij} = \Phi_w(m^{ij}; \theta_w), \quad h_{l+1}^i = \Phi_h\Big(h_l^i, \sum_{j \ne i} w^{ij} m^{ij}; \theta_h\Big),$$
$$x_{l+1}^i = x_l^i + \sum_{j \ne i} \frac{x_l^i - x_l^j}{\|x_l^i - x_l^j\|_2 + 1}\,\Phi_x(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_x),$$
where Φm,Φw,Φh,Φx are parameterized by fully connected neural networks with parameters θm,θw,θh,θx respectively, and eij are optional feature attributes. We can verify that these layers are equivariant to orthogonal transformations, which include rotational transformations as special cases. As their composition, the EGNN is also equivariant to orthogonal transformations. Furthermore, let EGNNh(x,h) = ah, i.e., the second component in the output of the EGNN. Then EGNNh(x,h) is invariant to orthogonal transformations.
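To make the layer concrete, here is a minimal PyTorch sketch of one such equivariant convolutional layer (our illustration, not the official EGNN code; fully connected edges are assumed and the optional edge attributes $e^{ij}$ are omitted):

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """One E(n)-equivariant layer, mirroring the update equations above."""
    def __init__(self, d, hidden=64):
        super().__init__()
        # Phi_m: edge message from (h_i, h_j, squared distance).
        self.phi_m = nn.Sequential(nn.Linear(2 * d + 1, hidden), nn.SiLU(),
                                   nn.Linear(hidden, hidden), nn.SiLU())
        # Phi_w: soft edge weight in (0, 1).
        self.phi_w = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())
        # Phi_h: node feature update from (h_i, aggregated messages).
        self.phi_h = nn.Sequential(nn.Linear(d + hidden, hidden), nn.SiLU(),
                                   nn.Linear(hidden, d))
        # Phi_x: scalar coefficient for the coordinate update.
        self.phi_x = nn.Sequential(nn.Linear(2 * d + 1, hidden), nn.SiLU(),
                                   nn.Linear(hidden, 1))

    def forward(self, x, h):
        # x: (M, n) coordinates, h: (M, d) atom features.
        M = x.shape[0]
        diff = x[:, None, :] - x[None, :, :]           # x_i - x_j, (M, M, n)
        dist2 = (diff ** 2).sum(-1, keepdim=True)      # squared distances
        hi = h[:, None, :].expand(M, M, -1)
        hj = h[None, :, :].expand(M, M, -1)
        edge_in = torch.cat([hi, hj, dist2], dim=-1)
        m = self.phi_m(edge_in)                        # messages m_ij
        w = self.phi_w(m)                              # edge weights w_ij
        mask = (1.0 - torch.eye(M))[..., None]         # exclude j == i
        h_new = self.phi_h(torch.cat([h, (w * m * mask).sum(1)], dim=-1))
        coef = self.phi_x(edge_in) * mask              # the "+1" below matches the equation
        x_new = x + (diff / (dist2.sqrt() + 1) * coef).sum(1)
        return x_new, h_new
```

Rotating the input coordinates rotates `x_new` identically while leaving `h_new` unchanged, which is exactly the equivariance property stated above.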
Equivariant diffusion models (EDM) (Hoogeboom et al., 2022) are a variant of diffusion models for molecule data. EDMs gradually inject noise to the molecule z = (x,h) via a forward process
$$q(z_{1:N}|z_0) = \prod_{n=1}^N q(z_n|z_{n-1}), \quad q(z_n|z_{n-1}) = \mathcal{N}_X(x_n|\sqrt{\alpha_n}\,x_{n-1}, \beta_n)\,\mathcal{N}(h_n|\sqrt{\alpha_n}\,h_{n-1}, \beta_n), \quad (1)$$
where $\alpha_n$ and $\beta_n$ represent the noise schedule and satisfy $\alpha_n + \beta_n = 1$, and $\mathcal{N}_X$ represents the Gaussian distribution in the zero CoM subspace $X$ (see its formal definition in Appendix A.2). Let $\bar\alpha_n = \alpha_1 \alpha_2 \cdots \alpha_n$, $\bar\beta_n = 1 - \bar\alpha_n$ and $\tilde\beta_n = \beta_n \bar\beta_{n-1}/\bar\beta_n$. To generate samples, the forward process is reversed using a Markov chain:
$$p(z_{0:N}) = p(z_N)\prod_{n=1}^N p(z_{n-1}|z_n), \quad p(z_{n-1}|z_n) = \mathcal{N}_X(x_{n-1}|\mu_n^x(z_n), \tilde\beta_n)\,\mathcal{N}(h_{n-1}|\mu_n^h(z_n), \tilde\beta_n). \quad (2)$$
Here $p(z_N) = \mathcal{N}_X(x_N|0, 1)\,\mathcal{N}(h_N|0, 1)$. The mean $\mu_n(z_n) = (\mu_n^x(z_n), \mu_n^h(z_n))$ is parameterized by a noise prediction network $\epsilon_\theta(z_n, n)$, and is trained using an MSE loss, as follows:
$$\mu_n(z_n) = \frac{1}{\sqrt{\alpha_n}}\Big(z_n - \frac{\beta_n}{\sqrt{\bar\beta_n}}\,\epsilon_\theta(z_n, n)\Big), \qquad \min_\theta\; \mathbb{E}_n \mathbb{E}_{q(z_0, z_n)}\, w(n)\,\|\epsilon_\theta(z_n, n) - \epsilon_n\|^2,$$
where $\epsilon_n = (z_n - \sqrt{\bar\alpha_n}\,z_0)/\sqrt{\bar\beta_n}$ is the standard Gaussian noise injected to $z_0$ and $w(n)$ is the weight term. Hoogeboom et al. (2022) show that the distribution of generated samples $p(z_0)$ is invariant to rotational transformations if the noise prediction network is equivariant to orthogonal transformations. In Section 4.2, we extend this proposition to the SDE formulation of molecular diffusion modelling.
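To illustrate this objective, the sketch below draws $z_n$ directly from the closed-form marginal $q(z_n|z_0)$ and evaluates the MSE loss, with the coordinate noise projected onto the zero CoM subspace (a minimal sketch, assuming a generic `eps_net(x, h, n)` interface; none of these names come from the released EDM code):

```python
import torch

def zero_com(x):
    """Project coordinates (M, n) onto the zero CoM subspace X."""
    return x - x.mean(dim=0, keepdim=True)

def edm_loss(eps_net, x0, h0, alpha_bar):
    """One-sample EDM-style training loss; alpha_bar holds the cumulative schedule."""
    n = torch.randint(0, len(alpha_bar), (1,)).item()
    a = alpha_bar[n]                          # \bar{alpha}_n
    b = 1.0 - a                               # \bar{beta}_n
    eps_x = zero_com(torch.randn_like(x0))    # coordinate noise restricted to X
    eps_h = torch.randn_like(h0)
    x_n = a ** 0.5 * zero_com(x0) + b ** 0.5 * eps_x
    h_n = a ** 0.5 * h0 + b ** 0.5 * eps_h
    pred_x, pred_h = eps_net(x_n, h_n, n)     # predicted noise
    pred_x = zero_com(pred_x)                 # keep the prediction in X as well
    return ((pred_x - eps_x) ** 2).sum() + ((pred_h - eps_h) ** 2).sum()
```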
Hoogeboom et al. (2022) also present a conditional version of EDM for inverse molecular design by adding an extra input of the condition c to the noise prediction network as ϵθ(zn, c, n).
4 EQUIVARIANT ENERGY-GUIDED SDE
In this part, we introduce our equivariant energy-guided SDE (EEGSDE), as illustrated in Figure 1. EEGSDE is based on the SDE formulation of molecular diffusion modeling, which is described in Section 4.1 and Section 4.2. Then, we formally present our EEGSDE that incorporates an energy function to guide the molecular generation in Section 4.3. We provide derivations in Appendix A.
4.1 SDE IN THE PRODUCT SPACE
Recall that a molecule is represented as a tuple $z = (x,h)$, where $x = (x^1, \ldots, x^M) \in X$ represents the conformation and $h = (h^1, \ldots, h^M) \in \mathbb{R}^{Md}$ represents atom features. Here $X = \{x \in \mathbb{R}^{Mn} : \frac{1}{M}\sum_{i=1}^M x^i = 0\}$ is the zero CoM subspace mentioned in Section 3, and $d$ is the feature dimension. We first introduce a continuous-time diffusion process $\{z_t\}_{0 \le t \le T}$ in the product space $X \times \mathbb{R}^{Md}$, which gradually adds noise to $x$ and $h$. This can be described by the forward SDE:
$$dz = f(t)z\,dt + g(t)\,d(w^x, w^h), \quad z_0 \sim q(z_0), \quad (3)$$
where $f(t)$ and $g(t)$ are two scalar functions, $w^x$ and $w^h$ are independent standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and the SDE starts from the data distribution $q(z_0)$. Note that $w^x$ can be constructed by subtracting the CoM of a standard Wiener process $w$ in $\mathbb{R}^{Mn}$, i.e., $w^x = w - \bar w$, where $\bar w = \frac{1}{M}\sum_{i=1}^M w^i$ is the CoM of $w = (w^1, \ldots, w^M)$. It can be shown that the SDE has a linear Gaussian transition kernel $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$ from $z_s$ to $z_t$, where $0 \le s < t \le T$. Specifically, there exist two scalars $\alpha_{t|s}$ and $\beta_{t|s}$ such that $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\mathcal{N}_X$ denotes the Gaussian distribution in the subspace $X$; see Appendix A.2 for its formal definition. Indeed, the forward process of EDM in Eq. (1) is a discretization of the forward SDE in Eq. (3).
To generate molecules, we reverse Eq. (3) from $T$ to $0$. Such a time reversal forms another SDE, which can be represented by both the score function form and the noise prediction form:
$$dz = \Big[f(t)z - g(t)^2 \underbrace{(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z))}_{\text{score function form}}\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim q_T(z_T). \quad (4)$$
Here $q_t(z)$ is the marginal distribution of $z_t$, $\nabla_x \log q_t(z)$ is the gradient of $\log q_t(z)$ w.r.t. $x$,¹ $\overline{\nabla_x \log q_t(z)} = \frac{1}{M}\sum_{i=1}^M \nabla_{x^i} \log q_t(z)$ is the CoM of $\nabla_x \log q_t(z)$, $dt$ is the infinitesimal negative timestep, $\tilde w^x$ and $\tilde w^h$ are independent reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = (z_t - \sqrt{\alpha_{t|0}}\,z_0)/\sqrt{\beta_{t|0}}$ is the standard Gaussian noise injected to $z_0$. Compared to the original SDE introduced by Song et al. (2020), our reverse SDE in Eq. (4) additionally subtracts the CoM of $\nabla_x \log q_t(z)$. This ensures $x_t$ always stays in the zero CoM subspace as time flows back. To sample from the reverse SDE in Eq. (4), we use a noise prediction network $\epsilon_\theta(z_t, t)$ to estimate $\mathbb{E}_{q(z_0|z_t)}\epsilon_t$ by minimizing the MSE loss $\min_\theta \mathbb{E}_t \mathbb{E}_{q(z_0,z_t)}\, w(t)\,\|\epsilon_\theta(z_t, t) - \epsilon_t\|^2$, where $t$ is uniformly sampled from $[0, T]$, and $w(t)$ controls the weight of the loss term at time $t$. Note that the noise $\epsilon_t$ is in the product space $X \times \mathbb{R}^{Md}$, so we subtract the CoM of the predicted noise of $x_t$ to ensure $\epsilon_\theta(z_t, t)$ is also in the product space.
Substituting ϵθ(zt, t) into Eq. (4), we get an approximate reverse-time SDE parameterized by θ:
$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t)\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim p_T(z_T), \quad (5)$$
where pT (zT ) = NX(xT |0, 1)N (hT |0, 1) is a Gaussian prior in the product space that approximates qT (zT ). We define pθ(z0) as the marginal distribution of Eq. (5) at time t = 0, which is the distribution of our generated samples. Similarly to the forward process, the reverse process of EDM in Eq. (2) is a discretization of the reverse SDE in Eq. (5).
4.2 EQUIVARIANT SDE
To leverage the geometric symmetry in 3D molecular conformation, $p_\theta(z_0)$ should be invariant to translational and rotational transformations. As mentioned in Section 3, the translational invariance of $p_\theta(z_0)$ is already satisfied by considering the zero CoM subspace. The rotational invariance can be satisfied if the noise prediction network is equivariant to orthogonal transformations, as summarized in the following theorem:
Theorem 1. Let $(\epsilon_\theta^x(z_t,t), \epsilon_\theta^h(z_t,t)) = \epsilon_\theta(z_t,t)$, where $\epsilon_\theta^x(z_t,t)$ and $\epsilon_\theta^h(z_t,t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n\times n}$, $\epsilon_\theta(z_t,t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t,h_t,t), \epsilon_\theta^h(x_t,h_t,t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
As mentioned in Section 3, the EGNN satisfies the equivariance constraint, and we parameterize ϵθ(zt, t) using an EGNN following Hoogeboom et al. (2022). See details in Appendix D.
4.3 EQUIVARIANT ENERGY-GUIDED SDE
Now we describe equivariant energy-guided SDE (EEGSDE), which guides the generated molecules of Eq. (5) towards desired properties c by leveraging a time-dependent energy function E(z, c, t):
$$dz = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t) + \underbrace{(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))}_{\text{energy gradient taken in the product space}}\Big)\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim p_T(z_T), \quad (6)$$
1While qt(z) is defined in X ×RMd, its domain can be extended to RMn ×RMd and the gradient is valid. See Remark 1 in Appendix A.2 for details.
which defines a distribution $p_\theta(z_0|c)$ conditioned on the property $c$. Here the CoM $\overline{\nabla_x E(z, c, t)}$ of the gradient is subtracted to keep the SDE in the product space, which ensures the translational invariance of $p_\theta(z_0|c)$. Besides, the rotational invariance is satisfied by using an energy invariant to orthogonal transformations, as summarized in the following theorem:
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Note that we can also use a conditional model $\epsilon_\theta(z, c, t)$ in Eq. (6). See Appendix C for details. To sample from $p_\theta(z_0|c)$, various solvers can be used for Eq. (6), such as the Euler-Maruyama method (Song et al., 2020) and the Analytic-DPM sampler (Bao et al., 2022b;a). We present the Euler-Maruyama method as an example in Algorithm 1 in Appendix B.
4.4 HOW TO DESIGN THE ENERGY FUNCTION
Our EEGSDE is a general framework, which can be applied to various applications by specifying different energy functions $E(z, c, t)$. For example, we can design the energy function according to the consistency between the molecule $z$ and the property $c$, where a low energy represents a good consistency. As the generation process in Eq. (6) proceeds, the gradient of the energy function encourages generated molecules to have a low energy, and consequently a good consistency. Thus, we can expect the generated molecule $z$ to align well with the property $c$. In the rest of the paper, we specify the choice of energy functions, and show that these energies improve controllable molecule generation targeted to quantum properties, molecular structures, and even a combination of them.
Remark. The term “energy” in this paper refers to a general notion in statistical machine learning: a scalar function that captures dependencies between input variables (LeCun et al., 2006). Thus, the “energy” in this paper can be set to an MSE loss when we want to capture how well the molecule aligns with the property (as done in Section 5). Also, the “energy” in this paper does not exclude potential energy or free energy in chemistry, and they might be applicable when we want to generate molecules with small potential energy or free energy.
5 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
Let $c \in \mathbb{R}$ be a certain quantum property. To generate molecules with the desired property, we set the energy function as the squared error between the predicted property and the desired property, $E(z_t, c, t) = s|g(z_t, t) - c|^2$, where $g(z_t, t)$ is a time-dependent property prediction model, and $s$ is the scaling factor controlling the strength of the guidance. Specifically, $g(z_t, t)$ can be parameterized by equivariant models such as EGNN (Satorras et al., 2021b), SE3-Transformer (Fuchs et al., 2020) and DimeNet (Klicpera et al., 2020) to ensure the invariance of $E(z_t, c, t)$, as long as they perform well in the task of property prediction. In this paper we consider EGNN. We provide details on the parameterization and the training objective in Appendix E. We can also generate molecules targeted to multiple quantum properties by combining energy functions linearly (see details in Appendix F.1).
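As an illustration of this energy and the gradient it contributes to Eq. (6), the following sketch evaluates $E = s|g(z_t, t) - c|^2$ with autograd and projects the coordinate gradient onto the zero CoM subspace (`g_net` is an assumed interface for the time-dependent property prediction model, not a name from the released code):

```python
import torch

def property_energy_grad(g_net, x_t, h_t, t, c, s):
    """E = s * |g(z_t, t) - c|^2 and its gradients w.r.t. x_t and h_t."""
    x_t = x_t.detach().requires_grad_(True)
    h_t = h_t.detach().requires_grad_(True)
    E = s * (g_net(x_t, h_t, t) - c) ** 2
    grad_x, grad_h = torch.autograd.grad(E, (x_t, h_t))
    grad_x = grad_x - grad_x.mean(dim=0, keepdim=True)  # keep the gradient in X
    return E.detach(), grad_x, grad_h
```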
5.1 SETUP
We evaluate on QM9 (Ramakrishnan et al., 2014), which contains quantum properties and coordinates of ∼130k molecules with up to nine heavy atoms from (C, N, O, F). Following EDM, we split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples respectively. The training set is further divided equally into two non-overlapping halves $D_a$ and $D_b$. The noise prediction network and the time-dependent property prediction model of the energy function are trained on $D_b$ separately. By default, EEGSDE uses a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6), since we find it generates more accurate molecules than an unconditional one (see Appendix G.1 for an ablation study). See more details in Appendix F.3.
Evaluation metric: Following EDM, we use the mean absolute error (MAE) to evaluate how well generated molecules align with the desired property. Specifically, we train another property prediction model $\phi_p$ (Satorras et al., 2021b) on $D_a$, and the MAE is calculated as $\frac{1}{K}\sum_{i=1}^K |\phi_p(z^i) - c^i|$, where $z^i$ is a generated molecule and $c^i$ is its desired property. We generate K=10,000 samples for evaluation with $\phi_p$. For fairness, the property prediction model $\phi_p$ is different from $g(z_t, t)$ used in the energy function, and they are trained on the two non-overlapping training subsets $D_a$ and $D_b$ respectively. This ensures no information leak occurs when evaluating our EEGSDE. To further verify the effectiveness of EEGSDE, we also calculate the MAE without relying on a neural network. Specifically, we use the Gaussian software (which calculates properties according to theories of quantum chemistry without using neural networks) to calculate the properties of 100 generated molecules. For completeness, we also report the novelty, the atom stability and the molecule stability following Hoogeboom et al. (2022) in Appendix G.2, although they are not our main focus.
Baseline: The most direct baseline is conditional EDM, which only adopts a conditional noise prediction network. We also compare two additional baselines “U-bound” and “#Atoms” from Hoogeboom et al. (2022) (see Appendix F.2 for details).
5.2 RESULTS
Following Hoogeboom et al. (2022), we consider six quantum properties in QM9: polarizability α, highest occupied molecular orbital energy εHOMO, lowest unoccupied molecular orbital energy εLUMO, HOMO-LUMO gap ∆ε, dipole moment µ and heat capacity Cv . Firstly, we generate
molecules targeted to one of these six properties. As shown in Table 1, with the energy guidance, our EEGSDE has a significantly better MAE than the conditional EDM on all properties. Remarkably, with a proper scaling factor s, the MAE of EEGSDE is reduced by more than 25% compared to conditional EDM on properties ∆ε, εLUMO, and more than 30% on µ. Moreover, as shown in Table 2, our EEGSDE still achieves a better MAE under evaluation with the Gaussian software, which further verifies the effectiveness of our EEGSDE. We further generate molecules targeted to multiple quantum properties by combining energy functions linearly. As shown in Table 3, our EEGSDE still has a significantly better MAE than the conditional EDM.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the desired properties than molecules generated by the conditional EDM baseline (for both the single-property and multiple-property cases). As a consequence, EEGSDE is able to explore the chemical space in a guided way to generate promising molecules for downstream applications such as virtual screening, which may benefit drug and material discovery.
6 GENERATING MOLECULES WITH TARGET STRUCTURES
Following Gebauer et al. (2022), we use the molecular fingerprint to encode the structure information of a molecule. The molecular fingerprint $c = (c_1, \ldots, c_L)$ is a series of bits that capture the presence or absence of substructures in the molecule. Specifically, a substructure is mapped to a specific position $l$ in the bitmap, and the corresponding bit $c_l$ will be 1 if the substructure exists in the molecule and 0 otherwise. To generate molecules with a specific structure (encoded by the fingerprint $c$), we set the energy function as the squared error $E(z_t, c, t) = s\|m(z_t, t) - c\|^2$ between a time-dependent multi-label classifier $m(z_t, t)$ and $c$. Here $s$ is the scaling factor, and $m(z_t, t)$ is trained with the binary cross entropy loss to predict the fingerprint, as detailed in Appendix E.2. Note that the choice of the energy function is flexible and can differ from the training loss of $m(z_t, t)$. In initial experiments, we also tried the binary cross entropy loss for the energy function, but we found that it makes the generation process unstable. The multi-label classifier $m(z_t, t)$ is parameterized in a similar way to the property prediction model $g(z_t, t)$ in Section 5, and we present details in Appendix E.1.
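A minimal sketch of this structure energy, assuming a classifier interface `m_net(x_t, h_t, t)` that returns the $L$ predicted bit probabilities (the name is ours, for illustration only):

```python
import torch

def structure_energy(m_net, x_t, h_t, t, c, s):
    """E(z_t, c, t) = s * ||m(z_t, t) - c||^2 for a target fingerprint c."""
    probs = m_net(x_t, h_t, t)           # (L,) predicted bit probabilities
    return s * ((probs - c) ** 2).sum()
```

Its gradient can be obtained with autograd exactly as in the quantum-property case of Section 5.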
6.1 SETUP
We evaluate on QM9 and GEOM-Drug. We train our method (including the noise prediction network and the multi-label classifier) on the whole training set. By default, we use a conditional noise prediction network ϵθ(z, c, t) in Eq. (6) for a better performance. See more details in Appendix F.4.
Evaluation metric: To measure how the structure of a generated molecule aligns with the target one, we use the Tanimoto similarity (Gebauer et al., 2022), which captures similarity between structures by comparing their fingerprints.
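Concretely, with fingerprints represented as bit sequences, the Tanimoto similarity $|S_g \cap S_t|/|S_g \cup S_t|$ (formally defined in Appendix F.4) can be computed as follows (a minimal sketch):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity of two fingerprints given as 0/1 bit sequences."""
    a = {i for i, bit in enumerate(fp_a) if bit}
    b = {i for i, bit in enumerate(fp_b) if bit}
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```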
Baseline: The most direct baseline is conditional EDM (Hoogeboom et al., 2022), which only adopts a conditional noise prediction network without the guidance of an energy model. We also consider cG-SchNet (Gebauer et al., 2022), which generates molecules in an autoregressive manner.
6.2 RESULTS
As shown in Table 4, EEGSDE significantly improves the similarity between target structures and generated structures compared to conditional EDM and cG-SchNet on QM9. Also note that, within a proper range, a larger scaling factor results in a better similarity, and EEGSDE with s=1 improves the similarity by more than 10% compared to conditional EDM. In Figure 2, we plot generated molecules of conditional EDM and EEGSDE (s=1) targeted to specific structures, where our EEGSDE aligns better with them. We further visualize the effect of the scaling factor in Appendix G.3, where the generated structures align better as the scaling factor grows. These results demonstrate that our EEGSDE captures the structure information in molecules well.
We also perform experiments on the more challenging GEOM-Drug (Axelrod & Gomez-Bombarelli, 2022) dataset, and we train the conditional EDM baseline following the default setting of Hoogeboom et al. (2022). As shown in Table 4, we find the conditional EDM baseline has a similarity of 0.165, which is much lower than the value on QM9. We hypothesize this is because molecules in GEOM-Drug have many more atoms than those in QM9, with more complex structures, and the default setting in Hoogeboom et al. (2022) is suboptimal. For example, the conditional EDM on GEOM-Drug has a smaller number of parameters than the conditional EDM on QM9 (15M vs. 26M), which is insufficient to capture the structure information. Nevertheless, our EEGSDE still improves the similarity by ∼17%. We provide generated molecules on GEOM-Drug in Appendix G.5. Finally, we demonstrate that our EEGSDE is a flexible framework to generate molecules targeted to multiple properties, which is often the practical case. We additionally target the quantum property α (polarizability) on QM9 by combining the energy function for structures in this section and the energy function for quantum properties in Section 5. Here we choose α = 100 Bohr³, which is a relatively large value, and we expect it to encourage less isometrically shaped structures. As shown in Figure 3, the generated molecule aligns better with the target structure as the scaling factor s1 grows, and meanwhile a ring substructure in the generated molecule vanishes as the scaling factor for polarizability s2 grows, leading to a less isometrically shaped structure, as expected.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the target structure than molecules generated by the conditional EDM baseline. Besides, EEGSDE can generate molecules targeted to both specific structures and desired quantum properties by combining energy functions linearly. As a result, EEGSDE may benefit practical cases in molecular design when multiple properties should be considered at the same time.
7 CONCLUSION
This work presents equivariant energy-guided SDE (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. EEGSDE significantly improves the conditional EDM baseline in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
ACKNOWLEDGMENTS
We thank Han Guo and Zhen Jia for their help with their expertise in chemistry. This work was supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098; a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University; the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (22XNKJ13). J.Z was also supported by the XPlorer Prize. C. Li was also sponsored by Beijing Nova Program.
ETHICS STATEMENT
Inverse molecular design is critical in fields like material science and drug discovery. Our EEGSDE is a flexible framework for inverse molecular design and thus might benefit these fields. Currently the negative consequences are not obvious.
REPRODUCIBILITY STATEMENT
Our code is included in the supplementary material. The implementation of our experiment is described in Section 5 and Section 6. Further details such as the hyperparameters of training and model backbones are provided in Appendix F. We provide complete proofs and derivations of all theoretical results in Appendix A.
A DERIVATIONS
A.1 ZERO COM SUBSPACE
The zero CoM subspace $X = \{x \in \mathbb{R}^{Mn} : \sum_{i=1}^M x^i = 0\}$ is an $(M-1)n$-dimensional subspace of $\mathbb{R}^{Mn}$. Therefore, there exists an isometric isomorphism $\phi$ from $\mathbb{R}^{(M-1)n}$ to $X$, i.e., $\phi$ is a linear bijection from $\mathbb{R}^{(M-1)n}$ to $X$, and $\|\phi(\hat x)\|_2 = \|\hat x\|_2$ for all $\hat x \in \mathbb{R}^{(M-1)n}$. We use $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ to represent the matrix corresponding to $\phi$, so we have $\phi(\hat x) = A_\phi \hat x$. An important property of the isometric isomorphism is that $A_\phi A_\phi^\top x = x - \bar x$ for all $x \in \mathbb{R}^{Mn}$. The proof is as follows.
Proposition 1. Suppose $\phi$ is an isometric isomorphism from $\mathbb{R}^{(M-1)n}$ to $X$, and let $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ be the matrix corresponding to $\phi$. Then we have $A_\phi A_\phi^\top x = x - \bar x$ for all $x \in \mathbb{R}^{Mn}$, where $\bar x = \frac{1}{M}\sum_{i=1}^M x^i$.
Proof. We consider a new subspace of $\mathbb{R}^{Mn}$, $X^\perp = \{x \in \mathbb{R}^{Mn} : x \perp X\}$, i.e., the orthogonal complement of $X$. We can verify that $X^\perp = \{x \in \mathbb{R}^{Mn} : x^1 = x^2 = \cdots = x^M\}$, and $X^\perp$ is $n$-dimensional. Thus, there exists an isometric isomorphism $\psi$ from $\mathbb{R}^n$ to $X^\perp$. Let $A_\psi \in \mathbb{R}^{Mn \times n}$ represent the matrix corresponding to $\psi$. Then we define $\lambda(\hat x, \hat y) = \phi(\hat x) + \psi(\hat y)$, where $\hat x \in \mathbb{R}^{(M-1)n}$ and $\hat y \in \mathbb{R}^n$. The image of $\lambda$ is $\{x + y : x \in X, y \in X^\perp\} = \mathbb{R}^{Mn}$. Therefore, $\lambda$ is a linear bijection from $\mathbb{R}^{Mn}$ to $\mathbb{R}^{Mn}$, and the matrix corresponding to $\lambda$ is $A_\lambda = [A_\phi, A_\psi]$. Furthermore, $\lambda$ is an isometric isomorphism, since $\|\lambda(\hat x, \hat y)\|^2 = \|\phi(\hat x)\|^2 + \|\psi(\hat y)\|^2 + 2\langle\phi(\hat x), \psi(\hat y)\rangle = \|\hat x\|^2 + \|\hat y\|^2 = \|(\hat x^\top, \hat y^\top)^\top\|^2$. This means $A_\lambda$ is an orthogonal matrix. Therefore, $A_\lambda A_\lambda^\top = A_\phi A_\phi^\top + A_\psi A_\psi^\top = I$, and $A_\phi A_\phi^\top x + A_\psi A_\psi^\top x = x$. Since $A_\phi A_\phi^\top x \in X$ and $A_\psi A_\psi^\top x \in X^\perp$, we can conclude that $A_\phi A_\phi^\top x$ is the orthogonal projection of $x$ onto $X$, which is exactly $x - \bar x$.
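The identity $A_\phi A_\phi^\top x = x - \bar x$ can also be checked numerically; in the sketch below we build $A_\phi$ from an eigendecomposition of the CoM-subtraction projector (our construction for illustration; any orthonormal basis of $X$ would do):

```python
import torch

M, n = 5, 3
# Orthogonal projector onto X acting on flattened (atom-major) coordinates:
# P = (I_M - (1/M) 1 1^T) kron I_n.
P = torch.kron(torch.eye(M) - torch.ones(M, M) / M, torch.eye(n))
# Take A_phi's columns as an orthonormal basis of X, i.e. the eigenvectors
# of P with eigenvalue 1 (there are (M - 1) * n of them).
eigvals, eigvecs = torch.linalg.eigh(P)
A_phi = eigvecs[:, eigvals > 0.5]              # shape (Mn, (M-1)n)
x = torch.randn(M, n)
lhs = (A_phi @ A_phi.T @ x.reshape(-1)).reshape(M, n)
rhs = x - x.mean(dim=0, keepdim=True)          # x - x_bar
print(torch.allclose(lhs, rhs, atol=1e-5))     # prints True
```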
Since $\mathbb{R}^{(M-1)n}$ and $X$ are two intrinsically equivalent spaces, an equivalence of distributions in these two spaces can also be established, as shown in the following propositions.
Proposition 2. Suppose $x$ is a random vector distributed in $X$, and $\hat x = \phi^{-1}(x)$ is its equivalent representation in $\mathbb{R}^{(M-1)n}$. If $x \sim q(x)$, then $\hat x \sim \hat q(\hat x)$, where $\hat q(\hat x) = q(\phi(\hat x))$.
Proposition 3. Suppose $x$ and $y$ are two random vectors distributed in $X$, and $\hat x = \phi^{-1}(x)$, $\hat y = \phi^{-1}(y)$ are their equivalent representations in $\mathbb{R}^{(M-1)n}$. If $x|y \sim q(x|y)$, then $\hat x|\hat y \sim \hat q(\hat x|\hat y)$, where $\hat q(\hat x|\hat y) = q(\phi(\hat x)|\phi(\hat y))$.
A.2 SDE IN THE PRODUCT SPACE
Definition 1 (Gaussian distributions in the zero CoM subspace). Suppose $\mu \in X$. Let $\mathcal{N}_X(x|\mu, \sigma^2) := (2\pi\sigma^2)^{-(M-1)n/2} \exp\big(-\frac{1}{2\sigma^2}\|x - \mu\|_2^2\big)$, which is the isotropic Gaussian distribution with mean $\mu$ and variance $\sigma^2$ in the zero CoM subspace $X$.
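In practice, one can sample from $\mathcal{N}_X(\mu, \sigma^2)$ by drawing an isotropic Gaussian in the full space and projecting it onto $X$, since the projection of an isotropic Gaussian onto the subspace is the isotropic Gaussian within it (a minimal sketch):

```python
import torch

def sample_NX(mu, sigma):
    """Draw a sample from N_X(mu, sigma^2); mu is (M, n) with zero CoM."""
    eps = torch.randn_like(mu)
    eps = eps - eps.mean(dim=0, keepdim=True)  # project the noise onto X
    return mu + sigma * eps
```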
Proposition 4 (Transition kernels of an SDE in the zero CoM subspace). Suppose $dx = f(t)x\,dt + g(t)\,dw^x$, $x_0 \sim q(x_0)$ is an SDE in the zero CoM subspace. Then the transition kernel from $x_s$ to $x_t$ ($0 \le s < t \le T$) can be expressed as $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,d\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,d\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$.
Proof. Firstly, we map the process $\{x_t\}_{t=0}^T$ in the zero CoM subspace to the equivalent space $\mathbb{R}^{(M-1)n}$ through the isometric isomorphism $\phi$ introduced in Appendix A.1. This produces a new process $\{\hat x_t\}_{t=0}^T$, where $\hat x_t = \phi^{-1}(x_t)$. By applying $\phi^{-1}$ to the SDE in the zero CoM subspace, we know $\hat x_t = \phi^{-1}(x_t)$ satisfies the following SDE in $\mathbb{R}^{(M-1)n}$:
$$d\hat x = f(t)\hat x\,dt + g(t)\,d\hat w, \quad \hat x_0 \sim \hat q(\hat x_0), \quad (7)$$
where $\hat w$ is the standard Wiener process in $\mathbb{R}^{(M-1)n}$ and $\hat q(\hat x_0) = q(\phi(\hat x_0))$. According to Song et al. (2020), the transition kernel of Eq. (7) from $\hat x_s$ to $\hat x_t$ ($s < t$) can be expressed as $\hat q(\hat x_t|\hat x_s) = \mathcal{N}(\hat x_t|\sqrt{\alpha_{t|s}}\,\hat x_s, \beta_{t|s} I)$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,d\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,d\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$. According to Proposition 3, we know the transition kernel $q(x_t|x_s)$ from $x_s$ to $x_t$ satisfies
$$q(x_t|x_s) = \hat q(\phi^{-1}(x_t)|\phi^{-1}(x_s)) = (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\|\phi^{-1}(x_t) - \sqrt{\alpha_{t|s}}\,\phi^{-1}(x_s)\|_2^2\Big)$$
$$= (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\|\phi^{-1}(x_t - \sqrt{\alpha_{t|s}}\,x_s)\|_2^2\Big) \quad \text{// linearity of } \phi^{-1}$$
$$= (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\|x_t - \sqrt{\alpha_{t|s}}\,x_s\|_2^2\Big) \quad \text{// norm preservation of } \phi^{-1}$$
$$= \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s}).$$
Proposition 5 (Transition kernels of the SDE in the product space). Suppose $dz = f(t)z\,dt + g(t)\,d(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then the transition kernel from $z_s$ to $z_t$ ($0 \le s < t \le T$) can be expressed as $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$, where $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\alpha_{t|s}$ and $\beta_{t|s}$ are defined as in Proposition 4.
Proof. Since $w^x$ and $w^h$ are independent of each other, the transition kernel from $z_s$ to $z_t$ can be factorized as $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$, where $q(x_t|x_s)$ is the transition kernel of $dx = f(t)x\,dt + g(t)\,dw^x$ and $q(h_t|h_s)$ is the transition kernel of $dh = f(t)h\,dt + g(t)\,dw^h$. According to Proposition 4 and Song et al. (2020), we have $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$.
Remark 1. The marginal distribution of $z_t$ is
$$q_t(z_t) = \int_X \int_{\mathbb{R}^{Md}} q(z_0)\, q(x_t|x_0)\, q(h_t|h_0)\, dh_0\, \lambda(dx_0),$$
where $\lambda$ is the Lebesgue measure in the zero CoM subspace $X$. While $q_t(z_t)$ is defined in $X \times \mathbb{R}^{Md}$, it has a natural differentiable extension to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$, since $q(x_t|x_0) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|0}}\,x_0, \beta_{t|0})$ has a differentiable extension to $\mathbb{R}^{Mn}$ according to Definition 1. Thus, we can take the gradient of $q_t(z_t)$ w.r.t. $x_t$ in the whole space $\mathbb{R}^{Mn}$.
Proposition 6 (Time reversal of the SDE in the product space). Suppose $dz = f(t)z\,dt + g(t)\,d(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then its time reversal satisfies the following reverse-time SDE, which can be represented by both the score function form and the noise prediction form:
$$dz = \Big[f(t)z - g(t)^2 \underbrace{(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z))}_{\text{score function form}}\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim q_T(z_T),$$
where $\tilde w^x$ and $\tilde w^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = (z_t - \sqrt{\alpha_{t|0}}\,z_0)/\sqrt{\beta_{t|0}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected to $z_0$.
Furthermore, we have $\nabla_x \log q_t(x) - \overline{\nabla_x \log q_t(x)} = -\frac{1}{\sqrt{\beta_{t|0}}}\,\mathbb{E}_{q(x_0|x_t)}\epsilon_t$, where $\epsilon_t = (x_t - \sqrt{\alpha_{t|0}}\,x_0)/\sqrt{\beta_{t|0}}$ is the standard Gaussian noise in the zero CoM subspace injected to $x_0$.
Proof. Let $\hat z_t = (\hat x_t, h_t)$, where $\hat x_t = \phi^{-1}(x_t)$, as introduced in the proof of Proposition 4. Then $\{\hat z_t\}_{t=0}^T$ is a process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ determined by the following SDE:
$$d\hat z = f(t)\hat z\,dt + g(t)\,d\hat w, \quad \hat z_0 \sim \hat q(\hat z_0), \quad (8)$$
where $\hat w$ is the standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ and $\hat q(\hat z_0) = q(\phi(\hat x_0), h_0)$. According to Song et al. (2020), Eq. (8) has a time reversal:
$$d\hat z = [f(t)\hat z - g(t)^2 \nabla_{\hat z} \log \hat q_t(\hat z)]\,dt + g(t)\,d\tilde w, \quad \hat z_T \sim \hat q_T(\hat z_T), \quad (9)$$
where $\hat q_t(\hat z)$ is the marginal distribution of $\hat z_t$, which satisfies $\hat q_t(\hat z_t) = q_t(\phi(\hat x_t), h_t)$ according to Proposition 2, and $\tilde w$ is the reverse-time standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$. Then we apply the linear transformation $T\hat z = (\phi(\hat x), h)$ to Eq. (9), which maps $\hat z_t$ back to $z_t$. This yields
$$dz = [f(t)z - g(t)^2\, T(\nabla_{\hat z} \log \hat q_t(\hat z))]\,dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim q_T(z_T). \quad (10)$$
Here $\tilde w^x$ and $\tilde w^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $T(\nabla_{\hat z} \log \hat q_t(\hat z))$ can be expressed as $(\phi(\nabla_{\hat x} \log \hat q_t(\hat z)), \nabla_h \log \hat q_t(\hat z))$. Since $\nabla_{\hat x} \log \hat q_t(\hat z) = A_\phi^\top \nabla_x \log q_t(x, h) = A_\phi^\top \nabla_x \log q_t(z)$, we have $\phi(\nabla_{\hat x} \log \hat q_t(\hat z)) = A_\phi A_\phi^\top \nabla_x \log q_t(z)$, where $A_\phi$ represents the matrix corresponding to $\phi$. According to Proposition 1, we have $\phi(\nabla_{\hat x} \log \hat q_t(\hat z)) = \nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)}$. Besides, $\nabla_h \log \hat q_t(\hat z) = \nabla_h \log q_t(z)$. Thus, Eq. (10) can be written as
$$dz = [f(t)z - g(t)^2(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z))]\,dt + g(t)\,d(\tilde w^x, \tilde w^h),$$
which is the score function form of the reverse-time SDE.
We can also write the score function in Eq. (9) as $\nabla_{\hat z} \log \hat q_t(\hat z) = \mathbb{E}_{\hat q(\hat z_0|\hat z_t)} \nabla_{\hat z} \log \hat q(\hat z_t|\hat z_0) = -\frac{1}{\sqrt{\beta_{t|0}}}\,\mathbb{E}_{\hat q(\hat z_0|\hat z_t)}\hat\epsilon_t$, where $\hat\epsilon_t = (\hat z_t - \sqrt{\alpha_{t|0}}\,\hat z_0)/\sqrt{\beta_{t|0}}$ is the standard Gaussian noise injected to $\hat z_0$. With this expression, we have $T(\nabla_{\hat z} \log \hat q_t(\hat z)) = -\frac{1}{\sqrt{\beta_{t|0}}}\,\mathbb{E}_{\hat q(\hat z_0|\hat z_t)} T(\hat\epsilon_t) = -\frac{1}{\sqrt{\beta_{t|0}}}\,\mathbb{E}_{q(z_0|z_t)}\epsilon_t$, where $\epsilon_t = (z_t - \sqrt{\alpha_{t|0}}\,z_0)/\sqrt{\beta_{t|0}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected to $z_0$. Thus, Eq. (10) can also be written as
$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\mathbb{E}_{q(z_0|z_t)}\epsilon_t\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h),$$
which is the noise prediction form of the reverse-time SDE.
A.3 EQUIVARIANCE
Theorem 1. Let $(\epsilon_\theta^x(z_t,t), \epsilon_\theta^h(z_t,t)) = \epsilon_\theta(z_t,t)$, where $\epsilon_\theta^x(z_t,t)$ and $\epsilon_\theta^h(z_t,t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n\times n}$, $\epsilon_\theta(z_t,t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t,h_t,t), \epsilon_\theta^h(x_t,h_t,t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n\times n}$ is an orthogonal transformation. Let $z_t^\theta = (x_t^\theta, h_t^\theta)$ ($0 \le t \le T$) be the process determined by Eq. (5). Let $y_t^\theta = Rx_t^\theta$ and $u_t^\theta = (y_t^\theta, h_t^\theta)$. We use $p_t^u(u_t)$ and $p_t^z(z_t)$ to denote the distributions of $u_t^\theta$ and $z_t^\theta$ respectively, and they satisfy $p_t^u(y_t, h_t) = p_t^z(R^{-1}y_t, h_t)$.
By applying the transformation $Tz = (Rx, h)$ to Eq. (5), we know the new process $\{u_t^\theta\}_{t=0}^T$ satisfies the following SDE:
$$du = T\,dz = \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(R\epsilon_\theta^x(z,t), \epsilon_\theta^h(z,t))\Big]dt + g(t)\,d(R\tilde w^x, \tilde w^h)$$
$$= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(\epsilon_\theta^x(Rx, h, t), \epsilon_\theta^h(Rx, h, t))\Big]dt + g(t)\,d(R\tilde w^x, \tilde w^h) \quad \text{// by equivariance}$$
$$= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(u, t)\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad \text{// the Wiener process is isotropic}$$
and its initial distribution is $p_T^u(u_T) = p_T^u(y_T, h_T) = p_T^z(R^{-1}y_T, h_T) = p_T(R^{-1}y_T, h_T) = p_T(y_T, h_T) = p_T(u_T)$. Thus, the SDE of $\{u_t^\theta\}_{t=0}^T$ is exactly the same as that of $\{z_t^\theta\}_{t=0}^T$. This indicates that the distribution of $u_0^\theta$ is the same as the distribution of $z_0^\theta$, i.e., $p_0^u(u_0) = p_0^z(u_0) = p_\theta(u_0)$. Also note that $p_0^u(u_0) = p_0^z(R^{-1}y_0, h_0) = p_\theta(R^{-1}y_0, h_0)$. Thus, $p_\theta(u_0) = p_\theta(R^{-1}y_0, h_0)$, and consequently $p_\theta(Rx_0, h_0) = p_\theta(x_0, h_0)$. This means $p_\theta(z_0)$ is invariant to any orthogonal transformation, which includes rotational transformations as special cases.
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n\times n}$ is an orthogonal transformation. Taking the gradient of both sides of $E(Rx, h, c, t) = E(x, h, c, t)$ w.r.t. $x$, we get
$$R^\top \nabla_y E(y, h, c, t)\big|_{y=Rx} = \nabla_x E(x, h, c, t).$$
Multiplying both sides by $R$, we get
$$\nabla_y E(y, h, c, t)\big|_{y=Rx} = R\,\nabla_x E(x, h, c, t).$$
Let $\phi(z, c, t) = (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))$. Then we have
$$\phi(Rx, h, c, t) = (\nabla_y E(y, h, c, t) - \overline{\nabla_y E(y, h, c, t)},\; \nabla_h E(y, h, c, t))\big|_{y=Rx}$$
$$= (R\nabla_x E(x, h, c, t) - R\,\overline{\nabla_x E(x, h, c, t)},\; \nabla_h E(Rx, h, c, t))$$
$$= (R(\nabla_x E(x, h, c, t) - \overline{\nabla_x E(x, h, c, t)}),\; \nabla_h E(x, h, c, t)).$$
Thus, $\phi(z, c, t)$ is equivariant to $R$. Let $\hat\epsilon_\theta(z, c, t) = \epsilon_\theta(z, t) + \sqrt{\beta_{t|0}}\,\phi(z, c, t)$, which is a linear combination of two equivariant functions and is therefore also equivariant to $R$.
Then, Eq. (6) can be written as
$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\hat\epsilon_\theta(z, c, t)\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim p_T(z_T).$$
According to Theorem 1, we know its marginal distribution at time t = 0, i.e., pθ(z0|c), is invariant to any rotational transformation.
B SAMPLING
In Algorithm 1, we present the Euler-Maruyama method to sample from EEGSDE in Eq. (6).
Algorithm 1 Sample from EEGSDE using the Euler-Maruyama method
Require: number of steps $N$
  $\Delta t \leftarrow T/N$
  $z \leftarrow (x - \bar x, h)$, where $x \sim \mathcal{N}(0, 1)$, $h \sim \mathcal{N}(0, 1)$  {Sample from the prior $p_T(z_T)$}
  for $i = N$ to $1$ do
    $t \leftarrow i\Delta t$
    $g_x \leftarrow \nabla_x E(z, c, t)$, $g_h \leftarrow \nabla_h E(z, c, t)$  {Calculate the gradient of the energy function}
    $g \leftarrow (g_x - \bar g_x, g_h)$  {Subtract the CoM of the gradient}
    $F \leftarrow f(t)z + g(t)^2\big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t) + g\big)$
    $\epsilon \leftarrow (\epsilon^x - \bar\epsilon^x, \epsilon^h)$, where $\epsilon^x \sim \mathcal{N}(0, 1)$, $\epsilon^h \sim \mathcal{N}(0, 1)$
    $z \leftarrow z - F\Delta t + g(t)\sqrt{\Delta t}\,\epsilon$  {Update $z$ according to Eq. (6)}
  end for
  return $z$
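For readers who prefer code, below is a Python/PyTorch rendering of Algorithm 1 (our sketch; `eps_net`, `energy`, `f`, `g` and `beta_t0` are assumed interfaces, not names from the released implementation):

```python
import torch

def eegsde_sample(eps_net, energy, c, f, g, beta_t0, M, n=3, d=6, T=1.0, N=1000):
    """Euler-Maruyama sampler for EEGSDE (Algorithm 1); a sketch."""
    dt = T / N
    x = torch.randn(M, n)
    x = x - x.mean(0, keepdim=True)               # x_T in the zero CoM subspace
    h = torch.randn(M, d)                         # h_T ~ N(0, 1)
    for i in range(N, 0, -1):
        t = i * dt
        x = x.detach().requires_grad_(True)
        h = h.detach().requires_grad_(True)
        E = energy(x, h, c, t)                    # scalar energy E(z, c, t)
        gx, gh = torch.autograd.grad(E, (x, h))
        gx = gx - gx.mean(0, keepdim=True)        # subtract the CoM of the gradient
        with torch.no_grad():
            ex, eh = eps_net(x, h, t)             # predicted noise (eps^x, eps^h)
            ex = ex - ex.mean(0, keepdim=True)
            Fx = f(t) * x + g(t) ** 2 * (ex / beta_t0(t) ** 0.5 + gx)
            Fh = f(t) * h + g(t) ** 2 * (eh / beta_t0(t) ** 0.5 + gh)
            wx = torch.randn_like(x)
            wx = wx - wx.mean(0, keepdim=True)    # Wiener increment restricted to X
            wh = torch.randn_like(h)
            x = x - Fx * dt + g(t) * dt ** 0.5 * wx
            h = h - Fh * dt + g(t) * dt ** 0.5 * wh
    return x.detach(), h.detach()
```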
C CONDITIONAL NOISE PREDICTION NETWORKS
In Eq. (6), we can alternatively use a conditional noise prediction network ϵθ(z, c, t) for a stronger guidance, as follows
$$dz = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, c, t) + (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))\Big)\Big]dt + g(t)\,d(\tilde w^x, \tilde w^h), \quad z_T \sim p_T(z_T).$$
The conditional noise prediction network is trained similarly to the unconditional one, using the following MSE loss
$$\min_\theta\; \mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)}\, w(t)\, \|\epsilon_\theta(z_t, c, t) - \epsilon_t\|^2.$$
D PARAMETERIZATION OF NOISE PREDICTION NETWORKS
We parameterize the noise prediction network following Hoogeboom et al. (2022), and we provide the specific parameterization for completeness. For the unconditional model $\epsilon_\theta(z, t)$, we first concatenate each atom feature $h^i$ and $t$, which gives $h^{i\prime} = (h^i, t)$. Then we input $x$ and $h' = (h^{1\prime}, \ldots, h^{M\prime})$ to the EGNN as follows:
$$(a^x, a^{h'}) = \text{EGNN}(x, h') - (x, 0).$$
Finally, we subtract the CoM of $a^x$, which gives the parameterization of $\epsilon_\theta(z, t)$:
$$\epsilon_\theta(z, t) = (a^x - \overline{a^x},\, a^h),$$
where $a^h$ comes from discarding the last component of $a^{h'}$ that corresponds to the time.
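A sketch of this parameterization, assuming an `egnn(x, h)` callable that returns the updated coordinates and features (our naming; the decoder details differ in the actual implementation):

```python
import torch

def eps_theta(egnn, x, h, t):
    """Unconditional noise prediction built on an EGNN; a sketch."""
    M = h.shape[0]
    h_prime = torch.cat([h, torch.full((M, 1), float(t))], dim=-1)  # h^i' = (h^i, t)
    a_x, a_hprime = egnn(x, h_prime)
    a_x = a_x - x                                # EGNN(x, h') - (x, 0)
    a_x = a_x - a_x.mean(dim=0, keepdim=True)    # subtract the CoM of a^x
    a_h = a_hprime[:, :-1]                       # discard the time component
    return a_x, a_h
```

For the conditional model, the condition $c$ would simply be appended alongside $t$ in `h_prime`, as described below.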
For the conditional model ϵθ(z, c, t), we additionally concatenate c to the atom feature hi, i.e., hi′ = (hi, t, c), and other parts in the parameterization remain the same.
E DETAILS OF ENERGY FUNCTIONS
E.1 PARAMETERIZATION OF TIME-DEPENDENT MODELS
The time-dependent property prediction model g(zt, t) is parameterized using the second component in the output of EGNN (see Section 3) followed by a decoder (Dec):
$$g(z_t, t) = \text{Dec}(\text{EGNN}^h(x_t, h_t')), \quad h_t' = \text{concatenate}(h_t, t), \quad (11)$$
where the concatenation is performed on each atom feature, and the decoder is a small neural network based on Satorras et al. (2021b). This parameterization ensures that the energy function
E(zt, c, t) is invariant to orthogonal transformations, and thus the distribution of generated samples is also invariant according to Theorem 2.
Similarly, the time-dependent multi-label classifier is parameterized by EGNN as
$$m(z_t, t) = \sigma(\text{Dec}(\text{EGNN}^h(x_t, h_t'))), \quad h_t' = \text{concatenate}(h_t, t).$$
The multi-label classifier has the same backbone as the property prediction model in Eq. (11), except that the decoder outputs a vector of dimension $L$, and the sigmoid function $\sigma$ is adopted for multi-label classification. Similarly to Eq. (11), the EGNN in the multi-label classifier guarantees the invariance of the distribution of generated samples according to Theorem 2.
E.2 TRAINING OBJECTIVES OF ENERGY FUNCTIONS
Time-dependent property prediction model. Since the quantum property is a scalar, we train the time-dependent property prediction model g(zt, t) using the ℓ1 loss
$$\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)}\, |g(z_t, t) - c|,$$
where $t$ is uniformly sampled from $[0, T]$.
Time-dependent multi-label classifier. Since the fingerprint is a bit map, predicting it can be viewed as a multi-label classification task. Thus, we use a time-dependent multi-label classifier $m(z_t, t)$ and train it using the binary cross entropy loss
$$\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)}\Big[-\sum_{l=1}^L \big(c_l \log m_l(z_t, t) + (1 - c_l)\log(1 - m_l(z_t, t))\big)\Big],$$
where t is uniformly sampled from [0, T ], and ml(zt, t) is the l-th component of m(zt, t).
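A minimal sketch of this training step for one noisy sample, assuming the classifier returns probabilities in $(0, 1)$ (names are ours):

```python
import torch
import torch.nn.functional as F

def classifier_step(m_net, x_t, h_t, t, c):
    """Binary cross entropy loss for one noisy sample (c is the 0/1 fingerprint)."""
    probs = m_net(x_t, h_t, t)                        # (L,) bit probabilities
    return F.binary_cross_entropy(probs, c.float())   # mean over the L bits
```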
F EXPERIMENTAL DETAILS
F.1 HOW TO GENERATE MOLECULES TARGETED TO MULTIPLE QUANTUM PROPERTIES
When we want to generate molecules with $K$ quantum properties $c = (c_1, c_2, \ldots, c_K)$, we combine the energy functions for single properties linearly as $E(z_t, c, t) = \sum_{k=1}^K E_k(z_t, c_k, t)$, where $E_k(z_t, c_k, t) = s_k|g_k(z_t, t) - c_k|^2$ is the energy function for the $k$-th property, $s_k$ is the scaling factor and $g_k(z_t, t)$ is the time-dependent property prediction model for the $k$-th property. Then we use the gradient of $E(z_t, c, t)$ to guide the reverse SDE as described in Eq. (6).
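In code, this linear combination is a one-liner (a sketch; `predictors`, `targets` and `scales` are assumed lists holding the $g_k$, $c_k$ and $s_k$):

```python
def combined_energy(z_t, t, targets, predictors, scales):
    """E(z_t, c, t) = sum_k s_k * |g_k(z_t, t) - c_k|^2."""
    return sum(s * (g(z_t, t) - c) ** 2
               for g, c, s in zip(predictors, targets, scales))
```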
F.2 THE “U-BOUND” AND “#ATOMS” BASELINES
The “U-bound” and “#Atoms” baselines are from Hoogeboom et al. (2022). The “U-bound” baseline shuffles the labels in $D_b$ and then calculates the loss of the property prediction model $\phi_p$ on it, which can be regarded as an upper bound of the MAE. The “#Atoms” baseline predicts a quantum property $c$ using only the number of atoms in a molecule.
F.3 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
For the noise prediction network, we use the same setting as EDM (Hoogeboom et al., 2022) for a fair comparison, where the model is trained for ∼2000 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999.
The EGNN used in the energy function has 192 hidden features and 7 layers. We train it for 2000 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999.
During evaluation, we need to generate a set of molecules. Following the EDM (Hoogeboom et al., 2022), we firstly sample the number of atoms in a molecule M ∼ p(M) and the property value c ∼ p(c|M) (or c1 ∼ p(c1|M), c2 ∼ p(c2|M), . . . , cK ∼ p(cK |M) for multiple properties). Here p(M) is the distribution of molecule sizes on training data, and p(c|M) is the distribution of the property on training data. Then we generate a molecule given M, c.
F.4 GENERATING MOLECULES WITH TARGET STRUCTURES
Computation of Tanimoto similarity. Let $S_g$ be the set of bits that are set to 1 in the fingerprint of a generated molecule, and $S_t$ be the set of bits that are set to 1 in the target structure. The Tanimoto similarity is defined as $|S_g \cap S_t|/|S_g \cup S_t|$, where $|\cdot|$ denotes the number of elements in a set.
Experimental details on QM9. For the backbone of the noise prediction network, we use a three-layer MLP with 768, 512 and 192 hidden nodes as an embedding to encode the fingerprint, and add its output to the embedding of the atom features $h$, which is then fed into the following EGNN. The EGNN has 256 hidden features and 9 layers. We train it for 1500 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999. The energy function is trained for 1750 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999. Its EGNN has 192 hidden features and 7 layers. For the baseline cG-SchNet, we reproduce it using the public code. Since the default data split in cG-SchNet is different from ours, we train cG-SchNet under the same data split as ours for a fair comparison. We report results at 200 epochs (there is no gain on the similarity metric after 150 epochs). We evaluate the Tanimoto similarity on the whole test set.
Experimental details on GEOM-Drug. We use the same data split of GEOM-Drug as Hoogeboom et al. (2022), where the training, validation and test sets include 554K, 70K and 70K samples respectively. We train the noise prediction network and the energy function on the training set. For the noise prediction network, we use the recommended hyperparameters of EDM (Hoogeboom et al., 2022), where the EGNN has 256 hidden features and 4 layers, and the other parts are the same as those in QM9. We train it for 10 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999. The backbone of the energy function is the same as that in QM9, and we train the energy function for 14 epochs. We evaluate the Tanimoto similarity on 10K molecules randomly selected from the test set.
G ADDITIONAL RESULTS
G.1 ABLATION STUDY ON NOISE PREDICTION NETWORKS AND ENERGY GUIDANCE
We perform an ablation study on the conditioning, i.e., use conditional or unconditional noise prediction networks, and the energy guidance. When neither the conditioning nor the energy guidance is adopted, a single unconditional model can’t perform conditional generation, and therefore we only report its atom stability and the molecule stability. As shown in Table 5, Table 6 and Table 7, the conditional noise prediction network improves the MAE compared to the unconditional one, and the energy guidance improves the MAE compared to a sole conditional model. Both the conditioning and the energy guidance do not affect the atom stability and the molecule stability much.
G.2 RESULTS ON NOVELTY, ATOM STABILITY AND MOLECULE STABILITY
For completeness, we also report novelty, atom stability and molecule stability on 10K generated molecules. Below we briefly introduce these metrics.
• Novelty (Simonovsky & Komodakis, 2018) is the proportion of generated molecules that do not appear in the training set. Specifically, let $G$ be the set of generated molecules; the novelty is calculated as $1 - \frac{|G \cap D_b|}{|G|}$ (see the sketch after this list). Note that the novelty is evaluated on $D_b$, since the reproduced conditional EDM and our method are trained on $D_b$. This leads to an inflated value compared to the one evaluated on the whole dataset (Hoogeboom et al., 2022).
• Atom stability (AS) (Hoogeboom et al., 2022) is the proportion of atoms that have the right valency. Molecule stability (MS) (Hoogeboom et al., 2022) is the proportion of generated molecules where all atoms are stable. We use the official implementation for the two metrics from the EDM paper (Hoogeboom et al., 2022).
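A direct implementation of the novelty metric, with molecules represented by any hashable identifier such as a canonical SMILES string (a sketch):

```python
def novelty(generated, train_set):
    """Novelty = 1 - |G ∩ D_b| / |G|, with molecules as hashable identifiers."""
    g = set(generated)
    return 1.0 - len(g & set(train_set)) / len(g)
```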
As shown in Table 8 and Table 9, conditional EDM and EEGSDE with a small scaling factor have a slightly better stability, and EEGSDE with a large scaling factor has a slightly better novelty in general. The additional energy changes the distribution of generated molecules, which improves the novelty of generated molecules in general. Since there is a tradeoff between novelty and stability (see the caption of Table 5 in the EDM paper (Hoogeboom et al., 2022)), a slight decrease on the stability is possible when the scaling factor is large.
G.3 VISUALIZATION OF THE EFFECT OF THE SCALING FACTOR
We visualize the effect of the scaling factor in Figure 4, where the generated structures align better as the scaling factor grows.
G.4 REPRODUCE
We compare our reproduced results and the original results of conditional EDM (Hoogeboom et al., 2022) in Table 10. The results are consistent.
G.5 GENERATING MOLECULES WITH TARGET STRUCTURES ON GEOM-DRUG
We plot generated molecules on GEOM-Drug in Figure 5, and it can be observed that the atom types of generated molecules with EEGSDE often match the target better than conditional EDM.
G.6 THE DISTRIBUTIONS OF PROPERTIES
We plot the distribution of the properties of the training set, as well as the distribution of the properties of generated molecules (calculated by the Gaussian software). As shown in Figure 6, these distributions match well. | 1. What is the focus and contribution of the paper on 3D molecular generation?
2. What are the strengths of the proposed approach, particularly in terms of its equivariance and SDE formulation?
3. What are the weaknesses of the paper, especially regarding the presentation of results and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the method's ability to handle multi-property cases or its distinction from other equivariant models? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a new method for 3D molecular generation called energy-guided stochastic differential equations (EEGSDE), which introduces an equivariant diffusion model based on SDE formulations. The authors first introduce the problem setting of 3D molecular design, as well as past work in molecular design and related work including diffusion models and guidance of diffusion models. Subsequently, the paper introduces relevant background and formulations (3D molecule representation, equivariance and Center of Mass modeling) and outlines the mathematical details of EEG-SDE, including the SDE formulation and an extension for an equivariant SDE formulation based on an equivariant neural network backbone (EGNN). After outlining the entire algorithmic flow with all relevant and previously introduced components, the authors introduce a set of molecular design experiments based primarily on the QM9 dataset.
In their experiments, the authors first train EEG-SDE on the QM9 dataset and compare its performance to equivariant diffusion models (EDM) along a couple of metrics (MAE, Novelty, Atomic Stability, Molecular Stability). In their formulation, the authors also include a scaling factor, s, which serves to modulate the strength of a given loss signal and is varied in many of the authors' experiments. The first set of presented results shows general competitiveness with EDM, with outperformance along some metrics and lower performance on others. The authors then show an experiment generating molecules based on a target structure, where the similarity of the generated molecule to the target structure is based on a molecular fingerprint. Based on the similarity metric, EEG-SDE appears to outperform EDM.
Strengths And Weaknesses
Strengths
The paper introduces a new equivariant molecular diffusion method and describes the different pieces, including various definitions and formulations, in great detail.
The results presented in the experiments seem to indicate some advantages of using EEG-SDE compared to EDM.
Weaknesses
Some of the figures, especially Figure 1 and Figure 4, could convey their overall message more clearly. In Figure 1, the different images could have additional labels to guide the reader as to what the different sections represent. In Figure 4, it is unclear what the authors are trying to show: the caption does not make the purpose clear, and the accompanying text is hard to find.
The results on the GEOM-Drug dataset are only briefly mentioned in the text. It would have been nice to see them included in a more cohesive manner, potentially in one of the prior tables.
The overall message from the results is somewhat unclear. It would be nice if the paper could summarize some key takeaways for EEGSDE based on the results of the experiments.
Additional Questions
Fitting to properties appears to be applicable to one property at a time given the scalar signal. How does the multi-property case shown in Figure 4 work in that instance? Is it mainly a loss over a vector? If so, how is it implemented?
Did you consider trying other equivariant models (SE3-Transformer, DimeNet, etc)? Why, why not?
It would be good to have the definitions for the metrics in Table 1 and Table 2 (Similarity, Novelty, Atomic Stability (AS), Molecular Stability (MS)). Putting them in the appendix is fine.
What is the distinction between EDM and this method? A clear distinction would provide more clarity on the novelty - from what I can tell it is the inclusion of the SDE formulation. As such, it would be good to first clarify the difference between the SDE formulation and standard diffusion models in general terms. Section 4.1 has SDE definitions, but not a contrast to standard diffusion.
How would you compare your work to Torsion Diffusion (Jing, Bowen, et al. "Torsional Diffusion for Molecular Conformer Generation." arXiv preprint arXiv:2206.01729 (2022).)?
Could you provide more details on the molecular fingerprint - how does it capture 3D similarity related to positions?
Clarity, Quality, Novelty And Reproducibility
Clarity
The formulation is detailed and clearly written. I think the clarity of the results can be improved, as indicated in the above sections.
Quality
The presented experiments appear to be done well. The general results sections could be improved by adding clearer conclusions based on the experiments.
Novelty
While the authors introduce a novel equivariant diffusion formulation, it does appear to be a bit incremental relative to the work performed by EDM, which is the primary baseline the authors compare against.
Reproducibility
The authors provide a good amount of detail in their paper and appendix, along with a reproducibility statement. |
ICLR | Title
Equivariant Energy-Guided SDE for Inverse Molecular Design
Abstract
Inverse molecular design is critical in material science and drug discovery, where the generated molecules should satisfy certain desirable properties. In this paper, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. Empirically, under the guidance of designed energy functions, EEGSDE significantly improves the baseline on QM9, in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
1 INTRODUCTION
The discovery of new molecules with desired properties is critical in many fields, such as drug and material design (Hajduk & Greer, 2007; Mandal et al., 2009; Kang et al., 2006; Pyzer-Knapp et al., 2015). However, brute-force search in the overwhelming molecular space is extremely challenging. Recently, inverse molecular design (Zunger, 2018) has provided an efficient way to explore the molecular space, which directly predicts promising molecules that exhibit desired properties.
A natural way of inverse molecular design is to train a conditional generative model (Sanchez-Lengeling & Aspuru-Guzik, 2018). Formally, it learns a distribution of molecules conditioned on certain properties from data, and new molecules are predicted by sampling from the distribution with the condition set to desired properties. Among them, equivariant diffusion models (EDM) (Hoogeboom et al., 2022) leverage the current state-of-the-art diffusion models (Ho et al., 2020), which involve a forward process to perturb data and a reverse process to generate 3D molecules conditionally or unconditionally. While EDM generates stable and valid 3D molecules, we argue that a single conditional generative model is insufficient for generating accurate molecules that exhibit desired properties (see Table 1 and Table 3 for an empirical verification).
In this work, we propose equivariant energy-guided stochastic differential equations (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE formalizes the generation process as an equivariant stochastic differential equation, and plugs in energy functions to improve the controllability of generation. Formally, we show that EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations.
We apply EEGSDE to various applications by carefully designing task-specific energy functions. When targeted to quantum properties, EEGSDE is able to generate more accurate molecules than
∗Equal contribution. †Correspondence to: C. Li and J. Zhu.
EDM, e.g., reducing the mean absolute error by more than 30% on the dipole moment property. When targeted to specific molecular structures, EEGSDE better captures the structure information in molecules than EDM, e.g., improving the similarity to target structures by more than 10%. Furthermore, EEGSDE is able to generate molecules targeted to multiple properties by combining the corresponding energy functions linearly. These demonstrate that our EEGSDE enables a flexible and controllable generation of molecules, providing a smart way to explore the chemical space.
2 RELATED WORK
Diffusion models are initially proposed by Sohl-Dickstein et al. (2015). Recently, they are better understood in theory by connecting them to score matching and stochastic differential equations (SDE) (Ho et al., 2020; Song et al., 2020). After that, diffusion models have shown strong empirical performance in many applications (Dhariwal & Nichol, 2021; Ramesh et al., 2022; Chen et al., 2020; Kong et al., 2020). There are also variants proposed to improve or accelerate diffusion models (Nichol & Dhariwal, 2021; Vahdat et al., 2021; Dockhorn et al., 2021; Bao et al., 2022b;a; Salimans & Ho, 2022; Lu et al., 2022).
Guidance is a technique to control the generation process of diffusion models. Initially, Song et al. (2020); Dhariwal & Nichol (2021) use classifier guidance to generate samples belonging to a class. Then, the guidance is extended to CLIP (Radford et al., 2021) for text to image generation, and semantic-aware energy (Zhao et al., 2022) for image-to-image translation. Prior guidance methods focus on image data, and are nontrivial to apply to molecules, since they do not consider the geometric symmetry. In contrast, our work proposes a general guidance framework for 3D molecules, where an invariant energy function is employed to leverage the geometric symmetry of molecules.
Molecule generation. Several works attempt to model molecules as 3D objects via deep generative models Nesterov et al. (2020); Gebauer et al. (2019); Satorras et al. (2021a); Hoffmann & Noé (2019); Hoogeboom et al. (2022). Among them, the most relevant one is the equivariant diffusion model (EDM) (Hoogeboom et al., 2022), which generates molecules in an iterative denoising manner. Benefiting from recent advances of diffusion models, EDM is stable to train and is able to generate high quality molecules. We provide a formal description of EDM in Section 3. Some other methods generate simplified representations of molecules, such as 1D SMILES strings (Weininger, 1988) and 2D graphs of molecules. These include variational autoencoders (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018; Simonovsky & Komodakis, 2018; Liu et al., 2018), normalizing flows (Madhawa et al., 2019; Zang & Wang, 2020; Luo et al., 2021), generative adversarial networks (Bian et al., 2019; Assouel et al., 2018), and autoregressive models (Popova et al., 2019; Flam-Shepherd et al., 2021). There are also methods on generating torsion angles in molecules. For
instance, Torsional Diffusion (Jing et al., 2022) employs the SDE formulation of diffusion models to model torsion angles in a given 2D molecular graph for the conformation generation task.
Inverse molecular design. Generative models have been applied to inverse molecular design. For example, conditional autoregressive models (Gebauer et al., 2022) and EDM (Hoogeboom et al., 2022) directly generate 3D molecules with desired quantum properties. Gebauer et al. (2019) also finetune pretrained generative models on a biased subset to generate 3D molecules with small HOMO-LUMO gaps. In contrast to these conditional generative models, our work further proposes a guidance method, a flexible way to control the generation process of molecules. Some other methods apply optimization techniques to search for molecules with desired properties, such as reinforcement learning (Zhou et al., 2019; You et al., 2018) and genetic algorithms (Jensen, 2019; Nigam et al., 2019). These optimization methods generally consider the 1D SMILES strings or 2D graphs of molecules, and the 3D information is not provided.
3 BACKGROUND
3D representation of molecules. Suppose a molecule has M atoms and let xi ∈ Rn (n = 3 in general) be the coordinate of the ith atom. The collection of coordinates x = (x1, . . . ,xM ) ∈ RMn determines the conformation of the molecule. In addition to the coordinate, each atom is also associated with an atom feature, e.g., the atom type. We use hi ∈ Rd to represent the atom feature of the ith atom, and use h = (h1, . . . ,hM ) ∈ RMd to represent the collection of atom features in a molecule. We use a tuple z = (x,h) to represent a molecule, which contains both the 3D geometry information and the atom feature information.
Equivariance and invariance. Suppose $R$ is a transformation. A distribution $p(x,h)$ is said to be invariant to $R$ if $p(x,h) = p(Rx,h)$ holds for all $x$ and $h$. Here $Rx = (Rx^1, \ldots, Rx^M)$ is applied to each coordinate. A function $(a^x, a^h) = f(x,h)$ that has two components $a^x, a^h$ in its output is said to be equivariant to $R$ if $f(Rx,h) = (Ra^x, a^h)$ holds for all $x$ and $h$. A function $f(x,h)$ is said to be invariant to $R$ if $f(Rx,h) = f(x,h)$ holds for all $x$ and $h$.
Zero CoM subspace. It has been shown that the invariance to translational and rotational transformations is an important factor for the success of 3D molecule modeling (Köhler et al., 2020; Xu et al., 2022). However, the translational invariance is impossible for a distribution in the full space $\mathbb{R}^{Mn}$ (Satorras et al., 2021a). Nevertheless, we can view two collections of coordinates $x$ and $y$ as equivalent if $x$ can be translated from $y$, since the translation doesn't change the identity of a molecule. Such an equivalence relation partitions the whole space $\mathbb{R}^{Mn}$ into disjoint equivalence classes. Indeed, all elements in the same equivalence class represent the same conformation, and we can use the element with zero center of mass (CoM), i.e., $\frac{1}{M}\sum_{i=1}^M x^i = 0$, as the specific representation. These elements collectively form the zero CoM linear subspace $X$ (Xu et al., 2022; Hoogeboom et al., 2022), and the rest of the paper always uses elements in $X$ to represent conformations.
Equivariant graph neural network. Satorras et al. (2021b) propose equivariant graph neural networks (EGNNs), which incorporate the equivariance inductive bias into neural networks. Specifically, (ax,ah) = EGNN(x,h) is a composition of L equivariant convolutional layers. The l-th layer takes the tuple (xl,hl) as the input and outputs an updated version (xl+1,hl+1), as follows:
$$m^{ij} = \Phi_m(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_m), \quad w^{ij} = \Phi_w(m^{ij}; \theta_w), \quad h_{l+1}^i = \Phi_h\Big(h_l^i, \sum_{j \neq i} w^{ij} m^{ij}; \theta_h\Big),$$

$$x_{l+1}^i = x_l^i + \sum_{j \neq i} \frac{x_l^i - x_l^j}{\|x_l^i - x_l^j\|_2 + 1}\,\Phi_x(h_l^i, h_l^j, \|x_l^i - x_l^j\|_2^2, e^{ij}; \theta_x),$$
where Φm,Φw,Φh,Φx are parameterized by fully connected neural networks with parameters θm,θw,θh,θx respectively, and eij are optional feature attributes. We can verify that these layers are equivariant to orthogonal transformations, which include rotational transformations as special cases. As their composition, the EGNN is also equivariant to orthogonal transformations. Furthermore, let EGNNh(x,h) = ah, i.e., the second component in the output of the EGNN. Then EGNNh(x,h) is invariant to orthogonal transformations.
Equivariant diffusion models (EDM) (Hoogeboom et al., 2022) are a variant of diffusion models for molecule data. EDMs gradually inject noise into the molecule $z = (x, h)$ via a forward process

$$q(z_{1:N}|z_0) = \prod_{n=1}^{N} q(z_n|z_{n-1}), \quad q(z_n|z_{n-1}) = \mathcal{N}_X(x_n|\sqrt{\alpha_n}\,x_{n-1}, \beta_n)\,\mathcal{N}(h_n|\sqrt{\alpha_n}\,h_{n-1}, \beta_n), \tag{1}$$

where $\alpha_n$ and $\beta_n$ represent the noise schedule and satisfy $\alpha_n + \beta_n = 1$, and $\mathcal{N}_X$ represents the Gaussian distribution in the zero CoM subspace $X$ (see its formal definition in Appendix A.2). Let $\bar\alpha_n = \alpha_1\alpha_2\cdots\alpha_n$, $\bar\beta_n = 1 - \bar\alpha_n$ and $\tilde\beta_n = \beta_n\bar\beta_{n-1}/\bar\beta_n$. To generate samples, the forward process is reversed using a Markov chain:

$$p(z_{0:N}) = p(z_N)\prod_{n=1}^{N} p(z_{n-1}|z_n), \quad p(z_{n-1}|z_n) = \mathcal{N}_X(x_{n-1}|\mu_n^x(z_n), \tilde\beta_n)\,\mathcal{N}(h_{n-1}|\mu_n^h(z_n), \tilde\beta_n). \tag{2}$$
Here $p(z_N) = \mathcal{N}_X(x_N|\mathbf{0}, 1)\,\mathcal{N}(h_N|\mathbf{0}, 1)$. The mean $\mu_n(z_n) = (\mu_n^x(z_n), \mu_n^h(z_n))$ is parameterized by a noise prediction network $\epsilon_\theta(z_n, n)$, which is trained using an MSE loss, as follows:

$$\mu_n(z_n) = \frac{1}{\sqrt{\alpha_n}}\Big(z_n - \frac{\beta_n}{\sqrt{\bar\beta_n}}\,\epsilon_\theta(z_n, n)\Big), \qquad \min_\theta \mathbb{E}_n \mathbb{E}_{q(z_0, z_n)}\, w(n)\,\|\epsilon_\theta(z_n, n) - \epsilon_n\|^2,$$

where $\epsilon_n = \frac{z_n - \sqrt{\bar\alpha_n}\,z_0}{\sqrt{\bar\beta_n}}$ is the standard Gaussian noise injected into $z_0$ and $w(n)$ is the weight term. Hoogeboom et al. (2022) show that the distribution of generated samples $p(z_0)$ is invariant to rotational transformations if the noise prediction network is equivariant to orthogonal transformations. In Section 4.2, we extend this proposition to the SDE formulation of molecular diffusion modeling.
Hoogeboom et al. (2022) also present a conditional version of EDM for inverse molecular design by adding an extra input of the condition c to the noise prediction network as ϵθ(zn, c, n).
4 EQUIVARIANT ENERGY-GUIDED SDE
In this part, we introduce our equivariant energy-guided SDE (EEGSDE), as illustrated in Figure 1. EEGSDE is based on the SDE formulation of molecular diffusion modeling, which is described in Section 4.1 and Section 4.2. Then, we formally present our EEGSDE that incorporates an energy function to guide the molecular generation in Section 4.3. We provide derivations in Appendix A.
4.1 SDE IN THE PRODUCT SPACE
Recall that a molecule is represented as a tuple $z = (x, h)$, where $x = (x^1, \ldots, x^M) \in X$ represents the conformation and $h = (h^1, \ldots, h^M) \in \mathbb{R}^{Md}$ represents atom features. Here $X = \{x \in \mathbb{R}^{Mn} : \frac{1}{M}\sum_{i=1}^{M} x^i = 0\}$ is the zero CoM subspace mentioned in Section 3, and $d$ is the feature dimension. We first introduce a continuous-time diffusion process $\{z_t\}_{0 \le t \le T}$ in the product space $X \times \mathbb{R}^{Md}$, which gradually adds noise to $x$ and $h$. This can be described by the forward SDE:

$$dz = f(t)z\,dt + g(t)\,d(w^x, w^h), \quad z_0 \sim q(z_0), \tag{3}$$

where $f(t)$ and $g(t)$ are two scalar functions, $w^x$ and $w^h$ are independent standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and the SDE starts from the data distribution $q(z_0)$. Note that $w^x$ can be constructed by subtracting the CoM of a standard Wiener process $w$ in $\mathbb{R}^{Mn}$, i.e., $w^x = w - \bar{w}$, where $\bar{w} = \frac{1}{M}\sum_{i=1}^{M} w^i$ is the CoM of $w = (w^1, \ldots, w^M)$. It can be shown that the SDE has a linear Gaussian transition kernel $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$ from $z_s$ to $z_t$, where $0 \le s < t \le T$. Specifically, there exist two scalars $\alpha_{t|s}$ and $\beta_{t|s}$ such that $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\mathcal{N}_X$ denotes the Gaussian distribution in the subspace $X$; see Appendix A.2 for its formal definition. Indeed, the forward process of EDM in Eq. (1) is a discretization of the forward SDE in Eq. (3).
To generate molecules, we reverse Eq. (3) from $T$ to $0$. Such a time reversal forms another SDE, which can be represented in both the score function form and the noise prediction form:

$$dz = \Big[f(t)z - g(t)^2 \underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T). \tag{4}$$

Here $q_t(z)$ is the marginal distribution of $z_t$, $\nabla_x \log q_t(z)$ is the gradient of $\log q_t(z)$ w.r.t. $x$,¹ $\overline{\nabla_x \log q_t(z)} = \frac{1}{M}\sum_{i=1}^{M} \nabla_{x^i} \log q_t(z)$ is the CoM of $\nabla_x \log q_t(z)$, $dt$ is the infinitesimal negative timestep, $\tilde{w}^x$ and $\tilde{w}^h$ are independent reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected into $z_0$. Compared to the original SDE introduced by Song et al. (2020), our reverse SDE in Eq. (4) additionally subtracts the CoM of $\nabla_x \log q_t(z)$. This ensures that $x_t$ always stays in the zero CoM subspace as time flows back. To sample from the reverse SDE in Eq. (4), we use a noise prediction network $\epsilon_\theta(z_t, t)$ to estimate $\mathbb{E}_{q(z_0|z_t)}\epsilon_t$, by minimizing the MSE loss $\min_\theta \mathbb{E}_t \mathbb{E}_{q(z_0, z_t)} w(t)\,\|\epsilon_\theta(z_t, t) - \epsilon_t\|^2$, where $t$ is uniformly sampled from $[0, T]$, and $w(t)$ controls the weight of the loss term at time $t$. Note that the noise $\epsilon_t$ lives in the product space $X \times \mathbb{R}^{Md}$, so we subtract the CoM of the predicted noise of $x_t$ to ensure that $\epsilon_\theta(z_t, t)$ also lies in the product space.
Substituting $\epsilon_\theta(z_t, t)$ into Eq. (4), we get an approximate reverse-time SDE parameterized by $\theta$:

$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t)\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T), \tag{5}$$

where $p_T(z_T) = \mathcal{N}_X(x_T|\mathbf{0}, 1)\,\mathcal{N}(h_T|\mathbf{0}, 1)$ is a Gaussian prior in the product space that approximates $q_T(z_T)$. We define $p_\theta(z_0)$ as the marginal distribution of Eq. (5) at time $t = 0$, which is the distribution of our generated samples. Similarly to the forward process, the reverse process of EDM in Eq. (2) is a discretization of the reverse SDE in Eq. (5).
4.2 EQUIVARIANT SDE
To leverage the geometric symmetry in 3D molecular conformation, $p_\theta(z_0)$ should be invariant to translational and rotational transformations. As mentioned in Section 3, the translational invariance of $p_\theta(z_0)$ is already satisfied by considering the zero CoM subspace. The rotational invariance can be satisfied if the noise prediction network is equivariant to orthogonal transformations, as summarized in the following theorem:

Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n \times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
As mentioned in Section 3, the EGNN satisfies the equivariance constraint, and we parameterize ϵθ(zt, t) using an EGNN following Hoogeboom et al. (2022). See details in Appendix D.
4.3 EQUIVARIANT ENERGY-GUIDED SDE
Now we describe the equivariant energy-guided SDE (EEGSDE), which guides the generated molecules of Eq. (5) towards desired properties $c$ by leveraging a time-dependent energy function $E(z, c, t)$:

$$dz = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, t) + \underbrace{\big(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t)\big)}_{\text{energy gradient taken in the product space}}\Big)\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T), \tag{6}$$
¹While $q_t(z)$ is defined in $X \times \mathbb{R}^{Md}$, its domain can be extended to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$ and the gradient is valid. See Remark 1 in Appendix A.2 for details.
which defines a distribution $p_\theta(z_0|c)$ conditioned on the property $c$. Here the CoM $\overline{\nabla_x E(z, c, t)}$ of the gradient is subtracted to keep the SDE in the product space, which ensures the translational invariance of $p_\theta(z_0|c)$. Besides, the rotational invariance is satisfied by using an energy invariant to orthogonal transformations, as summarized in the following theorem:
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Note that we can also use a conditional model $\epsilon_\theta(z, c, t)$ in Eq. (6); see Appendix C for details. To sample from $p_\theta(z_0|c)$, various solvers can be used for Eq. (6), such as the Euler-Maruyama method (Song et al., 2020) and the Analytic-DPM sampler (Bao et al., 2022b;a). We present the Euler-Maruyama method as an example in Algorithm 1 in Appendix B.
4.4 HOW TO DESIGN THE ENERGY FUNCTION
Our EEGSDE is a general framework, which can be applied to various applications by specifying different energy functions $E(z, c, t)$. For example, we can design the energy function according to the consistency between the molecule $z$ and the property $c$, where a low energy indicates good consistency. As the generation process in Eq. (6) proceeds, the gradient of the energy function encourages generated molecules to have a low energy, and consequently good consistency. Thus, we can expect the generated molecule $z$ to align well with the property $c$. In the rest of the paper, we specify the choice of energy functions, and show that these energies improve controllable molecule generation targeted to quantum properties, molecular structures, and even a combination of them.
Remark. The term “energy” in this paper refers to a general notion in statistical machine learning: a scalar function that captures dependencies between input variables (LeCun et al., 2006). Thus, the “energy” in this paper can be set to an MSE loss when we want to capture how well the molecule aligns with the property (as done in Section 5). Also, the “energy” in this paper does not exclude potential energy or free energy in chemistry; these might be applicable when we want to generate molecules with small potential energy or free energy.
5 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
Let $c \in \mathbb{R}$ be a certain quantum property. To generate molecules with the desired property, we set the energy function as the squared error between the predicted property and the desired property, $E(z_t, c, t) = s\,|g(z_t, t) - c|^2$, where $g(z_t, t)$ is a time-dependent property prediction model, and $s$ is the scaling factor controlling the strength of the guidance. Specifically, $g(z_t, t)$ can be parameterized by equivariant models such as EGNN (Satorras et al., 2021b), SE3-Transformer (Fuchs et al., 2020) and DimeNet (Klicpera et al., 2020) to ensure the invariance of $E(z_t, c, t)$, as long as they perform well in the task of property prediction. In this paper we consider EGNN. We provide details on the parameterization and the training objective in Appendix E. We can also generate molecules targeted to multiple quantum properties by combining energy functions linearly (see details in Appendix F.1).
5.1 SETUP
We evaluate on QM9 (Ramakrishnan et al., 2014), which contains quantum properties and coordinates of ∼130k molecules with up to nine heavy atoms from (C, N, O, F). Following EDM, we split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples respectively. The training set is further divided into two equal, non-overlapping halves $D_a$ and $D_b$. The noise prediction network and the time-dependent property prediction model of the energy function are trained on $D_b$ separately. By default, EEGSDE uses a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6), since we find it generates more accurate molecules than an unconditional one (see Appendix G.1 for an ablation study). See more details in Appendix F.3.
Evaluation metric: Following EDM, we use the mean absolute error (MAE) to evaluate how well generated molecules align with the desired property. Specifically, we train another property prediction model $\phi_p$ (Satorras et al., 2021b) on $D_a$, and the MAE is calculated as $\frac{1}{K}\sum_{i=1}^{K} |\phi_p(z_i) - c_i|$, where $z_i$ is a generated molecule and $c_i$ is its desired property. We generate $K = 10{,}000$ samples for evaluation with $\phi_p$. For fairness, the property prediction model $\phi_p$ is different from $g(z_t, t)$ used in the energy function, and they are trained on the two non-overlapping training subsets $D_a$ and $D_b$ respectively. This ensures that no information leak occurs when evaluating our EEGSDE. To further verify the effectiveness of EEGSDE, we also calculate the MAE without relying on a neural network. Specifically, we use the Gaussian software (which calculates properties according to theories of quantum chemistry without using neural networks) to compute the properties of 100 generated molecules. For completeness, we also report the novelty, the atom stability and the molecule stability following Hoogeboom et al. (2022) in Appendix G.2, although they are not our main focus.
Baseline: The most direct baseline is conditional EDM, which only adopts a conditional noise prediction network. We also compare two additional baselines “U-bound” and “#Atoms” from Hoogeboom et al. (2022) (see Appendix F.2 for details).
5.2 RESULTS
Following Hoogeboom et al. (2022), we consider six quantum properties in QM9: polarizability $\alpha$, highest occupied molecular orbital energy $\varepsilon_{\mathrm{HOMO}}$, lowest unoccupied molecular orbital energy $\varepsilon_{\mathrm{LUMO}}$, HOMO-LUMO gap $\Delta\varepsilon$, dipole moment $\mu$ and heat capacity $C_v$. First, we generate molecules targeted to one of these six properties. As shown in Table 1, with the energy guidance, our EEGSDE has a significantly better MAE than conditional EDM on all properties. Remarkably, with a proper scaling factor $s$, the MAE of EEGSDE is reduced by more than 25% compared to conditional EDM on the properties $\Delta\varepsilon$ and $\varepsilon_{\mathrm{LUMO}}$, and by more than 30% on $\mu$. Moreover, as shown in Table 2, our EEGSDE still has a better MAE under the evaluation by the Gaussian software, which further verifies its effectiveness. We further generate molecules targeted to multiple quantum properties by combining energy functions linearly. As shown in Table 3, our EEGSDE still has a significantly better MAE than conditional EDM.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the desired properties than molecules generated by the conditional EDM baseline (for both the single-property and multiple-property cases). As a consequence, EEGSDE is able to explore the chemical space in a guided way to generate promising molecules for downstream applications such as virtual screening, which may benefit drug and material discovery.
6 GENERATING MOLECULES WITH TARGET STRUCTURES
Following Gebauer et al. (2022), we use the molecular fingerprint to encode the structure information of a molecule. The molecular fingerprint $c = (c_1, \ldots, c_L)$ is a series of bits that capture the presence or absence of substructures in the molecule. Specifically, a substructure is mapped to a specific position $l$ in the bitmap, and the corresponding bit $c_l$ is 1 if the substructure exists in the molecule and 0 otherwise. To generate molecules with a specific structure (encoded by the fingerprint $c$), we set the energy function to the squared error $E(z_t, c, t) = s\,\|m(z_t, t) - c\|^2$ between a time-dependent multi-label classifier $m(z_t, t)$ and $c$. Here $s$ is the scaling factor, and $m(z_t, t)$ is trained with a binary cross-entropy loss to predict the fingerprint, as detailed in Appendix E.2. Note that the choice of the energy function is flexible and can differ from the training loss of $m(z_t, t)$. In initial experiments, we also tried a binary cross-entropy loss for the energy function, but we found it makes the generation process unstable. The multi-label classifier $m(z_t, t)$ is parameterized in a similar way to the property prediction model $g(z_t, t)$ in Section 5, and we present details in Appendix E.1.
6.1 SETUP
We evaluate on QM9 and GEOM-Drug. We train our method (including the noise prediction network and the multi-label classifier) on the whole training set. By default, we use a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6) for better performance. See more details in Appendix F.4.
Evaluation metric: To measure how the structure of a generated molecule aligns with the target one, we use the Tanimoto similarity (Gebauer et al., 2022), which captures similarity between structures by comparing their fingerprints.
Baseline: The most direct baseline is conditional EDM (Hoogeboom et al., 2022), which only adopts a conditional noise prediction network without the guidance of an energy model. We also consider cG-SchNet (Gebauer et al., 2022), which generates molecules in an autoregressive manner.
6.2 RESULTS
As shown in Table 4, EEGSDE significantly improves the similarity between target structures and generated structures compared to conditional EDM and cG-SchNet on QM9. Also note that, within a proper range, a larger scaling factor results in a better similarity, and EEGSDE with s=1 improves the similarity by more than 10% compared to conditional EDM. In Figure 2, we plot generated molecules of conditional EDM and EEGSDE (s=1) targeted to specific structures, where our EEGSDE aligns better with them. We further visualize the effect of the scaling factor in Appendix G.3, where the generated structures align better as the scaling factor grows. These results demonstrate that our EEGSDE captures the structure information in molecules well.
We also perform experiments on the more challenging GEOM-Drug (Axelrod & Gomez-Bombarelli, 2022) dataset, and we train the conditional EDM baseline following the default setting of Hoogeboom et al. (2022). As shown in Table 4, we find that the conditional EDM baseline has a similarity of 0.165, which is much lower than the value on QM9. We hypothesize this is because molecules in GEOM-Drug have many more atoms than those in QM9 and a more complex structure, and the default setting in Hoogeboom et al. (2022) is suboptimal. For example, the conditional EDM on GEOM-Drug has fewer parameters than the conditional EDM on QM9 (15M vs. 26M), which is insufficient to capture the structure information. Nevertheless, our EEGSDE still improves the similarity by ∼17%. We provide generated molecules on GEOM-Drug in Appendix G.5. Finally, we demonstrate that our EEGSDE is a flexible framework to generate molecules targeted to multiple properties, which is often the practical case. We additionally target the quantum property α (polarizability) on QM9 by combining the energy function for structures in this section with the energy function for quantum properties in Section 5. Here we choose α = 100 Bohr³, which is a relatively large value, and we expect it to encourage less isometrically shaped structures. As shown in Figure 3, the generated molecule aligns better with the target structure as the scaling factor s₁ grows; meanwhile, a ring substructure in the generated molecule vanishes as the scaling factor for polarizability s₂ grows, leading to a less isometrically shaped structure, as expected.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the target structure than molecules generated by the conditional EDM baseline. Besides, EEGSDE can generate molecules targeted to both specific structures and desired quantum properties by combining energy functions linearly. As a result, EEGSDE may benefit practical cases in molecular design when multiple properties should be considered at the same time.
7 CONCLUSION
This work presents equivariant energy-guided SDE (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. EEGSDE significantly improves the conditional EDM baseline in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
ACKNOWLEDGMENTS
We thank Han Guo and Zhen Jia for their help with their expertise in chemistry. This work was supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); Beijing Outstanding Young Scientist Program NO. BJJWZYJH012019100020098; a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University; the Fundamental Research Funds for the Central Universities, and the Research Funds of Renmin University of China (22XNKJ13). J.Z. was also supported by the XPlorer Prize. C. Li was also sponsored by Beijing Nova Program.
ETHICS STATEMENT
Inverse molecular design is critical in fields like material science and drug discovery. Our EEGSDE is a flexible framework for inverse molecular design and thus might benefit these fields. Currently the negative consequences are not obvious.
REPRODUCIBILITY STATEMENT
Our code is included in the supplementary material. The implementation of our experiment is described in Section 5 and Section 6. Further details such as the hyperparameters of training and model backbones are provided in Appendix F. We provide complete proofs and derivations of all theoretical results in Appendix A.
A DERIVATIONS
A.1 ZERO COM SUBSPACE
The zero CoM subspace $X = \{x \in \mathbb{R}^{Mn} : \sum_{i=1}^{M} x^i = 0\}$ is an $(M-1)n$-dimensional subspace of $\mathbb{R}^{Mn}$. Therefore, there exists an isometric isomorphism $\phi$ from $\mathbb{R}^{(M-1)n}$ to $X$, i.e., $\phi$ is a linear bijection from $\mathbb{R}^{(M-1)n}$ to $X$, and $\|\phi(\hat{x})\|_2 = \|\hat{x}\|_2$ for all $\hat{x} \in \mathbb{R}^{(M-1)n}$. We use $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ to represent the matrix corresponding to $\phi$, so we have $\phi(\hat{x}) = A_\phi \hat{x}$. An important property of the isometric isomorphism is that $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$. We show the proof as follows.

Proposition 1. Suppose $\phi$ is an isometric isomorphism from $\mathbb{R}^{(M-1)n}$ to $X$, and let $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ be the matrix corresponding to $\phi$. Then we have $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$, where $\bar{x} = \frac{1}{M}\sum_{i=1}^{M} x^i$.
Proof. We consider a new subspace of $\mathbb{R}^{Mn}$, $X^\perp = \{x \in \mathbb{R}^{Mn} : x \perp X\}$, i.e., the orthogonal complement of $X$. We can verify that $X^\perp = \{x \in \mathbb{R}^{Mn} : x^1 = x^2 = \cdots = x^M\}$, and $X^\perp$ is $n$-dimensional. Thus, there exists an isometric isomorphism $\psi$ from $\mathbb{R}^n$ to $X^\perp$. Let $A_\psi \in \mathbb{R}^{Mn \times n}$ represent the matrix corresponding to $\psi$. Then we define $\lambda(\hat{x}, \hat{y}) = \phi(\hat{x}) + \psi(\hat{y})$, where $\hat{x} \in \mathbb{R}^{(M-1)n}$ and $\hat{y} \in \mathbb{R}^n$. The image of $\lambda$ is $\{x + y : x \in X, y \in X^\perp\} = \mathbb{R}^{Mn}$. Therefore, $\lambda$ is a linear bijection from $\mathbb{R}^{Mn}$ to $\mathbb{R}^{Mn}$, and the matrix corresponding to $\lambda$ is $A_\lambda = [A_\phi, A_\psi]$. Furthermore, $\lambda$ is an isometric isomorphism, since $\|\lambda(\hat{x}, \hat{y})\|^2 = \|\phi(\hat{x})\|^2 + \|\psi(\hat{y})\|^2 + 2\langle \phi(\hat{x}), \psi(\hat{y})\rangle = \|\hat{x}\|^2 + \|\hat{y}\|^2 = \|(\hat{x}^\top, \hat{y}^\top)^\top\|^2$. This means $A_\lambda$ is an orthogonal transformation. Therefore, $A_\lambda A_\lambda^\top = A_\phi A_\phi^\top + A_\psi A_\psi^\top = I$ and $A_\phi A_\phi^\top x + A_\psi A_\psi^\top x = x$. Since $A_\phi A_\phi^\top x \in X$ and $A_\psi A_\psi^\top x \in X^\perp$, we conclude that $A_\phi A_\phi^\top x$ is the orthogonal projection of $x$ onto $X$, which is exactly $x - \bar{x}$.
Since $\mathbb{R}^{(M-1)n}$ and $X$ are two intrinsically equivalent spaces, an equivalence of distributions in these two spaces can also be established, as shown in the following propositions.

Proposition 2. Suppose $x$ is a random vector distributed in $X$, and $\hat{x} = \phi^{-1}(x)$ is its equivalent representation in $\mathbb{R}^{(M-1)n}$. If $x \sim q(x)$, then $\hat{x} \sim \hat{q}(\hat{x})$, where $\hat{q}(\hat{x}) = q(\phi(\hat{x}))$.

Proposition 3. Suppose $x$ and $y$ are two random vectors distributed in $X$, and $\hat{x} = \phi^{-1}(x)$, $\hat{y} = \phi^{-1}(y)$ are their equivalent representations in $\mathbb{R}^{(M-1)n}$. If $x|y \sim q(x|y)$, then $\hat{x}|\hat{y} \sim \hat{q}(\hat{x}|\hat{y})$, where $\hat{q}(\hat{x}|\hat{y}) = q(\phi(\hat{x})|\phi(\hat{y}))$.
A.2 SDE IN THE PRODUCT SPACE
Definition 1 (Gaussian distributions in the zero CoM subspace). Suppose $\mu \in X$. Let $\mathcal{N}_X(x|\mu, \sigma^2) := (2\pi\sigma^2)^{-(M-1)n/2}\exp\big(-\frac{1}{2\sigma^2}\|x - \mu\|_2^2\big)$, which is the isotropic Gaussian distribution with mean $\mu$ and variance $\sigma^2$ in the zero CoM subspace $X$.

Proposition 4 (Transition kernels of an SDE in the zero CoM subspace). Suppose $dx = f(t)x\,dt + g(t)\,dw^x$, $x_0 \sim q(x_0)$ is an SDE in the zero CoM subspace. Then the transition kernel from $x_s$ to $x_t$ ($0 \le s < t \le T$) can be expressed as $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$, where $\alpha_{t|s} = \exp\big(2\int_s^t f(\tau)\,d\tau\big)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,d\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$.
Proof. First, we map the process $\{x_t\}_{t=0}^T$ in the zero CoM subspace to the equivalent space $\mathbb{R}^{(M-1)n}$ through the isometric isomorphism $\phi$ introduced in Appendix A.1. This produces a new process $\{\hat{x}_t\}_{t=0}^T$, where $\hat{x}_t = \phi^{-1}(x_t)$. By applying $\phi^{-1}$ to the SDE in the zero CoM subspace, we know that $\hat{x}_t = \phi^{-1}(x_t)$ satisfies the following SDE in $\mathbb{R}^{(M-1)n}$:

$$d\hat{x} = f(t)\hat{x}\,dt + g(t)\,d\hat{w}, \quad \hat{x}_0 \sim \hat{q}(\hat{x}_0), \tag{7}$$

where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n}$ and $\hat{q}(\hat{x}_0) = q(\phi(\hat{x}_0))$. According to Song et al. (2020), the transition kernel of Eq. (7) from $\hat{x}_s$ to $\hat{x}_t$ ($s < t$) can be expressed as $\hat{q}(\hat{x}_t|\hat{x}_s) = \mathcal{N}(\hat{x}_t|\sqrt{\alpha_{t|s}}\,\hat{x}_s, \beta_{t|s}I)$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\,d\tau)$ and $\beta_{t|s} = \alpha_{t|s}\int_s^t g(\tau)^2/\alpha_{\tau|s}\,d\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$. According to Proposition 3, the transition kernel $q(x_t|x_s)$ from $x_s$ to $x_t$ satisfies

$$\begin{aligned} q(x_t|x_s) &= \hat{q}(\phi^{-1}(x_t)|\phi^{-1}(x_s)) = (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\big\|\phi^{-1}(x_t) - \sqrt{\alpha_{t|s}}\,\phi^{-1}(x_s)\big\|_2^2\Big) \\ &= (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\big\|\phi^{-1}(x_t - \sqrt{\alpha_{t|s}}\,x_s)\big\|_2^2\Big) && \text{// linearity of } \phi^{-1} \\ &= (2\pi\beta_{t|s})^{-(M-1)n/2}\exp\Big(-\frac{1}{2\beta_{t|s}}\big\|x_t - \sqrt{\alpha_{t|s}}\,x_s\big\|_2^2\Big) && \text{// norm preservation of } \phi^{-1} \\ &= \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s}). \end{aligned}$$
Proposition 5 (Transition kernels of the SDE in the product space). Suppose $dz = f(t)z\,dt + g(t)\,d(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then the transition kernel from $z_s$ to $z_t$ ($0 \le s < t \le T$) can be expressed as $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$, where $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$. Here $\alpha_{t|s}$ and $\beta_{t|s}$ are defined as in Proposition 4.

Proof. Since $w^x$ and $w^h$ are independent of each other, the transition kernel from $z_s$ to $z_t$ can be factorized as $q(z_t|z_s) = q(x_t|x_s)\,q(h_t|h_s)$, where $q(x_t|x_s)$ is the transition kernel of $dx = f(t)x\,dt + g(t)\,dw^x$ and $q(h_t|h_s)$ is the transition kernel of $dh = f(t)h\,dt + g(t)\,dw^h$. According to Proposition 4 and Song et al. (2020), we have $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\,x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\,h_s, \beta_{t|s})$.
Remark 1. The marginal distribution of $z_t$ is

$$q_t(z_t) = \int_X \int_{\mathbb{R}^{Md}} q(z_0)\,q(x_t|x_0)\,q(h_t|h_0)\,dh_0\,\lambda(dx_0),$$

where $\lambda$ is the Lebesgue measure in the zero CoM subspace $X$. While $q_t(z_t)$ is a distribution in $X \times \mathbb{R}^{Md}$, it has a natural differentiable extension to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$, since $q(x_t|x_0) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|0}}\,x_0, \beta_{t|0})$ has a differentiable extension to $\mathbb{R}^{Mn}$ according to Definition 1. Thus, we can take the gradient of $q_t(z_t)$ w.r.t. $x_t$ in the whole space $\mathbb{R}^{Mn}$.

Proposition 6 (Time reversal of the SDE in the product space). Suppose $dz = f(t)z\,dt + g(t)\,d(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then its time reversal satisfies the following reverse-time SDE, which can be represented in both the score function form and the noise prediction form:
$$dz = \Big[f(t)z - g(t)^2 \underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)}\epsilon_t}_{\text{noise prediction form}}\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T),$$

where $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected into $z_0$.

Furthermore, we have $\nabla_x \log q_t(x) - \overline{\nabla_x \log q_t(x)} = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(x_0|x_t)}\epsilon_t$, where $\epsilon_t = \frac{x_t - \sqrt{\alpha_{t|0}}\,x_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in the zero CoM subspace injected into $x_0$.
Proof. Let $\hat{z}_t = (\hat{x}_t, h_t)$, where $\hat{x}_t = \phi^{-1}(x_t)$ as introduced in the proof of Proposition 4. Then $\{\hat{z}_t\}_{t=0}^T$ is a process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ determined by the following SDE:

$$d\hat{z} = f(t)\hat{z}\,dt + g(t)\,d\hat{w}, \quad \hat{z}_0 \sim \hat{q}(\hat{z}_0), \tag{8}$$

where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ and $\hat{q}(\hat{z}_0) = q(\phi(\hat{x}_0), h_0)$. According to Song et al. (2020), Eq. (8) has a time reversal:

$$d\hat{z} = [f(t)\hat{z} - g(t)^2 \nabla_{\hat{z}} \log \hat{q}_t(\hat{z})]\,dt + g(t)\,d\tilde{w}, \quad \hat{z}_T \sim \hat{q}_T(\hat{z}_T), \tag{9}$$
where $\hat{q}_t(\hat{z})$ is the marginal distribution of $\hat{z}_t$, which satisfies $\hat{q}_t(\hat{z}_t) = q_t(\phi(\hat{x}_t), h_t)$ according to Proposition 2, and $\tilde{w}$ is the reverse-time standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$. Then we apply the linear transformation $T\hat{z} = (\phi(\hat{x}), h)$ to Eq. (9), which maps $\hat{z}_t$ back to $z_t$. This yields

$$dz = [f(t)z - g(t)^2\, T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))]\,dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim q_T(z_T). \tag{10}$$

Here $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))$ can be expressed as $(\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})), \nabla_h \log \hat{q}_t(\hat{z}))$. Since $\nabla_{\hat{x}} \log \hat{q}_t(\hat{z}) = A_\phi^\top \nabla_x \log q_t(x, h) = A_\phi^\top \nabla_x \log q_t(z)$, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = A_\phi A_\phi^\top \nabla_x \log q_t(z)$, where $A_\phi$ represents the matrix corresponding to $\phi$. According to Proposition 1, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = \nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)}$. Besides, $\nabla_h \log \hat{q}_t(\hat{z}) = \nabla_h \log q_t(z)$. Thus, Eq. (10) can be written as

$$dz = [f(t)z - g(t)^2(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\; \nabla_h \log q_t(z))]\,dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h),$$

which is the score function form of the reverse-time SDE.
We can also write the score function in Eq. (9) as $\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}) = \mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} \nabla_{\hat{z}} \log \hat{q}(\hat{z}_t|\hat{z}_0) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)}\hat{\epsilon}_t$, where $\hat{\epsilon}_t = \frac{\hat{z}_t - \sqrt{\alpha_{t|0}}\,\hat{z}_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected into $\hat{z}_0$. With this expression, we have $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z})) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} T(\hat{\epsilon}_t) = -\frac{1}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(z_0|z_t)}\epsilon_t$, where $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\,z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected into $z_0$. Thus, Eq. (10) can also be written as

$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\mathbb{E}_{q(z_0|z_t)}\epsilon_t\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h),$$

which is the noise prediction form of the reverse-time SDE.
A.3 EQUIVARIANCE
Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n \times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n \times n}$ is an orthogonal transformation. Let $z_t^\theta = (x_t^\theta, h_t^\theta)$ ($0 \le t \le T$) be the process determined by Eq. (5). Let $y_t^\theta = Rx_t^\theta$ and $u_t^\theta = (y_t^\theta, h_t^\theta)$. We use $p_t^u(u_t)$ and $p_t^z(z_t)$ to denote the distributions of $u_t^\theta$ and $z_t^\theta$ respectively, and they satisfy $p_t^u(y_t, h_t) = p_t^z(R^{-1}y_t, h_t)$.

By applying the transformation $Tz = (Rx, h)$ to Eq. (5), we know the new process $\{u_t^\theta\}_{t=0}^T$ satisfies the following SDE:

$$\begin{aligned} du = T\,dz &= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(R\epsilon_\theta^x(z, t), \epsilon_\theta^h(z, t))\Big]dt + g(t)\,d(R\tilde{w}^x, \tilde{w}^h) \\ &= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(\epsilon_\theta^x(Rx, h, t), \epsilon_\theta^h(Rx, h, t))\Big]dt + g(t)\,d(R\tilde{w}^x, \tilde{w}^h) && \text{// by equivariance} \\ &= \Big[f(t)u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\epsilon_\theta(u, t)\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), && \text{// the Wiener process is isotropic} \end{aligned}$$

and its initial distribution is $p_T^u(u_T) = p_T^u(y_T, h_T) = p_T^z(R^{-1}y_T, h_T) = p_T(R^{-1}y_T, h_T) = p_T(y_T, h_T) = p_T(u_T)$. Thus, the SDE of $\{u_t^\theta\}_{t=0}^T$ is exactly the same as that of $\{z_t^\theta\}_{t=0}^T$. This indicates that the distribution of $u_0^\theta$ is the same as the distribution of $z_0^\theta$, i.e., $p_0^u(u_0) = p_0^z(u_0) = p_\theta(u_0)$. Also note that $p_0^u(u_0) = p_0^z(R^{-1}y_0, h_0) = p_\theta(R^{-1}y_0, h_0)$. Thus, $p_\theta(u_0) = p_\theta(R^{-1}y_0, h_0)$, and consequently $p_\theta(Rx_0, h_0) = p_\theta(x_0, h_0)$. This means $p_\theta(z_0)$ is invariant to any orthogonal transformation, which includes rotational transformations as special cases.
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n \times n}$ is an orthogonal transformation. Taking the gradient of both sides of $E(Rx, h, c, t) = E(x, h, c, t)$ w.r.t. $x$, we get

$$R^\top \nabla_y E(y, h, c, t)\big|_{y=Rx} = \nabla_x E(x, h, c, t).$$

Multiplying both sides by $R$, we get

$$\nabla_y E(y, h, c, t)\big|_{y=Rx} = R\nabla_x E(x, h, c, t).$$

Let $\phi(z, c, t) = (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t))$. Then we have

$$\begin{aligned} \phi(Rx, h, c, t) &= (\nabla_y E(y, h, c, t) - \overline{\nabla_y E(y, h, c, t)},\; \nabla_h E(y, h, c, t))\big|_{y=Rx} \\ &= (R\nabla_x E(x, h, c, t) - \overline{R\nabla_x E(x, h, c, t)},\; \nabla_h E(Rx, h, c, t)) \\ &= (R(\nabla_x E(x, h, c, t) - \overline{\nabla_x E(x, h, c, t)}),\; \nabla_h E(x, h, c, t)). \end{aligned}$$

Thus, $\phi(z, c, t)$ is equivariant to $R$. Let $\hat{\epsilon}_\theta(z, c, t) = \epsilon_\theta(z, t) + \sqrt{\beta_{t|0}}\,\phi(z, c, t)$, which is a linear combination of two equivariant functions and is therefore also equivariant to $R$.
Then, Eq. (6) can be written as

$$dz = \Big[f(t)z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\,\hat{\epsilon}_\theta(z, c, t)\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T).$$

According to Theorem 1, its marginal distribution at time $t = 0$, i.e., $p_\theta(z_0|c)$, is invariant to any rotational transformation.
B SAMPLING
In Algorithm 1, we present the Euler-Maruyama method to sample from EEGSDE in Eq. (6).
Algorithm 1 Sample from EEGSDE using the Euler-Maruyama method
Require: number of steps $N$
  $\Delta t \leftarrow T/N$
  $z \leftarrow (x - \bar{x}, h)$, where $x \sim \mathcal{N}(0, 1)$, $h \sim \mathcal{N}(0, 1)$ {sample from the prior $p_T(z_T)$}
  for $i = N$ to $1$ do
    $t \leftarrow i\Delta t$
    $g_x \leftarrow \nabla_x E(z, c, t)$, $g_h \leftarrow \nabla_h E(z, c, t)$ {calculate the gradient of the energy function}
    $g \leftarrow (g_x - \bar{g}_x, g_h)$ {subtract the CoM of the gradient}
    $F \leftarrow f(t)z + g(t)^2\big(\frac{1}{\sqrt{\beta_{t|0}}}\epsilon_\theta(z, t) + g\big)$
    $\epsilon \leftarrow (\epsilon_x - \bar{\epsilon}_x, \epsilon_h)$, where $\epsilon_x \sim \mathcal{N}(0, 1)$, $\epsilon_h \sim \mathcal{N}(0, 1)$
    $z \leftarrow z - F\Delta t + g(t)\sqrt{\Delta t}\,\epsilon$ {update $z$ according to Eq. (6)}
  end for
  return $z$
C CONDITIONAL NOISE PREDICTION NETWORKS
In Eq. (6), we can alternatively use a conditional noise prediction network $\epsilon_\theta(z, c, t)$ for a stronger guidance, as follows:

$$dz = \Big[f(t)z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\,\epsilon_\theta(z, c, t) + \big(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\; \nabla_h E(z, c, t)\big)\Big)\Big]dt + g(t)\,d(\tilde{w}^x, \tilde{w}^h), \quad z_T \sim p_T(z_T).$$

The conditional noise prediction network is trained similarly to the unconditional one, using the following MSE loss:

$$\min_\theta \mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)}\, w(t)\,\|\epsilon_\theta(z_t, c, t) - \epsilon_t\|^2.$$
D PARAMETERIZATION OF NOISE PREDICTION NETWORKS
We parameterize the noise prediction network following Hoogeboom et al. (2022), and we provide the specific parameterization for completeness. For the unconditional model $\epsilon_\theta(z, t)$, we first concatenate each atom feature $h^i$ and $t$, which gives $h^{i\prime} = (h^i, t)$. Then we input $x$ and $h' = (h^{1\prime}, \ldots, h^{M\prime})$ to the EGNN as follows:

$$(a^x, a^{h'}) = \mathrm{EGNN}(x, h') - (x, \mathbf{0}).$$

Finally, we subtract the CoM of $a^x$, which gives the parameterization of $\epsilon_\theta(z, t)$:

$$\epsilon_\theta(z, t) = (a^x - \overline{a^x}, a^h),$$

where $a^h$ comes from discarding the last component of $a^{h'}$ that corresponds to the time.
For the conditional model ϵθ(z, c, t), we additionally concatenate c to the atom feature hi, i.e., hi′ = (hi, t, c), and other parts in the parameterization remain the same.
E DETAILS OF ENERGY FUNCTIONS
E.1 PARAMETERIZATION OF TIME-DEPENDENT MODELS
The time-dependent property prediction model $g(z_t, t)$ is parameterized using the second component in the output of the EGNN (see Section 3) followed by a decoder (Dec):

$$g(z_t, t) = \mathrm{Dec}(\mathrm{EGNN}^h(x_t, h_t')), \quad h_t' = \mathrm{concatenate}(h_t, t), \tag{11}$$
where the concatenation is performed on each atom feature, and the decoder is a small neural network based on Satorras et al. (2021b). This parameterization ensures that the energy function
E(zt, c, t) is invariant to orthogonal transformations, and thus the distribution of generated samples is also invariant according to Theorem 2.
Similarly, the time-dependent multi-label classifier is parameterized by an EGNN as

$$m(z_t, t) = \sigma(\mathrm{Dec}(\mathrm{EGNN}^h(x_t, h_t'))), \quad h_t' = \mathrm{concatenate}(h_t, t).$$

The multi-label classifier has the same backbone as the property prediction model in Eq. (11), except that the decoder outputs a vector of dimension $L$, and the sigmoid function $\sigma$ is adopted for multi-label classification. Similarly to Eq. (11), the EGNN in the multi-label classifier guarantees the invariance of the distribution of generated samples according to Theorem 2.
E.2 TRAINING OBJECTIVES OF ENERGY FUNCTIONS
Time-dependent property prediction model. Since the quantum property is a scalar, we train the time-dependent property prediction model $g(z_t, t)$ using the $\ell_1$ loss

$$\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)}\, |g(z_t, t) - c|,$$

where $t$ is uniformly sampled from $[0, T]$.

Time-dependent multi-label classifier. Since the fingerprint is a bitmap, predicting it can be viewed as a multi-label classification task. Thus, we use a time-dependent multi-label classifier $m(z_t, t)$ and train it using the binary cross-entropy loss

$$-\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} \sum_{l=1}^{L} c_l \log m_l(z_t, t) + (1 - c_l)\log(1 - m_l(z_t, t)),$$

where $t$ is uniformly sampled from $[0, T]$, and $m_l(z_t, t)$ is the $l$-th component of $m(z_t, t)$.
F EXPERIMENTAL DETAILS
F.1 HOW TO GENERATE MOLECULES TARGETED TO MULTIPLE QUANTUM PROPERTIES
When we want to generate molecules with $K$ quantum properties $c = (c_1, c_2, \ldots, c_K)$, we combine energy functions for single properties linearly as $E(z_t, c, t) = \sum_{k=1}^{K} E_k(z_t, c_k, t)$, where $E_k(z_t, c_k, t) = s_k|g_k(z_t, t) - c_k|^2$ is the energy function for the $k$-th property, $s_k$ is the scaling factor and $g_k(z_t, t)$ is the time-dependent property prediction model for the $k$-th property. Then we use the gradient of $E(z_t, c, t)$ to guide the reverse SDE as described in Eq. (6).
F.2 THE “U-BOUND” AND “#ATOMS” BASELINES
The “U-bound” and “#Atoms” baselines are from Hoogeboom et al. (2022). The “U-bound” baseline shuffles the labels in $D_b$ and then calculates the loss of $\phi_c$ on it, which can be regarded as an upper bound of the MAE. The “#Atoms” baseline predicts a quantum property $c$ using only the number of atoms in a molecule.
F.3 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
For the noise prediction network, we use the same setting as EDM (Hoogeboom et al., 2022) for a fair comparison, where the model is trained for ∼2000 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999.

The EGNN used in the energy function has 192 hidden features and 7 layers. We train it for 2000 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999.
During evaluation, we need to generate a set of molecules. Following EDM (Hoogeboom et al., 2022), we first sample the number of atoms in a molecule $M \sim p(M)$ and the property value $c \sim p(c|M)$ (or $c_1 \sim p(c_1|M), c_2 \sim p(c_2|M), \ldots, c_K \sim p(c_K|M)$ for multiple properties). Here $p(M)$ is the distribution of molecule sizes on the training data, and $p(c|M)$ is the distribution of the property on the training data. Then we generate a molecule given $M$ and $c$.
F.4 GENERATING MOLECULES WITH TARGET STRUCTURES
Computation of Tanimoto similarity. Let $S_g$ be the set of bits that are set to 1 in the fingerprint of a generated molecule, and $S_t$ be the set of bits that are set to 1 in the target structure. The Tanimoto similarity is defined as $|S_g \cap S_t| / |S_g \cup S_t|$, where $|\cdot|$ denotes the number of elements in a set.

Experimental details on QM9. For the backbone of the noise prediction network, we use a three-layer MLP with 768, 512 and 192 hidden nodes as an embedding to encode the fingerprint, add its output to the embedding of the atom features $h$, and feed the result into the following EGNN. The EGNN has 256 hidden features and 9 layers. We train it for 1500 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999. The energy function is trained for 1750 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer and an EMA with a rate of 0.9999. Its EGNN has 192 hidden features and 7 layers. For the baseline cG-SchNet, we reproduce it using the public code. Since the default data split in cG-SchNet is different from ours, we train cG-SchNet under the same data split as ours for a fair comparison. We report results at 200 epochs (there is no gain on the similarity metric after 150 epochs). We evaluate the Tanimoto similarity on the whole test set.
Experimental details on GEOM-Drug. We use the same data split of GEOM-Drug as Hoogeboom et al. (2022), where the training, validation and test sets include 554K, 70K and 70K samples respectively. We train the noise prediction network and the energy function on the training set. For the noise prediction network, we use the recommended hyperparameters of EDM (Hoogeboom et al., 2022), where the EGNN has 256 hidden features and 4 layers, and the other parts are the same as those in QM9. We train for 10 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer and an exponential moving average (EMA) with a rate of 0.9999. The backbone of the energy function is the same as that in QM9, and we train the energy function for 14 epochs. We evaluate the Tanimoto similarity on 10K molecules randomly selected from the test set.
G ADDITIONAL RESULTS
G.1 ABLATION STUDY ON NOISE PREDICTION NETWORKS AND ENERGY GUIDANCE
We perform an ablation study on the conditioning, i.e., using conditional or unconditional noise prediction networks, and on the energy guidance. When neither the conditioning nor the energy guidance is adopted, a single unconditional model cannot perform conditional generation, and therefore we only report its atom stability and molecule stability. As shown in Table 5, Table 6 and Table 7, the conditional noise prediction network improves the MAE compared to the unconditional one, and the energy guidance improves the MAE compared to a sole conditional model. Neither the conditioning nor the energy guidance affects the atom stability and the molecule stability much.
G.2 RESULTS ON NOVELTY, ATOM STABILITY AND MOLECULE STABILITY
For completeness, we also report novelty, atom stability and molecule stability on 10K generated molecules. Below we briefly introduce these metrics.
• Novelty (Simonovsky & Komodakis, 2018) is the proportion of generated molecules that do not appear in the training set. Specifically, let $G$ be the set of generated molecules; the novelty is calculated as $1 - \frac{|G \cap D_b|}{|G|}$. Note that the novelty is evaluated on $D_b$, since the reproduced conditional EDM and our method are trained on $D_b$. This leads to an inflated value compared to the one evaluated on the whole dataset (Hoogeboom et al., 2022).
• Atom stability (AS) (Hoogeboom et al., 2022) is the proportion of atoms that have the right valency. Molecule stability (MS) (Hoogeboom et al., 2022) is the proportion of generated molecules where all atoms are stable. We use the official implementation for the two metrics from the EDM paper (Hoogeboom et al., 2022).
As shown in Table 8 and Table 9, conditional EDM and EEGSDE with a small scaling factor have slightly better stability, and EEGSDE with a large scaling factor has slightly better novelty in general. The additional energy changes the distribution of generated molecules, which improves the novelty of generated molecules in general. Since there is a tradeoff between novelty and stability (see the caption of Table 5 in the EDM paper (Hoogeboom et al., 2022)), a slight decrease in stability is possible when the scaling factor is large.
G.3 VISUALIZATION OF THE EFFECT OF THE SCALING FACTOR
We visualize the effect of the scaling factor in Figure 4, where the generated structures align better as the scaling factor grows.
G.4 REPRODUCE
We compare our reproduced results and the original results of conditional EDM (Hoogeboom et al., 2022) in Table 10. The results are consistent.
G.5 GENERATING MOLECULES WITH TARGET STRUCTURES ON GEOM-DRUG
We plot generated molecules on GEOM-Drug in Figure 5; it can be observed that the atom types of molecules generated with EEGSDE often match the target better than those generated with conditional EDM.
G.6 THE DISTRIBUTIONS OF PROPERTIES
We plot the distribution of the properties of the training set, as well as the distribution of the properties of generated molecules (calculated by the Gaussian software). As shown in Figure 6, these distributions match well.

1. What is the focus of the paper regarding 3D molecule generation?
2. What are the strengths of the proposed diffusion model, particularly in incorporating various energy functions?
3. What are the weaknesses of the paper, especially in comparing with prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
This paper proposes a new diffusion model for generating 3D molecules under the guidance of an energy function. In order to incorporate various energy functions into the proposed framework, the authors theoretically demonstrate that those functions are invariant to transformations, which is an essential feature for 3D molecule representations. The proposed model outperforms its predecessor by a wide margin. Additionally, the presentation makes it easier to comprehend how the model performs as intended.
Strengths And Weaknesses
[Strength]
Incorporating energy functions is well supported by the theories.
In the task of generating molecules with desired quantum properties, the proposed framework significantly improves its baseline, EDM.
The paper is easy to read and well organized.
The visualizations help to understand how the model behaves in different settings.
[Weaknesses]
Although their approach is novel, the core model relies on the existing model EDM with just adding energy functions.
EDM seems to be generating more stable molecules. Insights behind this would be helpful.
Why not compare with the model by Gebauer et al. (2022) in the task of generating molecules with target structures? This task seems to be motivated by that paper.
The authors, however, include EDM as a baseline. How did you create a molecule with target structures using EDM? This comparison appears to be unfair because there are no explicit terms for target structures in EDM.
Clarity, Quality, Novelty And Reproducibility
The paper is easy to read and well organized.
The energy function is novel, while the other parts appear to be adaptations of existing ones. |
Molecule generation. Several works attempt to model molecules as 3D objects via deep generative models Nesterov et al. (2020); Gebauer et al. (2019); Satorras et al. (2021a); Hoffmann & Noé (2019); Hoogeboom et al. (2022). Among them, the most relevant one is the equivariant diffusion model (EDM) (Hoogeboom et al., 2022), which generates molecules in an iterative denoising manner. Benefiting from recent advances of diffusion models, EDM is stable to train and is able to generate high quality molecules. We provide a formal description of EDM in Section 3. Some other methods generate simplified representations of molecules, such as 1D SMILES strings (Weininger, 1988) and 2D graphs of molecules. These include variational autoencoders (Kusner et al., 2017; Dai et al., 2018; Jin et al., 2018; Simonovsky & Komodakis, 2018; Liu et al., 2018), normalizing flows (Madhawa et al., 2019; Zang & Wang, 2020; Luo et al., 2021), generative adversarial networks (Bian et al., 2019; Assouel et al., 2018), and autoregressive models (Popova et al., 2019; Flam-Shepherd et al., 2021). There are also methods on generating torsion angles in molecules. For
instance, Torsional Diffusion (Jing et al., 2022) employs the SDE formulation of diffusion models to model torsion angles in a given 2D molecular graph for the conformation generation task.
Inverse molecular design. Generative models have been applied to inverse molecular design. For example, conditional autoregressive models (Gebauer et al., 2022) and EDM (Hoogeboom et al., 2022) directly generate 3D molecules with desired quantum properties. Gebauer et al. (2019) also finetune pretrained generative models on a biased subset to generate 3D molecules with small HOMO-LUMO gaps. In contrast to these conditional generative models, our work further proposes a guidance method, a flexible way to control the generation process of molecules. Some other methods apply optimization methods to search molecules with desired properties, such as reinforcement learning (Zhou et al., 2019; You et al., 2018) and genetic algorithms (Jensen, 2019; Nigam et al., 2019). These optimization methods generally consider the 1D SMILES strings or 2D graphs of molecules, and the 3D information is not provided.
3 BACKGROUND
3D representation of molecules. Suppose a molecule has M atoms and let xi ∈ Rn (n = 3 in general) be the coordinate of the ith atom. The collection of coordinates x = (x1, . . . ,xM ) ∈ RMn determines the conformation of the molecule. In addition to the coordinate, each atom is also associated with an atom feature, e.g., the atom type. We use hi ∈ Rd to represent the atom feature of the ith atom, and use h = (h1, . . . ,hM ) ∈ RMd to represent the collection of atom features in a molecule. We use a tuple z = (x,h) to represent a molecule, which contains both the 3D geometry information and the atom feature information.
Equivariance and invariance. Suppose R is a transformation. A distribution p(x,h) is said to be invariant to R, if p(x,h) = p(Rx,h) holds for all x and h. Here Rx = (Rx1, . . . ,RxM ) is applied to each coordinate. A function (ax,ah) = f(x,h) that have two components ax,ah in its output is said to be equivariant to R, if f(Rx,h) = (Rax,ah) holds for all x and h. A function f(x,h) is said to be invariant to R, if f(Rx,h) = f(x,h) holds for all x and h.
Zero CoM subspace. It has been shown that the invariance to translational and rotational transformations is an important factor for the success of 3D molecule modeling (Köhler et al., 2020; Xu et al., 2022). However, the translational invariance is impossible for a distribution in the full space RMn (Satorras et al., 2021a). Nevertheless, we can view two collections of coordinates x and y as equivalent if x can be translated from y, since the translation doesn’t change the identity of a molecule. Such an equivalence relation partitions the whole space RMn into disjoint equivalence classes. Indeed, all elements in the same equivalence classes represent the same conformation, and we can use the element with zero center of mass (CoM), i.e., 1M ∑M i=1 x
i = 0, as the specific representation. These elements collectively form the zero CoM linear subspace X (Xu et al., 2022; Hoogeboom et al., 2022), and the rest of the paper always uses elements in X to represent conformations.
Equivariant graph neural network. Satorras et al. (2021b) propose equivariant graph neural networks (EGNNs), which incorporate the equivariance inductive bias into neural networks. Specifically, (ax,ah) = EGNN(x,h) is a composition of L equivariant convolutional layers. The l-th layer takes the tuple (xl,hl) as the input and outputs an updated version (xl+1,hl+1), as follows:
mij = Φm(h i l,h j l , ∥x i l − x j l ∥ 2 2, e ij ;θm), w ij = Φw(m ij ;θw), h i l+1 = Φh(h i l, ∑ j ̸=i wijmij ;θh),
xil+1 = x i l + ∑ j ̸=i
xil − x j l
∥xil − x j l ∥2 + 1
Φx(h i l,h j l , ∥x i l − x j l ∥ 2 2, e ij ;θx),
where Φm,Φw,Φh,Φx are parameterized by fully connected neural networks with parameters θm,θw,θh,θx respectively, and eij are optional feature attributes. We can verify that these layers are equivariant to orthogonal transformations, which include rotational transformations as special cases. As their composition, the EGNN is also equivariant to orthogonal transformations. Furthermore, let EGNNh(x,h) = ah, i.e., the second component in the output of the EGNN. Then EGNNh(x,h) is invariant to orthogonal transformations.
Equivariant diffusion models (EDM) (Hoogeboom et al., 2022) are a variant of diffusion models for molecule data. EDMs gradually inject noise to the molecule z = (x,h) via a forward process
q(z1:N |z0)= N∏ n=1 q(zn|zn−1), q(zn|zn−1) = NX(xn| √ αnxn−1, βn)N (hn| √ αnhn−1, βn), (1)
where αn and βn represent the noise schedule and satisfy αn + βn = 1, and NX represent the Gaussian distribution in the zero CoM subspace X (see its formal definition in Appendix A.2). Let αn = α1α2 · · ·αn, βn = 1− αn and β̃n = βnβn−1/βn. To generate samples, the forward process is reversed using a Markov chain:
p(z0:N )=p(zN ) N∏ n=1 p(zn−1|zn), p(zn−1|zn)=NX(xn−1|µxn(zn), β̃n)N(hn−1|µhn(zn), β̃n). (2)
Here $p(z_N) = \mathcal{N}_X(x_N|0, 1)\, \mathcal{N}(h_N|0, 1)$. The mean $\mu_n(z_n) = (\mu_n^x(z_n), \mu_n^h(z_n))$ is parameterized by a noise prediction network $\epsilon_\theta(z_n, n)$, which is trained using an MSE loss, as follows:
$$\mu_n(z_n) = \frac{1}{\sqrt{\alpha_n}}\Big(z_n - \frac{\beta_n}{\sqrt{\bar\beta_n}}\epsilon_\theta(z_n, n)\Big), \qquad \min_\theta\ \mathbb{E}_n \mathbb{E}_{q(z_0, z_n)} w(n)\, \|\epsilon_\theta(z_n, n) - \epsilon_n\|^2,$$
where $\epsilon_n = \frac{z_n - \sqrt{\bar\alpha_n}\, z_0}{\sqrt{\bar\beta_n}}$ is the standard Gaussian noise injected to $z_0$ and $w(n)$ is the weight term. Hoogeboom et al. (2022) show that the distribution of generated samples $p(z_0)$ is invariant to rotational transformations if the noise prediction network is equivariant to orthogonal transformations. In Section 4.2, we extend this proposition to the SDE formulation of molecular diffusion modeling.
Hoogeboom et al. (2022) also present a conditional version of EDM for inverse molecular design by adding an extra input of the condition c to the noise prediction network as ϵθ(zn, c, n).
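To make the forward process concrete, the sketch below shows how one noising step of Eq. (1) could be implemented, assuming (as in Appendix A.2) that a sample from $\mathcal{N}_X$ is obtained by drawing standard Gaussian noise and subtracting its CoM; the function names are placeholders.

```python
import torch

def sample_zero_com_gaussian(M, n):
    """Sample from N_X(0, 1): standard Gaussian noise in R^{Mn} with its
    center of mass subtracted, which projects onto the subspace X."""
    eps = torch.randn(M, n)
    return eps - eps.mean(dim=0, keepdim=True)

def edm_forward_step(x, h, alpha_n, beta_n):
    """One step of the forward process q(z_n | z_{n-1}) in Eq. (1), with
    alpha_n + beta_n = 1; a sketch assuming x already has zero CoM."""
    eps_x = sample_zero_com_gaussian(*x.shape)
    x_next = alpha_n ** 0.5 * x + beta_n ** 0.5 * eps_x
    h_next = alpha_n ** 0.5 * h + beta_n ** 0.5 * torch.randn_like(h)
    return x_next, h_next
```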
4 EQUIVARIANT ENERGY-GUIDED SDE
In this part, we introduce our equivariant energy-guided SDE (EEGSDE), as illustrated in Figure 1. EEGSDE is based on the SDE formulation of molecular diffusion modeling, which is described in Section 4.1 and Section 4.2. Then, we formally present our EEGSDE that incorporates an energy function to guide the molecular generation in Section 4.3. We provide derivations in Appendix A.
4.1 SDE IN THE PRODUCT SPACE
Recall that a molecule is represented as a tuple $z = (x, h)$, where $x = (x^1, \ldots, x^M) \in X$ represents the conformation and $h = (h^1, \ldots, h^M) \in \mathbb{R}^{Md}$ represents atom features. Here $X = \{x \in \mathbb{R}^{Mn} : \frac{1}{M}\sum_{i=1}^M x^i = 0\}$ is the zero CoM subspace mentioned in Section 3, and $d$ is the feature dimension. We first introduce a continuous-time diffusion process $\{z_t\}_{0 \le t \le T}$ in the product space $X \times \mathbb{R}^{Md}$, which gradually adds noise to $x$ and $h$. This can be described by the forward SDE:
$$\mathrm{d}z = f(t)\, z\, \mathrm{d}t + g(t)\, \mathrm{d}(w^x, w^h), \qquad z_0 \sim q(z_0), \quad (3)$$
where $f(t)$ and $g(t)$ are two scalar functions, $w^x$ and $w^h$ are independent standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and the SDE starts from the data distribution $q(z_0)$. Note that $w^x$ can be constructed by subtracting the CoM of a standard Wiener process $w$ in $\mathbb{R}^{Mn}$, i.e., $w^x = w - \bar{w}$, where $\bar{w} = \frac{1}{M}\sum_{i=1}^M w^i$ is the CoM of $w = (w^1, \ldots, w^M)$. It can be shown that the SDE has a linear Gaussian transition kernel $q(z_t|z_s) = q(x_t|x_s)\, q(h_t|h_s)$ from $z_s$ to $z_t$, where $0 \le s < t \le T$. Specifically, there exist two scalars $\alpha_{t|s}$ and $\beta_{t|s}$ such that $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\, x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\, h_s, \beta_{t|s})$. Here $\mathcal{N}_X$ denotes the Gaussian distribution in the subspace $X$; see Appendix A.2 for its formal definition. Indeed, the forward process of EDM in Eq. (1) is a discretization of the forward SDE in Eq. (3).
To generate molecules, we reverse Eq. (3) from $T$ to $0$. Such a time reversal forms another SDE, which can be represented in both the score function form and the noise prediction form:
$$\mathrm{d}z = \Big[f(t)\, z - g(t)^2 \underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\ \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)\, z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)} \epsilon_t}_{\text{noise prediction form}}\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim q_T(z_T). \quad (4)$$
Here $q_t(z)$ is the marginal distribution of $z_t$, $\nabla_x \log q_t(z)$ is the gradient of $\log q_t(z)$ w.r.t. $x$,¹ $\overline{\nabla_x \log q_t(z)} = \frac{1}{M}\sum_{i=1}^M \nabla_{x^i} \log q_t(z)$ is the CoM of $\nabla_x \log q_t(z)$, $\mathrm{d}t$ is the infinitesimal negative timestep, $\tilde{w}^x$ and $\tilde{w}^h$ are independent reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\, z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected to $z_0$. Compared to the original SDE introduced by Song et al. (2020), our reverse SDE in Eq. (4) additionally subtracts the CoM of $\nabla_x \log q_t(z)$. This ensures that $x_t$ always stays in the zero CoM subspace as time flows back. To sample from the reverse SDE in Eq. (4), we use a noise prediction network $\epsilon_\theta(z_t, t)$ to estimate $\mathbb{E}_{q(z_0|z_t)} \epsilon_t$ by minimizing the MSE loss $\min_\theta \mathbb{E}_t \mathbb{E}_{q(z_0, z_t)} w(t)\, \|\epsilon_\theta(z_t, t) - \epsilon_t\|^2$, where $t$ is uniformly sampled from $[0, T]$ and $w(t)$ controls the weight of the loss term at time $t$. Note that the noise $\epsilon_t$ lies in the product space $X \times \mathbb{R}^{Md}$, so we subtract the CoM of the predicted noise of $x_t$ to ensure that $\epsilon_\theta(z_t, t)$ is also in the product space.
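A minimal sketch of this loss computation is given below; only the loss is shown, and running the network to produce the predicted noises is assumed to happen elsewhere. The CoM subtraction implements the projection described above.

```python
import torch

def noise_prediction_loss(pred_x, pred_h, eps_x, eps_h, w_t=1.0):
    """MSE loss for the noise prediction network (a sketch). The CoM of the
    predicted coordinate noise is subtracted so the prediction stays in the
    product space X x R^{Md}, matching the target noise eps_t."""
    pred_x = pred_x - pred_x.mean(dim=0, keepdim=True)
    sq = ((pred_x - eps_x) ** 2).sum() + ((pred_h - eps_h) ** 2).sum()
    return w_t * sq
```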
Substituting ϵθ(zt, t) into Eq. (4), we get an approximate reverse-time SDE parameterized by θ:
$$\mathrm{d}z = \Big[f(t)\, z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\, \epsilon_\theta(z, t)\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim p_T(z_T), \quad (5)$$
where $p_T(z_T) = \mathcal{N}_X(x_T|0, 1)\, \mathcal{N}(h_T|0, 1)$ is a Gaussian prior in the product space that approximates $q_T(z_T)$. We define $p_\theta(z_0)$ as the marginal distribution of Eq. (5) at time $t = 0$, which is the distribution of our generated samples. Similarly to the forward process, the reverse process of EDM in Eq. (2) is a discretization of the reverse SDE in Eq. (5).
4.2 EQUIVARIANT SDE
To leverage the geometric symmetry in 3D molecular conformation, $p_\theta(z_0)$ should be invariant to translational and rotational transformations. As mentioned in Section 3, the translational invariance of $p_\theta(z_0)$ is already satisfied by considering the zero CoM subspace. The rotational invariance can be satisfied if the noise prediction network is equivariant to orthogonal transformations, as summarized in the following theorem:

Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n \times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
As mentioned in Section 3, the EGNN satisfies the equivariance constraint, and we parameterize ϵθ(zt, t) using an EGNN following Hoogeboom et al. (2022). See details in Appendix D.
4.3 EQUIVARIANT ENERGY-GUIDED SDE
Now we describe equivariant energy-guided SDE (EEGSDE), which guides the generated molecules of Eq. (5) towards desired properties c by leveraging a time-dependent energy function E(z, c, t):
$$\mathrm{d}z = \Big[f(t)\, z + g(t)^2 \Big(\frac{1}{\sqrt{\beta_{t|0}}}\, \epsilon_\theta(z, t) + \underbrace{\big(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\ \nabla_h E(z, c, t)\big)}_{\text{energy gradient taken in the product space}}\Big)\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim p_T(z_T), \quad (6)$$
¹While $q_t(z)$ is defined on $X \times \mathbb{R}^{Md}$, its domain can be extended to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$ and the gradient is valid. See Remark 1 in Appendix A.2 for details.
which defines a distribution $p_\theta(z_0|c)$ conditioned on the property $c$. Here the CoM $\overline{\nabla_x E(z, c, t)}$ of the gradient is subtracted to keep the SDE in the product space, which ensures the translational invariance of $p_\theta(z_0|c)$. Besides, the rotational invariance is satisfied by using an energy invariant to orthogonal transformations, as summarized in the following theorem:
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Note that we can also use a conditional model $\epsilon_\theta(z, c, t)$ in Eq. (6); see Appendix C for details. To sample from $p_\theta(z_0|c)$, various solvers can be used for Eq. (6), such as the Euler-Maruyama method (Song et al., 2020) and the Analytic-DPM sampler (Bao et al., 2022b;a). We present the Euler-Maruyama method as an example in Algorithm 1 in Appendix B.
4.4 HOW TO DESIGN THE ENERGY FUNCTION
Our EEGSDE is a general framework that can be applied to various applications by specifying different energy functions $E(z, c, t)$. For example, we can design the energy function according to the consistency between the molecule $z$ and the property $c$, where a low energy indicates good consistency. As the generation process in Eq. (6) proceeds, the gradient of the energy function encourages generated molecules to have low energy, and consequently good consistency. Thus, we can expect the generated molecule $z$ to align well with the property $c$. In the rest of the paper, we specify the choice of energy functions, and show that these energies improve controllable molecule generation targeted to quantum properties, molecular structures, and even a combination of them.
Remark. The term “energy” in this paper refers to a general notion in statistical machine learning: a scalar function that captures dependencies between input variables (LeCun et al., 2006). Thus, the “energy” in this paper can be set to an MSE loss when we want to capture how well the molecule aligns with the property (as done in Section 5). Also, the “energy” in this paper does not exclude the potential energy or free energy in chemistry; these might be applicable when we want to generate molecules with small potential energy or free energy.
5 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
Let $c \in \mathbb{R}$ be a certain quantum property. To generate molecules with the desired property, we set the energy function as the squared error between the predicted property and the desired property, $E(z_t, c, t) = s|g(z_t, t) - c|^2$, where $g(z_t, t)$ is a time-dependent property prediction model and $s$ is the scaling factor controlling the strength of the guidance. Specifically, $g(z_t, t)$ can be parameterized by equivariant models such as EGNN (Satorras et al., 2021b), SE3-Transformer (Fuchs et al., 2020) and DimeNet (Klicpera et al., 2020) to ensure the invariance of $E(z_t, c, t)$, as long as they perform well in the task of property prediction. In this paper we consider EGNN. We provide details on the parameterization and the training objective in Appendix E. We can also generate molecules targeted to multiple quantum properties by combining energy functions linearly (see details in Appendix F.1).
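As a sketch, the gradient of this energy needed in Eq. (6) can be obtained by automatic differentiation; the property predictor `g` and its calling convention below are illustrative assumptions.

```python
import torch

def property_energy_grad(g, x, h, t, c, s):
    """Gradients of E(z, c, t) = s * |g(z, t) - c|^2 w.r.t. x and h (a
    sketch). The CoM of the coordinate gradient is subtracted afterwards,
    as required by Eq. (6)."""
    x = x.detach().requires_grad_(True)
    h = h.detach().requires_grad_(True)
    energy = s * (g(x, h, t) - c) ** 2
    gx, gh = torch.autograd.grad(energy.sum(), (x, h))
    gx = gx - gx.mean(dim=0, keepdim=True)  # subtract the CoM
    return gx, gh
```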
5.1 SETUP
We evaluate on QM9 (Ramakrishnan et al., 2014), which contains quantum properties and coordinates of ∼130k molecules with up to nine heavy atoms from (C, N, O, F). Following EDM, we split QM9 into training, validation and test sets, which include 100K, 18K and 13K samples respectively. The training set is further divided equally into two non-overlapping halves $D_a$ and $D_b$. The noise prediction network and the time-dependent property prediction model of the energy function are trained separately on $D_b$. By default, EEGSDE uses a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6), since we find it generates more accurate molecules than an unconditional one (see Appendix G.1 for an ablation study). See more details in Appendix F.3.
Evaluation metric: Following EDM, we use the mean absolute error (MAE) to evaluate how well generated molecules align with the desired property. Specifically, we train another property prediction model $\phi_p$ (Satorras et al., 2021b) on $D_a$, and the MAE is calculated as $\frac{1}{K}\sum_{i=1}^K |\phi_p(z_i) - c_i|$, where $z_i$ is a generated molecule and $c_i$ is its desired property. We generate $K = 10{,}000$ samples for evaluation with $\phi_p$. For fairness, the property prediction model $\phi_p$ is different from the $g(z_t, t)$ used in the energy function, and they are trained on the two non-overlapping training subsets $D_a$ and $D_b$ respectively. This ensures that no information leak occurs when evaluating our EEGSDE. To further verify the effectiveness of EEGSDE, we also calculate the MAE without relying on a neural network. Specifically, we use the Gaussian software (which calculates properties according to theories of quantum chemistry without using neural networks) to calculate the properties of 100 generated molecules. For completeness, we also report the novelty, the atom stability and the molecule stability following Hoogeboom et al. (2022) in Appendix G.2, although they are not our main focus.
Baseline: The most direct baseline is conditional EDM, which only adopts a conditional noise prediction network. We also compare two additional baselines “U-bound” and “#Atoms” from Hoogeboom et al. (2022) (see Appendix F.2 for details).
5.2 RESULTS
Following Hoogeboom et al. (2022), we consider six quantum properties in QM9: polarizability $\alpha$, highest occupied molecular orbital energy $\varepsilon_{\mathrm{HOMO}}$, lowest unoccupied molecular orbital energy $\varepsilon_{\mathrm{LUMO}}$, HOMO-LUMO gap $\Delta\varepsilon$, dipole moment $\mu$ and heat capacity $C_v$. First, we generate molecules targeted to one of these six properties. As shown in Table 1, with the energy guidance, our EEGSDE has a significantly better MAE than the conditional EDM on all properties. Remarkably, with a proper scaling factor $s$, the MAE of EEGSDE is reduced by more than 25% compared to conditional EDM on the properties $\Delta\varepsilon$ and $\varepsilon_{\mathrm{LUMO}}$, and by more than 30% on $\mu$. Moreover, as shown in Table 2, our EEGSDE still has a better MAE under evaluation by the Gaussian software, which further verifies its effectiveness. We further generate molecules targeted to multiple quantum properties by combining energy functions linearly. As shown in Table 3, our EEGSDE again has a significantly better MAE than the conditional EDM.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the desired properties than molecules generated by the conditional EDM baseline (for both the single-property and multiple-property cases). As a consequence, EEGSDE is able to explore the chemical space in a guided way to generate promising molecules for downstream applications such as virtual screening, which may benefit drug and material discovery.
6 GENERATING MOLECULES WITH TARGET STRUCTURES
Following Gebauer et al. (2022), we use the molecular fingerprint to encode the structure information of a molecule. The molecular fingerprint $c = (c_1, \ldots, c_L)$ is a series of bits that capture the presence or absence of substructures in the molecule. Specifically, a substructure is mapped to a specific position $l$ in the bitmap, and the corresponding bit $c_l$ is 1 if the substructure exists in the molecule and 0 otherwise. To generate molecules with a specific structure (encoded by the fingerprint $c$), we set the energy function as the squared error $E(z_t, c, t) = s\|m(z_t, t) - c\|^2$ between a time-dependent multi-label classifier $m(z_t, t)$ and $c$. Here $s$ is the scaling factor, and $m(z_t, t)$ is trained with the binary cross entropy loss to predict the fingerprint, as detailed in Appendix E.2. Note that the choice of the energy function is flexible and can differ from the training loss of $m(z_t, t)$. In initial experiments, we also tried the binary cross entropy loss for the energy function, but we found that it makes the generation process unstable. The multi-label classifier $m(z_t, t)$ is parameterized in a similar way to the property prediction model $g(z_t, t)$ in Section 5, and we present details in Appendix E.1.
6.1 SETUP
We evaluate on QM9 and GEOM-Drug. We train our method (including the noise prediction network and the multi-label classifier) on the whole training set. By default, we use a conditional noise prediction network $\epsilon_\theta(z, c, t)$ in Eq. (6) for better performance. See more details in Appendix F.4.
Evaluation metric: To measure how well the structure of a generated molecule aligns with the target one, we use the Tanimoto similarity (Gebauer et al., 2022), which captures the similarity between structures by comparing their fingerprints.
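A minimal sketch of this metric, using the set definition given in Appendix F.4, follows; encoding fingerprints as sequences of 0/1 bits is an assumption for illustration.

```python
def tanimoto_similarity(fp_gen, fp_target):
    """Tanimoto similarity |S_g ∩ S_t| / |S_g ∪ S_t| between the bit sets
    of a generated and a target fingerprint (see Appendix F.4)."""
    s_g = {l for l, bit in enumerate(fp_gen) if bit}
    s_t = {l for l, bit in enumerate(fp_target) if bit}
    union = s_g | s_t
    return len(s_g & s_t) / len(union) if union else 1.0
```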
Baseline: The most direct baseline is conditional EDM (Hoogeboom et al., 2022), which only adopts a conditional noise prediction network without the guidance of an energy model. We also consider cG-SchNet (Gebauer et al., 2022), which generates molecules in an autoregressive manner.
6.2 RESULTS
As shown in Table 4, EEGSDE significantly improves the similarity between target structures and generated structures compared to conditional EDM and cG-SchNet on QM9. Note also that, within a proper range, a larger scaling factor results in a better similarity, and EEGSDE with $s = 1$ improves the similarity by more than 10% compared to conditional EDM. In Figure 2, we plot generated molecules of conditional EDM and EEGSDE ($s = 1$) targeted to specific structures, where our EEGSDE aligns better with them. We further visualize the effect of the scaling factor in Appendix G.3, where the generated structures align better as the scaling factor grows. These results demonstrate that our EEGSDE captures the structure information in molecules well.
We also perform experiments on the more challenging GEOM-Drug (Axelrod & Gomez-Bombarelli, 2022) dataset, and we train the conditional EDM baseline following the default setting of Hoogeboom et al. (2022). As shown in Table 4, we find that the conditional EDM baseline has a similarity of 0.165, which is much lower than the value on QM9. We hypothesize this is because molecules in GEOM-Drug have many more atoms than those in QM9 and a more complex structure, so the default setting in Hoogeboom et al. (2022) is suboptimal. For example, the conditional EDM on GEOM-Drug has fewer parameters than the conditional EDM on QM9 (15M vs. 26M), which is insufficient to capture the structure information. Nevertheless, our EEGSDE still improves the similarity by ∼17%. We provide generated molecules on GEOM-Drug in Appendix G.5.

Finally, we demonstrate that our EEGSDE is a flexible framework to generate molecules targeted to multiple properties, which is often the practical case. We additionally target the quantum property $\alpha$ (polarizability) on QM9 by combining the energy function for structures in this section and the energy function for quantum properties in Section 5. Here we choose $\alpha = 100$ Bohr³, which is a relatively large value, and we expect it to encourage less isometrically shaped structures. As shown in Figure 3, the generated molecule aligns better with the target structure as the scaling factor $s_1$ grows; meanwhile, a ring substructure in the generated molecule vanishes as the scaling factor for polarizability $s_2$ grows, leading to a less isometrically shaped structure, as expected.
Conclusion: These results suggest that molecules generated by our EEGSDE align better with the target structure than molecules generated by the conditional EDM baseline. Besides, EEGSDE can generate molecules targeted to both specific structures and desired quantum properties by combining energy functions linearly. As a result, EEGSDE may benefit practical cases in molecular design when multiple properties should be considered at the same time.
7 CONCLUSION
This work presents equivariant energy-guided SDE (EEGSDE), a flexible framework for controllable 3D molecule generation under the guidance of an energy function in diffusion models. EEGSDE naturally exploits the geometric symmetry in 3D molecular conformation, as long as the energy function is invariant to orthogonal transformations. EEGSDE significantly improves the conditional EDM baseline in inverse molecular design targeted to quantum properties and molecular structures. Furthermore, EEGSDE is able to generate molecules with multiple target properties by combining the corresponding energy functions linearly.
ACKNOWLEDGMENTS
We thank Han Guo and Zhen Jia for their help with their expertise in chemistry. This work was supported by NSF of China Projects (Nos. 62061136001, 61620106010, 62076145, U19B2034, U1811461, U19A2081, 6197222); Beijing Outstanding Young Scientist Program No. BJJWZYJH012019100020098; a grant from Tsinghua Institute for Guo Qiang; the High Performance Computing Center, Tsinghua University; the Fundamental Research Funds for the Central Universities; and the Research Funds of Renmin University of China (22XNKJ13). J.Z. was also supported by the XPlorer Prize. C. Li was also sponsored by the Beijing Nova Program.
ETHICS STATEMENT
Inverse molecular design is critical in fields like material science and drug discovery. Our EEGSDE is a flexible framework for inverse molecular design and thus might benefit these fields. Currently the negative consequences are not obvious.
REPRODUCIBILITY STATEMENT
Our code is included in the supplementary material. The implementation of our experiment is described in Section 5 and Section 6. Further details such as the hyperparameters of training and model backbones are provided in Appendix F. We provide complete proofs and derivations of all theoretical results in Appendix A.
A DERIVATIONS
A.1 ZERO COM SUBSPACE
The zero CoM subspace $X = \{x \in \mathbb{R}^{Mn} : \sum_{i=1}^M x^i = 0\}$ is an $(M-1)n$ dimensional subspace of $\mathbb{R}^{Mn}$. Therefore, there exists an isometric isomorphism $\phi$ from $\mathbb{R}^{(M-1)n}$ to $X$, i.e., $\phi$ is a linear bijection from $\mathbb{R}^{(M-1)n}$ to $X$, and $\|\phi(\hat{x})\|_2 = \|\hat{x}\|_2$ for all $\hat{x} \in \mathbb{R}^{(M-1)n}$. We use $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ to represent the matrix corresponding to $\phi$, so we have $\phi(\hat{x}) = A_\phi \hat{x}$. An important property of the isometric isomorphism is that $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$. We show the proof as follows.

Proposition 1. Suppose $\phi$ is an isometric isomorphism from $\mathbb{R}^{(M-1)n}$ to $X$, and let $A_\phi \in \mathbb{R}^{Mn \times (M-1)n}$ be the matrix corresponding to $\phi$. Then we have $A_\phi A_\phi^\top x = x - \bar{x}$ for all $x \in \mathbb{R}^{Mn}$, where $\bar{x} = \frac{1}{M}\sum_{i=1}^M x^i$.
Proof. We consider a new subspace of $\mathbb{R}^{Mn}$, $X^\perp = \{x \in \mathbb{R}^{Mn} : x \perp X\}$, i.e., the orthogonal complement of $X$. We can verify that $X^\perp = \{x \in \mathbb{R}^{Mn} : x^1 = x^2 = \cdots = x^M\}$, and $X^\perp$ is $n$ dimensional. Thus, there exists an isometric isomorphism $\psi$ from $\mathbb{R}^n$ to $X^\perp$. Let $A_\psi \in \mathbb{R}^{Mn \times n}$ represent the matrix corresponding to $\psi$. Then we define $\lambda(\hat{x}, \hat{y}) = \phi(\hat{x}) + \psi(\hat{y})$, where $\hat{x} \in \mathbb{R}^{(M-1)n}$ and $\hat{y} \in \mathbb{R}^n$. The image of $\lambda$ is $\{x + y : x \in X, y \in X^\perp\} = \mathbb{R}^{Mn}$. Therefore, $\lambda$ is a linear bijection from $\mathbb{R}^{Mn}$ to $\mathbb{R}^{Mn}$, and the matrix corresponding to $\lambda$ is $A_\lambda = [A_\phi, A_\psi]$. Furthermore, $\lambda$ is an isometric isomorphism, since $\|\lambda(\hat{x}, \hat{y})\|^2 = \|\phi(\hat{x})\|^2 + \|\psi(\hat{y})\|^2 + 2\langle\phi(\hat{x}), \psi(\hat{y})\rangle = \|\hat{x}\|^2 + \|\hat{y}\|^2 = \|(\hat{x}^\top, \hat{y}^\top)^\top\|^2$. This means $A_\lambda$ is an orthogonal transformation. Therefore, $A_\lambda A_\lambda^\top = A_\phi A_\phi^\top + A_\psi A_\psi^\top = I$ and $A_\phi A_\phi^\top x + A_\psi A_\psi^\top x = x$. Since $A_\phi A_\phi^\top x \in X$ and $A_\psi A_\psi^\top x \in X^\perp$, we can conclude that $A_\phi A_\phi^\top x$ is the orthogonal projection of $x$ onto $X$, which is exactly $x - \bar{x}$.
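Proposition 1 can also be checked numerically. The sketch below builds one concrete $A_\phi$ as an orthonormal basis of the null space of the CoM constraint (the particular basis is an arbitrary choice, i.e., an assumption) and verifies $A_\phi A_\phi^\top x = x - \bar{x}$.

```python
import numpy as np

M, n = 5, 3
# Rows of C sum each coordinate axis over atoms; its null space is X.
C = np.kron(np.ones((1, M)), np.eye(n))    # (n, Mn)
_, _, Vt = np.linalg.svd(C)
A_phi = Vt[n:].T                           # orthonormal basis of X, (Mn, (M-1)n)

x = np.random.randn(M * n)
proj = A_phi @ (A_phi.T @ x)               # A_phi A_phi^T x
x_centered = (x.reshape(M, n) - x.reshape(M, n).mean(axis=0)).reshape(-1)
assert np.allclose(proj, x_centered)       # equals x - x_bar
```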
Since $\mathbb{R}^{(M-1)n}$ and $X$ are two intrinsically equivalent spaces, an equivalence of distributions in these two spaces can also be established, as shown in the following propositions.

Proposition 2. Suppose $x$ is a random vector distributed in $X$, and $\hat{x} = \phi^{-1}(x)$ is its equivalent representation in $\mathbb{R}^{(M-1)n}$. If $x \sim q(x)$, then $\hat{x} \sim \hat{q}(\hat{x})$, where $\hat{q}(\hat{x}) = q(\phi(\hat{x}))$.

Proposition 3. Suppose $x, y$ are two random vectors distributed in $X$, and $\hat{x} = \phi^{-1}(x)$, $\hat{y} = \phi^{-1}(y)$ are their equivalent representations in $\mathbb{R}^{(M-1)n}$. If $x|y \sim q(x|y)$, then $\hat{x}|\hat{y} \sim \hat{q}(\hat{x}|\hat{y})$, where $\hat{q}(\hat{x}|\hat{y}) = q(\phi(\hat{x})|\phi(\hat{y}))$.
A.2 SDE IN THE PRODUCT SPACE
Definition 1 (Gaussian distributions in the zero CoM subspace). Suppose $\mu \in X$. Let $\mathcal{N}_X(x|\mu, \sigma^2) := (2\pi\sigma^2)^{-(M-1)n/2} \exp\big(-\frac{1}{2\sigma^2}\|x - \mu\|_2^2\big)$, which is the isotropic Gaussian distribution with mean $\mu$ and variance $\sigma^2$ in the zero CoM subspace $X$.
Proposition 4 (Transition kernels of an SDE in the zero CoM subspace). Suppose $\mathrm{d}x = f(t)\, x\, \mathrm{d}t + g(t)\, \mathrm{d}w^x$, $x_0 \sim q(x_0)$ is an SDE in the zero CoM subspace. Then the transition kernel from $x_s$ to $x_t$ ($0 \le s < t \le T$) can be expressed as $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\, x_s, \beta_{t|s})$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\mathrm{d}\tau)$ and $\beta_{t|s} = \alpha_{t|s} \int_s^t g(\tau)^2/\alpha_{\tau|s}\, \mathrm{d}\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$.
Proof. First, we map the process $\{x_t\}_{t=0}^T$ in the zero CoM subspace to the equivalent space $\mathbb{R}^{(M-1)n}$ through the isometric isomorphism $\phi$ introduced in Appendix A.1. This produces a new process $\{\hat{x}_t\}_{t=0}^T$, where $\hat{x}_t = \phi^{-1}(x_t)$. By applying $\phi^{-1}$ to the SDE in the zero CoM subspace, we know $\hat{x}_t = \phi^{-1}(x_t)$ satisfies the following SDE in $\mathbb{R}^{(M-1)n}$:
$$\mathrm{d}\hat{x} = f(t)\, \hat{x}\, \mathrm{d}t + g(t)\, \mathrm{d}\hat{w}, \qquad \hat{x}_0 \sim \hat{q}(\hat{x}_0), \quad (7)$$
where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n}$ and $\hat{q}(\hat{x}_0) = q(\phi(\hat{x}_0))$. According to Song et al. (2020), the transition kernel of Eq. (7) from $\hat{x}_s$ to $\hat{x}_t$ ($s < t$) can be expressed as $\hat{q}(\hat{x}_t|\hat{x}_s) = \mathcal{N}(\hat{x}_t|\sqrt{\alpha_{t|s}}\, \hat{x}_s, \beta_{t|s} I)$, where $\alpha_{t|s} = \exp(2\int_s^t f(\tau)\mathrm{d}\tau)$ and $\beta_{t|s} = \alpha_{t|s} \int_s^t g(\tau)^2/\alpha_{\tau|s}\, \mathrm{d}\tau$ are two scalars determined by $f(\cdot)$ and $g(\cdot)$. According to Proposition 3, the transition kernel $q(x_t|x_s)$ from $x_s$ to $x_t$ satisfies
$$\begin{aligned} q(x_t|x_s) &= \hat{q}(\phi^{-1}(x_t)|\phi^{-1}(x_s)) = (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\big\|\phi^{-1}(x_t) - \sqrt{\alpha_{t|s}}\, \phi^{-1}(x_s)\big\|_2^2\Big) \\ &= (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\big\|\phi^{-1}(x_t - \sqrt{\alpha_{t|s}}\, x_s)\big\|_2^2\Big) && \text{// linearity of } \phi^{-1} \\ &= (2\pi\beta_{t|s})^{-(M-1)n/2} \exp\Big(-\frac{1}{2\beta_{t|s}}\big\|x_t - \sqrt{\alpha_{t|s}}\, x_s\big\|_2^2\Big) && \text{// norm preservation of } \phi^{-1} \\ &= \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\, x_s, \beta_{t|s}). \end{aligned}$$
Proposition 5 (Transition kernels of the SDE in the product space). Suppose $\mathrm{d}z = f(t)\, z\, \mathrm{d}t + g(t)\, \mathrm{d}(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then the transition kernel from $z_s$ to $z_t$ ($0 \le s < t \le T$) can be expressed as $q(z_t|z_s) = q(x_t|x_s)\, q(h_t|h_s)$, where $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\, x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\, h_s, \beta_{t|s})$. Here $\alpha_{t|s}$ and $\beta_{t|s}$ are defined as in Proposition 4.
Proof. Since $w^x$ and $w^h$ are independent of each other, the transition kernel from $z_s$ to $z_t$ can be factorized as $q(z_t|z_s) = q(x_t|x_s)\, q(h_t|h_s)$, where $q(x_t|x_s)$ is the transition kernel of $\mathrm{d}x = f(t)\, x\, \mathrm{d}t + g(t)\, \mathrm{d}w^x$ and $q(h_t|h_s)$ is the transition kernel of $\mathrm{d}h = f(t)\, h\, \mathrm{d}t + g(t)\, \mathrm{d}w^h$. According to Proposition 4 and Song et al. (2020), we have $q(x_t|x_s) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|s}}\, x_s, \beta_{t|s})$ and $q(h_t|h_s) = \mathcal{N}(h_t|\sqrt{\alpha_{t|s}}\, h_s, \beta_{t|s})$.
Remark 1. The marginal distribution of $z_t$ is
$$q_t(z_t) = \int_X \int_{\mathbb{R}^{Md}} q(z_0)\, q(x_t|x_0)\, q(h_t|h_0)\, \mathrm{d}h_0\, \lambda(\mathrm{d}x_0),$$
where $\lambda$ is the Lebesgue measure in the zero CoM subspace $X$. While $q_t(z_t)$ is a distribution in $X \times \mathbb{R}^{Md}$, it has a natural differentiable extension to $\mathbb{R}^{Mn} \times \mathbb{R}^{Md}$, since $q(x_t|x_0) = \mathcal{N}_X(x_t|\sqrt{\alpha_{t|0}}\, x_0, \beta_{t|0})$ has a differentiable extension to $\mathbb{R}^{Mn}$ according to Definition 1. Thus, we can take the gradient of $q_t(z_t)$ w.r.t. $x_t$ in the whole space $\mathbb{R}^{Mn}$.

Proposition 6 (Time reversal of the SDE in the product space). Suppose $\mathrm{d}z = f(t)\, z\, \mathrm{d}t + g(t)\, \mathrm{d}(w^x, w^h)$, $z_0 \sim q(z_0)$ is the SDE in the product space $X \times \mathbb{R}^{Md}$, as introduced in Eq. (3). Then its time reversal satisfies the following reverse-time SDE, which can be represented in both the score function form and the noise prediction form:
$$\mathrm{d}z = \Big[f(t)\, z - g(t)^2 \underbrace{\big(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\ \nabla_h \log q_t(z)\big)}_{\text{score function form}}\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h)$$
$$= \Big[f(t)\, z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}} \underbrace{\mathbb{E}_{q(z_0|z_t)} \epsilon_t}_{\text{noise prediction form}}\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim q_T(z_T),$$
where $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\, z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected to $z_0$.
Furthermore, we have $\nabla_x \log q_t(x) - \overline{\nabla_x \log q_t(x)} = -\frac{1}{\sqrt{\beta_{t|0}}}\, \mathbb{E}_{q(x_0|x_t)} \epsilon_t$, where $\epsilon_t = \frac{x_t - \sqrt{\alpha_{t|0}}\, x_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in the zero CoM subspace injected to $x_0$.
Proof. Let $\hat{z}_t = (\hat{x}_t, h_t)$, where $\hat{x}_t = \phi^{-1}(x_t)$, as introduced in the proof of Proposition 4. Then $\{\hat{z}_t\}_{t=0}^T$ is a process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ determined by the following SDE:
$$\mathrm{d}\hat{z} = f(t)\, \hat{z}\, \mathrm{d}t + g(t)\, \mathrm{d}\hat{w}, \qquad \hat{z}_0 \sim \hat{q}(\hat{z}_0), \quad (8)$$
where $\hat{w}$ is the standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$ and $\hat{q}(\hat{z}_0) = q(\phi(\hat{x}_0), h_0)$. According to Song et al. (2020), Eq. (8) has a time reversal:
$$\mathrm{d}\hat{z} = [f(t)\, \hat{z} - g(t)^2 \nabla_{\hat{z}} \log \hat{q}_t(\hat{z})]\mathrm{d}t + g(t)\, \mathrm{d}\tilde{w}, \qquad \hat{z}_T \sim \hat{q}_T(\hat{z}_T), \quad (9)$$
where $\hat{q}_t(\hat{z})$ is the marginal distribution of $\hat{z}_t$, which satisfies $\hat{q}_t(\hat{z}_t) = q_t(\phi(\hat{x}_t), h_t)$ according to Proposition 2, and $\tilde{w}$ is the reverse-time standard Wiener process in $\mathbb{R}^{(M-1)n} \times \mathbb{R}^{Md}$. Then we apply the linear transformation $T\hat{z} = (\phi(\hat{x}), h)$ to Eq. (9), which maps $\hat{z}_t$ back to $z_t$. This yields
$$\mathrm{d}z = [f(t)\, z - g(t)^2\, T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim q_T(z_T). \quad (10)$$
Here $\tilde{w}^x$ and $\tilde{w}^h$ are reverse-time standard Wiener processes in $X$ and $\mathbb{R}^{Md}$ respectively, and $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}))$ can be expressed as $(\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})), \nabla_h \log \hat{q}_t(\hat{z}))$. Since $\nabla_{\hat{x}} \log \hat{q}_t(\hat{z}) = A_\phi^\top \nabla_x \log q_t(x, h) = A_\phi^\top \nabla_x \log q_t(z)$, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = A_\phi A_\phi^\top \nabla_x \log q_t(z)$, where $A_\phi$ represents the matrix corresponding to $\phi$. According to Proposition 1, we have $\phi(\nabla_{\hat{x}} \log \hat{q}_t(\hat{z})) = \nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)}$. Besides, $\nabla_h \log \hat{q}_t(\hat{z}) = \nabla_h \log q_t(z)$. Thus, Eq. (10) can be written as
$$\mathrm{d}z = [f(t)\, z - g(t)^2(\nabla_x \log q_t(z) - \overline{\nabla_x \log q_t(z)},\ \nabla_h \log q_t(z))]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h),$$
which is the score function form of the reverse-time SDE.
We can also write the score function in Eq. (9) as $\nabla_{\hat{z}} \log \hat{q}_t(\hat{z}) = \mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} \nabla_{\hat{z}} \log \hat{q}(\hat{z}_t|\hat{z}_0) = -\frac{1}{\sqrt{\beta_{t|0}}}\, \mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} \hat{\epsilon}_t$, where $\hat{\epsilon}_t = \frac{\hat{z}_t - \sqrt{\alpha_{t|0}}\, \hat{z}_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise injected to $\hat{z}_0$. With this expression, we have $T(\nabla_{\hat{z}} \log \hat{q}_t(\hat{z})) = -\frac{1}{\sqrt{\beta_{t|0}}}\, \mathbb{E}_{\hat{q}(\hat{z}_0|\hat{z}_t)} T(\hat{\epsilon}_t) = -\frac{1}{\sqrt{\beta_{t|0}}}\, \mathbb{E}_{q(z_0|z_t)} \epsilon_t$, where $\epsilon_t = \frac{z_t - \sqrt{\alpha_{t|0}}\, z_0}{\sqrt{\beta_{t|0}}}$ is the standard Gaussian noise in $X \times \mathbb{R}^{Md}$ injected to $z_0$. Thus, Eq. (10) can also be written as
$$\mathrm{d}z = \Big[f(t)\, z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\, \mathbb{E}_{q(z_0|z_t)} \epsilon_t\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h),$$
which is the noise prediction form of the reverse-time SDE.
A.3 EQUIVARIANCE
Theorem 1. Let $(\epsilon_\theta^x(z_t, t), \epsilon_\theta^h(z_t, t)) = \epsilon_\theta(z_t, t)$, where $\epsilon_\theta^x(z_t, t)$ and $\epsilon_\theta^h(z_t, t)$ are the predicted noise of $x_t$ and $h_t$ respectively. If for any orthogonal transformation $R \in \mathbb{R}^{n \times n}$, $\epsilon_\theta(z_t, t)$ is equivariant to $R$, i.e., $\epsilon_\theta(Rx_t, h_t, t) = (R\epsilon_\theta^x(x_t, h_t, t), \epsilon_\theta^h(x_t, h_t, t))$, and $p_T(z_T)$ is invariant to $R$, i.e., $p_T(Rx_T, h_T) = p_T(x_T, h_T)$, then $p_\theta(z_0)$ is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n \times n}$ is an orthogonal transformation. Let $z_t^\theta = (x_t^\theta, h_t^\theta)$ ($0 \le t \le T$) be the process determined by Eq. (5). Let $y_t^\theta = Rx_t^\theta$ and $u_t^\theta = (y_t^\theta, h_t^\theta)$. We use $p_t^u(u_t)$ and $p_t^z(z_t)$ to denote the distributions of $u_t^\theta$ and $z_t^\theta$ respectively, and they satisfy $p_t^u(y_t, h_t) = p_t^z(R^{-1}y_t, h_t)$.
By applying the transformation $Tz = (Rx, h)$ to Eq. (5), we know the new process $\{u_t^\theta\}_{t=0}^T$ satisfies the following SDE:
$$\begin{aligned} \mathrm{d}u = T\mathrm{d}z &= \Big[f(t)\, u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(R\epsilon_\theta^x(z, t), \epsilon_\theta^h(z, t))\Big]\mathrm{d}t + g(t)\, \mathrm{d}(R\tilde{w}^x, \tilde{w}^h) \\ &= \Big[f(t)\, u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}(\epsilon_\theta^x(Rx, h, t), \epsilon_\theta^h(Rx, h, t))\Big]\mathrm{d}t + g(t)\, \mathrm{d}(R\tilde{w}^x, \tilde{w}^h) && \text{// by equivariance} \\ &= \Big[f(t)\, u + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\, \epsilon_\theta(u, t)\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), && \text{// Wiener process is isotropic} \end{aligned}$$
and its initial distribution is $p_T^u(u_T) = p_T^u(y_T, h_T) = p_T^z(R^{-1}y_T, h_T) = p_T(R^{-1}y_T, h_T) = p_T(y_T, h_T) = p_T(u_T)$. Thus, the SDE of $\{u_t^\theta\}_{t=0}^T$ is exactly the same as that of $\{z_t^\theta\}_{t=0}^T$. This indicates that the distribution of $u_0^\theta$ is the same as the distribution of $z_0^\theta$, i.e., $p_0^u(u_0) = p_0^z(u_0) = p_\theta(u_0)$. Also note that $p_0^u(u_0) = p_0^z(R^{-1}y_0, h_0) = p_\theta(R^{-1}y_0, h_0)$. Thus, $p_\theta(u_0) = p_\theta(R^{-1}y_0, h_0)$, and consequently $p_\theta(Rx_0, h_0) = p_\theta(x_0, h_0)$. This means $p_\theta(z_0)$ is invariant to any orthogonal transformation, which includes rotational transformations as special cases.
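The equivariance assumption of Theorem 1 can also be tested numerically, as sketched below; `eps_theta` is assumed to be a callable returning the pair of predicted noises.

```python
import torch

def check_equivariance(eps_theta, x, h, t, atol=1e-5):
    """Check eps_theta(Rx, h, t) = (R eps^x, eps^h) for a random orthogonal
    R, i.e., the assumption of Theorem 1 (a numerical sketch)."""
    n = x.shape[-1]
    R, _ = torch.linalg.qr(torch.randn(n, n))   # random orthogonal matrix
    ex, eh = eps_theta(x, h, t)
    ex_r, eh_r = eps_theta(x @ R.T, h, t)       # rotate every coordinate
    return (torch.allclose(ex_r, ex @ R.T, atol=atol)
            and torch.allclose(eh_r, eh, atol=atol))
```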
Theorem 2. Suppose the assumptions in Theorem 1 hold and E(z, c, t) is invariant to any orthogonal transformation R, i.e., E(Rx,h, c, t) = E(x,h, c, t). Then pθ(z0|c) is invariant to any rotational transformation.
Proof. Suppose $R \in \mathbb{R}^{n \times n}$ is an orthogonal transformation. Taking the gradient of both sides of $E(Rx, h, c, t) = E(x, h, c, t)$ w.r.t. $x$, we get
$$R^\top \nabla_y E(y, h, c, t)\big|_{y=Rx} = \nabla_x E(x, h, c, t).$$
Multiplying both sides by $R$, we get
$$\nabla_y E(y, h, c, t)\big|_{y=Rx} = R\, \nabla_x E(x, h, c, t).$$
Let $\phi(z, c, t) = (\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\ \nabla_h E(z, c, t))$. Then we have
$$\begin{aligned} \phi(Rx, h, c, t) &= \big(\nabla_y E(y, h, c, t) - \overline{\nabla_y E(y, h, c, t)},\ \nabla_h E(y, h, c, t)\big)\big|_{y=Rx} \\ &= \big(R\nabla_x E(x, h, c, t) - R\,\overline{\nabla_x E(x, h, c, t)},\ \nabla_h E(Rx, h, c, t)\big) \\ &= \big(R(\nabla_x E(x, h, c, t) - \overline{\nabla_x E(x, h, c, t)}),\ \nabla_h E(x, h, c, t)\big). \end{aligned}$$
Thus, $\phi(z, c, t)$ is equivariant to $R$. Let $\hat{\epsilon}_\theta(z, c, t) = \epsilon_\theta(z, t) + \sqrt{\beta_{t|0}}\, \phi(z, c, t)$, which is a linear combination of two equivariant functions and is therefore also equivariant to $R$.
Then, Eq. (6) can be written as
$$\mathrm{d}z = \Big[f(t)\, z + \frac{g(t)^2}{\sqrt{\beta_{t|0}}}\, \hat{\epsilon}_\theta(z, c, t)\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim p_T(z_T).$$
According to Theorem 1, we know its marginal distribution at time t = 0, i.e., pθ(z0|c), is invariant to any rotational transformation.
B SAMPLING
In Algorithm 1, we present the Euler-Maruyama method for sampling from EEGSDE in Eq. (6).
Algorithm 1 Sample from EEGSDE using the Euler-Maruyama method
Require: Number of steps $N$
  $\Delta t = T/N$
  $z \leftarrow (x - \bar{x}, h)$, where $x \sim \mathcal{N}(0, 1)$, $h \sim \mathcal{N}(0, 1)$ {Sample from the prior $p_T(z_T)$}
  for $i = N$ to $1$ do
    $t \leftarrow i\Delta t$
    $g_x \leftarrow \nabla_x E(z, c, t)$, $g_h \leftarrow \nabla_h E(z, c, t)$ {Calculate the gradient of the energy function}
    $g \leftarrow (g_x - \bar{g}_x, g_h)$ {Subtract the CoM of the gradient}
    $F \leftarrow f(t)\, z + g(t)^2 \big(\frac{1}{\sqrt{\beta_{t|0}}}\, \epsilon_\theta(z, t) + g\big)$
    $\epsilon \leftarrow (\epsilon^x - \bar{\epsilon}^x, \epsilon^h)$, where $\epsilon^x \sim \mathcal{N}(0, 1)$, $\epsilon^h \sim \mathcal{N}(0, 1)$
    $z \leftarrow z - F\Delta t + g(t)\sqrt{\Delta t}\, \epsilon$ {Update $z$ according to Eq. (6)}
  end for
  return $z$
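A direct Python transcription of Algorithm 1 is sketched below; the noise prediction network, the energy gradient, and the scalar schedules $f$, $g$ and $\beta_{t|0}$ are assumed to be supplied as callables.

```python
import torch

def eegsde_sample(eps_theta, energy_grad, f, g, beta_bar, N, T, M, n, d):
    """Euler-Maruyama sampler for Eq. (6), mirroring Algorithm 1 (a sketch).
    beta_bar(t) plays the role of beta_{t|0}."""
    dt = T / N
    x = torch.randn(M, n); x = x - x.mean(0, keepdim=True)  # prior in X
    h = torch.randn(M, d)
    for i in range(N, 0, -1):
        t = i * dt
        gx, gh = energy_grad(x, h, t)           # gradient of E(z, c, t)
        gx = gx - gx.mean(0, keepdim=True)      # subtract the CoM
        ex, eh = eps_theta(x, h, t)
        Fx = f(t) * x + g(t) ** 2 * (ex / beta_bar(t) ** 0.5 + gx)
        Fh = f(t) * h + g(t) ** 2 * (eh / beta_bar(t) ** 0.5 + gh)
        nx = torch.randn(M, n); nx = nx - nx.mean(0, keepdim=True)
        x = x - Fx * dt + g(t) * dt ** 0.5 * nx
        h = h - Fh * dt + g(t) * dt ** 0.5 * torch.randn(M, d)
    return x, h
```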
C CONDITIONAL NOISE PREDICTION NETWORKS
In Eq. (6), we can alternatively use a conditional noise prediction network ϵθ(z, c, t) for a stronger guidance, as follows
$$\mathrm{d}z = \Big[f(t)\, z + g(t)^2\Big(\frac{1}{\sqrt{\beta_{t|0}}}\, \epsilon_\theta(z, c, t) + \big(\nabla_x E(z, c, t) - \overline{\nabla_x E(z, c, t)},\ \nabla_h E(z, c, t)\big)\Big)\Big]\mathrm{d}t + g(t)\, \mathrm{d}(\tilde{w}^x, \tilde{w}^h), \qquad z_T \sim p_T(z_T).$$
The conditional noise prediction network is trained similarly to the unconditional one, using the following MSE loss
$$\min_\theta\ \mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} w(t)\, \|\epsilon_\theta(z_t, c, t) - \epsilon_t\|^2.$$
D PARAMETERIZATION OF NOISE PREDICTION NETWORKS
We parameterize the noise prediction network following Hoogeboom et al. (2022), and we provide the specific parameterization for completeness. For the unconditional model $\epsilon_\theta(z, t)$, we first concatenate each atom feature $h^i$ and $t$, which gives $h^{i\prime} = (h^i, t)$. Then we input $x$ and $h' = (h^{1\prime}, \ldots, h^{M\prime})$ to the EGNN as follows:
$$(a^x, a^{h'}) = \mathrm{EGNN}(x, h') - (x, 0).$$
Finally, we subtract the CoM of $a^x$, which gives the parameterization of $\epsilon_\theta(z, t)$:
$$\epsilon_\theta(z, t) = (a^x - \bar{a}^x,\ a^h),$$
where $a^h$ comes from discarding the last component of $a^{h'}$ that corresponds to the time.
For the conditional model $\epsilon_\theta(z, c, t)$, we additionally concatenate $c$ to the atom feature $h^i$, i.e., $h^{i\prime} = (h^i, t, c)$, and the other parts of the parameterization remain the same.
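A sketch of this parameterization follows; the feature layout (time and condition appended as the last columns) and the EGNN calling convention are illustrative assumptions.

```python
import torch

def eps_from_egnn(egnn, x, h, t, c=None):
    """Noise prediction parameterization of Appendix D (a sketch):
    concatenate t (and optionally c) to each atom feature, run the EGNN,
    subtract x, drop the extra feature columns, and subtract the CoM of
    the coordinate output."""
    M, d = h.shape
    extra = [torch.full((M, 1), float(t))]
    if c is not None:
        extra.append(c.view(1, -1).expand(M, -1))
    ax, ah = egnn(x, torch.cat([h] + extra, dim=-1))
    ax = ax - x                                 # EGNN(x, h') - (x, 0)
    ax = ax - ax.mean(0, keepdim=True)          # subtract the CoM
    return ax, ah[:, :d]                        # drop time/condition dims
```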
E DETAILS OF ENERGY FUNCTIONS
E.1 PARAMETERIZATION OF TIME-DEPENDENT MODELS
The time-dependent property prediction model $g(z_t, t)$ is parameterized using the second component in the output of the EGNN (see Section 3) followed by a decoder (Dec):
$$g(z_t, t) = \mathrm{Dec}(\mathrm{EGNN}^h(x_t, h_t')), \qquad h_t' = \mathrm{concatenate}(h_t, t), \quad (11)$$
where the concatenation is performed on each atom feature, and the decoder is a small neural network based on Satorras et al. (2021b). This parameterization ensures that the energy function
E(zt, c, t) is invariant to orthogonal transformations, and thus the distribution of generated samples is also invariant according to Theorem 2.
Similarly, the time-dependent multi-label classifier is parameterized by an EGNN as
$$m(z_t, t) = \sigma(\mathrm{Dec}(\mathrm{EGNN}^h(x_t, h_t'))), \qquad h_t' = \mathrm{concatenate}(h_t, t).$$
The multi-label classifier has the same backbone as the property prediction model in Eq. (11), except that the decoder outputs a vector of dimension $L$, and the sigmoid function $\sigma$ is adopted for multi-label classification. Similarly to Eq. (11), the EGNN in the multi-label classifier guarantees the invariance of the distribution of generated samples according to Theorem 2.
E.2 TRAINING OBJECTIVES OF ENERGY FUNCTIONS
Time-dependent property prediction model. Since the quantum property is a scalar, we train the time-dependent property prediction model $g(z_t, t)$ using the $\ell_1$ loss
$$\mathbb{E}_t \mathbb{E}_{q(c, z_0, z_t)} |g(z_t, t) - c|,$$
where $t$ is uniformly sampled from $[0, T]$.
Time-dependent multi-label classifier. Since the fingerprint is a bitmap, predicting it can be viewed as a multi-label classification task. Thus, we use a time-dependent multi-label classifier $m(z_t, t)$, and train it using the binary cross entropy loss
$$-\mathbb{E}_t \mathbb{E}_{q(c, x_0, x_t)} \sum_{l=1}^L c_l \log m_l(z_t, t) + (1 - c_l) \log(1 - m_l(z_t, t)),$$
where $t$ is uniformly sampled from $[0, T]$, and $m_l(z_t, t)$ is the $l$-th component of $m(z_t, t)$.
F EXPERIMENTAL DETAILS
F.1 HOW TO GENERATE MOLECULES TARGETED TO MULTIPLE QUANTUM PROPERTIES
When we want to generate molecules with $K$ quantum properties $c = (c_1, c_2, \ldots, c_K)$, we combine the energy functions for single properties linearly as $E(z_t, c, t) = \sum_{k=1}^K E_k(z_t, c_k, t)$, where $E_k(z_t, c_k, t) = s_k|g_k(z_t, t) - c_k|^2$ is the energy function for the $k$-th property, $s_k$ is the scaling factor and $g_k(z_t, t)$ is the time-dependent property prediction model for the $k$-th property. Then we use the gradient of $E(z_t, c, t)$ to guide the reverse SDE as described in Eq. (6).
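Since the gradient of a sum is the sum of the gradients, the combined guidance can be sketched as below; the per-property gradient callables are assumed to already include their scaling factors $s_k$.

```python
def combined_energy_grad(grad_fns, x, h, t):
    """Gradient of E(z_t, c, t) = sum_k E_k(z_t, c_k, t) (a sketch):
    sum the (grad_x, grad_h) pairs of the single-property energies."""
    gx, gh = 0.0, 0.0
    for grad_k in grad_fns:
        gxk, ghk = grad_k(x, h, t)
        gx, gh = gx + gxk, gh + ghk
    return gx, gh
```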
F.2 THE “U-BOUND” AND “#ATOMS” BASELINES
The “U-bound” and “#Atoms” baselines are from Hoogeboom et al. (2022). The “U-bound” baseline shuffles the labels in $D_b$ and then calculates the loss of the property prediction model on it, which can be regarded as an upper bound of the MAE. The “#Atoms” baseline predicts a quantum property $c$ using only the number of atoms in a molecule.
F.3 GENERATING MOLECULES WITH DESIRED QUANTUM PROPERTIES
For the noise prediction network, we use the same setting as EDM (Hoogeboom et al., 2022) for a fair comparison, where the model is trained for ∼2000 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999.

The EGNN used in the energy function has 192 hidden features and 7 layers. We train for 2000 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an EMA with a rate of 0.9999.

During evaluation, we need to generate a set of molecules. Following EDM (Hoogeboom et al., 2022), we first sample the number of atoms in a molecule $M \sim p(M)$ and the property value $c \sim p(c|M)$ (or $c_1 \sim p(c_1|M), c_2 \sim p(c_2|M), \ldots, c_K \sim p(c_K|M)$ for multiple properties). Here $p(M)$ is the distribution of molecule sizes on the training data, and $p(c|M)$ is the distribution of the property on the training data. Then we generate a molecule given $M$ and $c$.
F.4 GENERATING MOLECULES WITH TARGET STRUCTURES
Computation of Tanimoto similarity. Let $S_g$ be the set of bits that are set to 1 in the fingerprint of a generated molecule, and $S_t$ be the set of bits that are set to 1 in the target structure. The Tanimoto similarity is defined as $|S_g \cap S_t|/|S_g \cup S_t|$, where $|\cdot|$ denotes the number of elements in a set.

Experimental details on QM9. For the backbone of the noise prediction network, we use a three-layer MLP with 768, 512 and 192 hidden nodes as an embedding to encode the fingerprint, and add its output to the embedding of the atom features $h$, which is then fed into the following EGNN. The EGNN has 256 hidden features and 9 layers. We train for 1500 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999. The energy function is trained for 1750 epochs with a batch size of 128, a learning rate of 0.0001 with the Adam optimizer, and an EMA with a rate of 0.9999. Its EGNN has 192 hidden features and 7 layers. For the baseline cG-SchNet, we reproduce it using the public code. Since the default data split in cG-SchNet differs from ours, we train cG-SchNet under the same data split as ours for a fair comparison. We report results at 200 epochs (there is no gain on the similarity metric after 150 epochs). We evaluate the Tanimoto similarity on the whole test set.
Experimental details on GEOM-Drug. We use the same data split of GEOM-Drug as Hoogeboom et al. (2022), where the training, validation and test sets include 554K, 70K and 70K samples respectively. We train the noise prediction network and the energy function on the training set. For the noise prediction network, we use the recommended hyperparameters of EDM (Hoogeboom et al., 2022), where the EGNN has 256 hidden features and 4 layers, and the other parts are the same as in QM9. We train for 10 epochs with a batch size of 64, a learning rate of 0.0001 with the Adam optimizer, and an exponential moving average (EMA) with a rate of 0.9999. The backbone of the energy function is the same as in QM9, and we train the energy function for 14 epochs. We evaluate the Tanimoto similarity on 10K molecules randomly selected from the test set.
G ADDITIONAL RESULTS
G.1 ABLATION STUDY ON NOISE PREDICTION NETWORKS AND ENERGY GUIDANCE
We perform an ablation study on the conditioning, i.e., whether to use a conditional or unconditional noise prediction network, and on the energy guidance. When neither the conditioning nor the energy guidance is adopted, a single unconditional model cannot perform conditional generation, and therefore we only report its atom stability and molecule stability. As shown in Table 5, Table 6 and Table 7, the conditional noise prediction network improves the MAE compared to the unconditional one, and the energy guidance improves the MAE compared to a sole conditional model. Neither the conditioning nor the energy guidance affects the atom stability and the molecule stability much.
G.2 RESULTS ON NOVELTY, ATOM STABILITY AND MOLECULE STABILITY
For completeness, we also report novelty, atom stability and molecule stability on 10K generated molecules. Below we briefly introduce these metrics.
• Novelty (Simonovsky & Komodakis, 2018) is the proportion of generated molecules that do not appear in the training set. Specifically, let $G$ be the set of generated molecules; the novelty is calculated as $1 - \frac{|G \cap D_b|}{|G|}$ (a minimal sketch follows this list). Note that the novelty is evaluated on $D_b$, since the reproduced conditional EDM and our method are trained on $D_b$. This leads to an inflated value compared to the one evaluated on the whole dataset (Hoogeboom et al., 2022).
• Atom stability (AS) (Hoogeboom et al., 2022) is the proportion of atoms that have the right valency. Molecule stability (MS) (Hoogeboom et al., 2022) is the proportion of generated molecules where all atoms are stable. We use the official implementation for the two metrics from the EDM paper (Hoogeboom et al., 2022).
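As referenced above, a minimal sketch of the novelty computation follows; representing molecules as hashable canonical identifiers (e.g., canonical SMILES strings) is an assumption for illustration.

```python
def novelty(generated, train_set):
    """Novelty 1 - |G ∩ D_b| / |G| (Simonovsky & Komodakis, 2018), where
    G is the set of generated molecules (a sketch)."""
    G = set(generated)
    return 1.0 - len(G & set(train_set)) / len(G)
```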
As shown in Table 8 and Table 9, conditional EDM and EEGSDE with a small scaling factor have slightly better stability, and EEGSDE with a large scaling factor has slightly better novelty in general. The additional energy changes the distribution of generated molecules, which in general improves the novelty of generated molecules. Since there is a tradeoff between novelty and stability (see the caption of Table 5 in the EDM paper (Hoogeboom et al., 2022)), a slight decrease in stability is possible when the scaling factor is large.
G.3 VISUALIZATION OF THE EFFECT OF THE SCALING FACTOR
We visualize the effect of the scaling factor in Figure 4, where the generated structures align better as the scaling factor grows.
G.4 REPRODUCE
We compare our reproduced results and the original results of conditional EDM (Hoogeboom et al., 2022) in Table 10. The results are consistent.
G.5 GENERATING MOLECULES WITH TARGET STRUCTURES ON GEOM-DRUG
We plot generated molecules on GEOM-Drug in Figure 5, and it can be observed that the atom types of generated molecules with EEGSDE often match the target better than conditional EDM.
G.6 THE DISTRIBUTIONS OF PROPERTIES
We plot the distribution of the properties of the training set, as well as the distribution of the properties of generated molecules (calculated by the Gaussian software). As shown in Figure 6, these distributions match well. | 1. What is the focus and contribution of the paper regarding equivariant molecule generation?
2. What are the strengths and weaknesses of the proposed approach, particularly in its formulation and results?
3. Do you have any concerns or questions about the choice of energy functions or the training process of the time-dependent property prediction model?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors present an energy-guided score model formulation for the task of equivariantly generating molecules. This work builds upon lots of recent references on SDE score matching models, equivariant networks (EGNNs) and specifically the work of Hoogeboom et al (2022), EDM. The main contribution of the paper is the addition of an energy function in the score matching algorithm, which can be used to generate molecules with desired properties.
Strengths And Weaknesses
Overall, the paper is somewhat incremental but in general technically sound and a pleasure to read.
Strengths:
Clear motivation to solve a real problem (inverse molecular design)
A lot of effort went into explaining previous methods. This is a fantastic introductory paper that requires no reference reading for context.
Simple and elegant formulation
Code provided in the supplementary material
Weaknesses:
Main results on Tables 1, 2, 3 could be provided with standard deviations to assess how stable results are depending on the chosen seed
Supplied code could be cleaned / improved with clearer instructions on how to run the experiments in the paper.
Other comments:
It would be fantastic if the authors physically motivated the choice for using energy functions (Eq. 6). The reverse SDE is presented without a lot of justification
In Section 6 the authors claim that they tried a binary cross-entropy loss for generating molecules with desired structures, but ended up using a mean squared error loss. In Appendix D, however, the binary cross entropy is the one described for this task.
It is somewhat unclear to me whether the time-dependent property prediction model is jointly trained with the score model or whether this is trained separately. It would be great if the authors could clarify along these lines.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written and provides a thorough related work section to help understand the main contribution of the paper (inverse molecular design).
Novelty-wise this work is somewhat incremental, as it builds on a lot of recent work (e.g. SDE formulation of denoising diffusion models, EGNNs) although to the best of my knowledge it is the first to propose how to do inverse molecular design with these.
The authors provide code in the supplementary information to reproduce some of the experiments, although the quality of the code could significantly be improved. |
ICLR | Title
Domain-Free Adversarial Splitting for Domain Generalization
Abstract
Domain generalization is an approach that utilizes several source domains to train the learner to be generalizable to unseen target domain to tackle domain shift issue. It has drawn much attention in machine learning community. This paper aims to learn to generalize well to unseen target domain without relying on the knowledge of the number of source domains and domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model the domain generalization as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem which can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize domain shift between them using current learner, and then updates the learner on this splitting to be able to generalize well from train-subset to val-subset using meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results and confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method.
1 INTRODUCTION
Deep learning approach has achieved great success in image recognition (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). However, deep learning methods mostly succeed when the training and test data are sampled from the same distribution (i.e., the i.i.d. assumption), and this assumption is often violated in real-world applications since the equipment/environments that generate data often differ between the training and test datasets. When there exists a distribution difference (domain shift (Torralba & Efros, 2011)) between the training and test datasets, the performance of the trained model, i.e., the learner, degrades significantly.
To tackle the domain shift issue, the domain adaptation approach (Pan & Yang, 2010; Daume III & Marcu, 2006; Huang et al., 2007) learns a transferable learner from a source domain to a target domain. Domain adaptation methods align distributions of different domains either in feature space (Long et al., 2015; Ganin et al., 2016) or in raw pixel space (Hoffman et al., 2018), which relies on unlabeled data from the target domain at training time. However, in many applications it is unrealistic to access the unlabeled target data; this prevents us from using the domain adaptation approach in such settings, and motivates research on the learning problem of domain generalization.
Domain generalization (DG) approach (Blanchard et al., 2011; Muandet et al., 2013) commonly uses several source domains to train a learner that can generalize to an unseen target domain. The underlying assumption is that there exists a latent domain invariant feature space across source domains and unseen target domain. To learn the domain invariant features, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b) explicitly align distributions of different source domains in feature space. (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019) split source domains into meta-train and meta-test to simulate domain shift and train learner in a meta-learning approach. (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Ryu et al., 2020) augment images or features to train learner to enhance generalization capability.
Conventional domain generalization methods assume that the domain labels are available. But in a more realistic scenario, the domain labels may be unknown (Wang et al., 2019). To handle this domain-free setting, Carlucci et al. (2019) combines supervised learning and self-supervised learning to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and trains a domain invariant feature extractor via adversarial training. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Another line of works (Volpi et al., 2018; Qiao et al., 2020) tackle the single source setting that the training set comprises a single domain, and the train and test data are from different domains.
In this work, we focus on a general learning scenario of domain generalization as follows. First, we do not know the domain label of each data and do not assume that there are several domains in the training dataset. Second, we do not assume that the training and test data are from different domains (e.g., styles). However, the previous domain-free DG methods (Matsuura & Harada, 2020) commonly evaluate on the datasets (e.g., PACS) composed of several domains though they do not use domain labels in training.
In our domain-free setting, we do not assume or know the domains in the training dataset; we therefore model domain generalization as a learning problem in which the learner should generalize well for any splitting of train/val subsets, i.e., synthetic source/target domains, over the training dataset. This explicitly enforces that the trained learner should be generalizable to any possible domain shifts within the training dataset.
To achieve this goal, we propose an adversarial splitting model that is a min-max optimization problem, due to the difficulty of enumerating all splittings. In this min-max problem, we adversarially split training dataset to train/val subsets by maximizing the domain shift between them based on the given learner, and then update learner by minimizing the prediction error on val-subset using meta-learning approach given the splitting. By optimizing this min-max problem, we enforce the learner to generalize well even in the worst-case splitting. We also investigate L2-normalization of features in our domain generalization method. It is surprisingly found that L2-normalization can improve performance of learner and mitigate gradient explosion in the meta-learning process of DG. We further theorectically analyze the underlying reasons for this finding. This proposed domain generalization approach is dubbed Domain-Free Adversarial Splitting, i.e., DFAS.
To verify the effectiveness of our method, we conduct extensive experiments on the benchmark datasets PACS, Office-Home and CIFAR-10 under different settings with multiple/single source domains. In experiments where the training data are from several source domains, our method achieves state-of-the-art results on both the PACS and Office-Home datasets. We also find that our method significantly outperforms baselines in experiments where the training data are from a single source domain on PACS and CIFAR-10. We further confirm the effectiveness of our method by an ablation study.
Based on domain adaptation theory, we also derive an upper bound of the generalization error on unseen target domain. We analyze that the terms in this upper bound are implicitly minimized by our method. This theoretical analysis partially explains the success of our method.
2 RELATED WORKS
We summarize and compare with related domain generalization (DG) methods in two perspectives, i.e., DG with domain labels and DG without domain labels.
DG with domain labels. When the domain labels are available, there are three categories of methods for DG. First, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b; Piratla et al., 2020) learn domain invariant features by aligning feature distributions or by common/specific feature decomposition. Second, (Li et al., 2019a; Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019; Du et al., 2020a;b) are based on the meta-learning approach that splits given source domains into meta-train and meta-test domains and trains the learner in an episodic training paradigm. Third, (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Wang et al., 2020) augment fake-domain data to train the learner for enhanced generalization capability. Our method most closely relates to the second category above. Differently, however, we consider the DG problem in the domain-free setting and adversarially split the training dataset to synthesize domain shift in a principled min-max optimization method, instead of using the leave-one-domain-out splitting of these methods.
DG without domain labels. When the domain label is unavailable, to enhance the generalization ability of the learner, Wang et al. (2019) extracts robust feature representation by projecting out superficial patterns like color and texture. Carlucci et al. (2019) proposes to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and learns domain invariant features via adversarial training of a feature extractor and a domain discriminator. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Volpi et al. (2018) and Qiao et al. (2020) propose adversarial data augmentation to tackle the setting in which the training set comprises a single domain. In methodology, these methods either explicitly force the learner to extract robust features (Wang et al., 2019; Matsuura & Harada, 2020; Huang et al., 2020) or augment new data to enlarge the training set (Carlucci et al., 2019; Qiao et al., 2020; Volpi et al., 2018). In contrast, our method is a novel meta-learning approach for DG that introduces adversarial splitting of the training dataset during training, without relying on data/domain augmentation.
3 METHOD
In our setting, since we do not assume or know the domains in the training dataset, the training data could be independently sampled from several underlying source domains or just from a single source domain. We denote $S = \{(x_i, y_i)\}_{i=1}^N$ as the training dataset. Our goal is to train a learner on $S$ that can generalize well to an unseen target domain.
In the following sections, we introduce details of our proposed model in Sect. 3.1, followed by its optimization method in Sect. 3.2. We also investigate L2-normalization for domain generalization in Sect. 3.3. Theoretical analysis for our method is presented in Sect. 4. Experimental results are reported in Sect. 5. Sect. 6 concludes this paper.
3.1 DOMAIN-FREE ADVERSARIAL SPLITTING MODEL
As mentioned in Sect. 1, we model DG as a learning problem that enforces the learner to generalize well for any train/val subset splitting of the training dataset. The learner is trained using the meta-learning approach (Finn et al., 2017). To formulate our idea mathematically, we first introduce some notation. We denote $f$ as a function/learner ($f$ could be a deep neural network, e.g., ResNet (He et al., 2016)) that outputs the classification score of the input image, $l$ as the loss, such as cross-entropy, and $S_t$ and $S_v$ as the train-subset and val-subset respectively, such that $S = S_t \cup S_v$ and $S_t \cap S_v = \emptyset$. The formulated optimization problem for domain generalization is
min_w (1/|Γ_ξ|) Σ_{S_v ∈ Γ_ξ} L(θ(w); S_v) + R(w)
s.t. θ(w) = arg min_θ L(θ; S_t, w),  S_t = S − S_v.    (1)
In Eq. (1), Γ_ξ = {S_v : S_v ⊂ S, |S_v| = ξ} is the set of all possible val-subsets of S of size ξ, S_t = S − S_v is the train-subset paired with each S_v, and L(θ(w); S_v) = (1/|S_v|) Σ_{(x,y)∈S_v} l(f_{θ(w)}(x), y) is the loss on S_v, where θ(w) denotes the parameters of f, L(θ; S_t, w) is L(θ; S_t) with θ initialized by w, and R(w) is a regularization term. In the optimization model of Eq. (1), the parameters θ(w) of the learner trained on S_t are treated as a function of the initial parameters w. To force the learner trained on S_t to generalize well on S_v, we directly minimize the loss L(θ(w); S_v) on the val-subset S_v, dubbed the generalization loss, w.r.t. the parameters θ(w) trained on S_t. Solving Eq. (1) forces the learner to generalize well from any train-subset to the corresponding val-subset.
Since |Γξ| may be extremely large, it is infeasible to enumerate all possible train/val splittings. Thus, we propose the following adversarial splitting model instead,
min_w max_{S_v ∈ Γ_ξ} L(θ(w); S_v) + R(w)
s.t. θ(w) = arg min_θ L(θ; S_t, w),  S_t = S − S_v.    (2)
In the min-max problem of Eq. (2), the train/val (S_t/S_v) splitting is optimized to maximize the generalization loss, increasing the domain shift between the train and val subsets by finding the splitting that is hardest for the learner. Meanwhile, w is optimized by minimizing the generalization loss of the learner over this splitting. Solving the adversarial splitting optimization model in Eq. (2) enforces the learner to be generalizable even for the worst-case splitting. We therefore expect the trained learner to be robust to the domain shifts within the training dataset. For the regularization term R(w), we set it to be the training loss on S_t (i.e., R(w) = L(w; S_t)), which additionally constrains that the learner with parameters w should be effective on S_t (Li et al., 2018a). The effect of the hyper-parameter ξ will be discussed in Appendix A.2.
In conventional adversarial machine learning, adversarial training is performed over adversarial samples and the learner to increase the robustness of the learner to adversarial corruption (Goodfellow et al., 2015). In contrast, in our optimization model of Eq. (2), adversarial training is conducted over the data splitting and the learner, forcing the learner to be robust to the domain shift between the train/val subsets. Our model bridges adversarial training and meta-learning. It is a general learning framework for domain generalization and is a complement to adversarial machine learning.
3.2 OPTIMIZATION
This section focuses on the optimization of Eq. (2). Since Eq. (2) is a min-max optimization problem, we alternately update S_v and w, fixing one while updating the other. We also need to handle the inner loop for the optimization of θ(w) in the bilevel optimization problem of Eq. (2). We next discuss these updating steps in detail. The convergence and computational cost of this algorithm are discussed in Appendix A.3 and A.4 respectively.
Inner loop for optimization of θ(w). We adopt a finite number of gradient-descent steps to approximate the minimizer θ(w) of the inner objective L(θ; S_t) with initial value w. This approximation technique was introduced to the machine learning community several years ago (Sun & Tappen, 2011; Finn et al., 2017; Fan et al., 2018). For convenience of computation, following (Li et al., 2018a; Dou et al., 2019), we take only one gradient-descent step:
θ(w) = w − α ∇_θ L(θ; S_t)|_{θ=w},    (3)
where α is the step size of inner optimization and its effect will be discussed in Appendix A.2.
Optimizing w with fixed S_v. For convenience, we denote g_w^t = ∇_θ L(θ; S_t)|_{θ=w}. Fixing S_v (S_t is then fixed), w can also be updated by gradient descent, i.e.,
w = w − η ∇_w ( L(w − α g_w^t; S_v) + R(w) ),    (4)
where η is the step size of outer optimization.
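To make the two-level update of Eqs. (3)-(4) concrete, below is a minimal PyTorch-style sketch for a linear learner f_w(x) = x @ w; the function and variable names are ours for illustration and do not come from the released code.

import torch

def meta_step(w, x_t, y_t, x_v, y_v, alpha=1e-5, eta=1e-3):
    ce = torch.nn.functional.cross_entropy
    # Eq. (3): one inner gradient step on the train-subset S_t; create_graph=True
    # keeps the graph so the outer gradient can flow through theta(w).
    g_t = torch.autograd.grad(ce(x_t @ w, y_t), w, create_graph=True)[0]
    theta = w - alpha * g_t
    # Eq. (4): generalization loss on S_v plus the regularizer R(w) = L(w; S_t).
    outer = ce(x_v @ theta, y_v) + ce(x_t @ w, y_t)
    g_w = torch.autograd.grad(outer, w)[0]
    return (w - eta * g_w).detach().requires_grad_(True)

Here w is a leaf tensor of shape (feature dimension, number of classes) with requires_grad=True; differentiating the outer loss w.r.t. w, rather than θ, is what yields the second-order meta-gradient.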
Finding the hardest splitting S_v with fixed w. Fixing w, to find the S_v ∈ Γ_ξ that maximizes L(w − α g_w^t; S_v), we perform a first-order Taylor expansion: L(w − α g_w^t; S_v) ≈ L(w; S_v) − α ⟨g_w^t, g_w^v⟩, where g_w^v = ∇_θ L(θ; S_v)|_{θ=w} and ⟨·,·⟩ denotes the inner product. From the definitions of L, g_w^t and g_w^v, the optimization problem max_{S_v∈Γ_ξ} {L(w; S_v) − α ⟨g_w^t, g_w^v⟩} can be written as max_{S_v∈Γ_ξ} (1/|S_v|) Σ_{(x,y)∈S_v} [ l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), g_w^t⟩ ]. This problem is equivalent to the following splitting formulation:
max_{S_v, A} Σ_{(x,y)∈S_v} l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩   s.t. A = g_w^t, S_v ∈ Γ_ξ,    (5)
where we have introduced an auxiliary variable A. Eq. (5) can be solved by alternately updating S_v and A. Given A, we compute and rank the values of l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩ for all (x, y) ∈ S and select the ξ samples with the largest values to constitute S_v. Given S_v (S_t is then given), we update A by A = g_w^t = (1/|S_t|) Σ_{(x,y)∈S_t} ∇_w l(f_w(x), y). We also discuss the details and convergence of this alternating iteration in Appendix C. Since computing the gradient w.r.t. all parameters is time- and memory-consuming, we only compute the gradient w.r.t. the parameters of the final layer of the learner f.
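For concreteness, a minimal NumPy sketch of this alternating update follows, assuming per-sample losses L (shape (N,)) and flattened per-sample final-layer gradients G (shape (N, d)) have been precomputed; the names are ours.

import numpy as np

def adversarial_split(L, G, xi, alpha, n_iters=5):
    A = G[np.random.randint(len(L))]        # initialize A with one sample's gradient
    for _ in range(n_iters):
        scores = L - alpha * (G @ A)        # l(f_w(x), y) - alpha <grad, A> per sample
        val_idx = np.argsort(scores)[-xi:]  # the xi largest scores form the hardest S_v
        train_idx = np.setdiff1d(np.arange(len(L)), val_idx)
        A = G[train_idx].mean(axis=0)       # A = g_w^t, the mean gradient on S_t
    return train_idx, val_idx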
3.3 L2-NORMALIZATION FOR EXTRACTED FEATURE
L2-normalization has been used in face recognition (Liu et al., 2017; Wang et al., 2018) and domain adaptation (Saito et al., 2019; Gu et al., 2020), but is rarely investigated in domain generalization; we investigate it for domain generalization in this paper. Surprisingly, we find in experiments that L2-normalization not only improves the performance of the learner (see Sect. 5.4), but also mitigates the gradient explosion (see Sect. 5.5) that occurs frequently during the training of meta-learning for DG (Finn et al., 2017; Dou et al., 2019). We next discuss the details of L2-normalization in our method and analyze why L2-normalization mitigates gradient explosion.
Feature L2-normalization. The L2-normalization is used as a component of our learner f. Specifically, we decompose f into a feature extractor f^e (e.g., the convolutional layers of ResNet), the transform f^n representing L2-normalization, and a classifier f^c, i.e., f = f^c ∘ f^n ∘ f^e. The feature of an input image x extracted by f^e is fed to f^n to output a unit vector z, i.e., z = f^n(f^e(x)) = f^e(x)/‖f^e(x)‖.
The classifier f^c consists of unit weight vectors W = [w_1, w_2, ..., w_K], where K is the number of classes and ‖w_k‖ = 1 for all k. f^c takes z as input and outputs the classification score vector σ_{m,s}(W^⊤z), where σ_{m,s}(·) is the marginal softmax function defined by
[σ_{m,s}(W^⊤z)]_k = exp(s(w_k^⊤z − m·I_{{k=y}})) / Σ_{k′=1}^K exp(s(w_{k′}^⊤z − m·I_{{k′=y}})),   k = 1, 2, ..., K,    (6)
where y is the label of x, [·]k indicates the k-th element, I{a} is the indicator function that returns 1 if a is true, 0 otherwise, m and s are hyper-parameters indicating margin and radius respectively.
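A short PyTorch sketch of this normalized classifier with the marginal softmax of Eq. (6) follows; feeding the returned logits to a standard cross-entropy loss reproduces Eq. (6). The function name is ours, and s and m default to the PACS values used later in the paper.

import torch.nn.functional as F

def margin_logits(feat, weight, labels, s=7.5, m=0.2):
    z = F.normalize(feat, dim=1)                          # z = f^e(x) / ||f^e(x)||
    W = F.normalize(weight, dim=1)                        # unit class weight vectors w_k
    cos = z @ W.t()                                       # cosine scores w_k^T z
    cos = cos - m * F.one_hot(labels, W.size(0)).float()  # subtract margin m at the true class
    return s * cos                                        # scale by radius s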
Analysis of mitigating gradient explosion. We find that L2-normalization mitigates gradient explosion in the training of meta-learning for domain generalization. For the sake of simplicity, we analyze gradient norm of loss w.r.t. parameters of f c in the meta-learning process of domain generalization, with fe as fixed function. Without loss of generality, we consider the case that K = 2 (i.e., binary classification), s = 1 and m = 0. In this case, we have the following proposition. Proposition 1. Under the above setting, if the input feature of f c is L2-normalized, the gradient norm of loss w.r.t. parameters of f c in the meta-learning process of DG is bounded.
Sketch of proof. Given a feature z, the loss of binary classification is L(w; z) = −y log(σ(w^⊤z)) − (1 − y) log(1 − σ(w^⊤z)), where σ is the sigmoid function. Let w′ = w − α∇_w L(w; z); then ∇_w L(w′; z) = (I − αH) ∇_{w′} L(w′; z), where H is the Hessian matrix. The gradient norm satisfies ‖∇_w L(w′; z)‖ ≤ ‖I − αH‖ ‖∇_{w′} L(w′; z)‖ ≤ (1 + |α| ‖H‖) ‖∇_{w′} L(w′; z)‖. Since ∇_{w′} L(w′; z) = (p − y)z and H = p(1 − p)zz^⊤ where p = σ(w^⊤z), we have ‖H‖ = sup_{‖u‖=1} ‖Hu‖ ≤ sup_{‖u‖=1} ‖zz^⊤u‖ ≤ ‖z‖² and ‖∇_{w′} L(w′; z)‖ ≤ ‖z‖. If ‖z‖ = 1, we have ‖∇_w L(w′; z)‖ ≤ 1 + |α|.

According to Proposition 1, L2-normalization can mitigate gradient explosion under the above setting. The analysis of the gradient norm of the loss w.r.t. the parameters of both f^c and f^e in the meta-learning process is much more complex, left for our future work.
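Proposition 1 can also be checked numerically; the following sketch (our construction, with K = 2 reduced to a single logit) verifies the bound ‖∇_w L(w′; z)‖ ≤ 1 + |α| for a unit-norm feature.

import torch

torch.manual_seed(0)
alpha = 0.1
z = torch.nn.functional.normalize(torch.randn(8), dim=0)  # unit-norm feature, ||z|| = 1
w = torch.randn(8, requires_grad=True)
y = torch.tensor(1.0)
bce = torch.nn.functional.binary_cross_entropy_with_logits
g = torch.autograd.grad(bce(w @ z, y), w, create_graph=True)[0]
w_prime = w - alpha * g                                   # inner step w' = w - alpha * grad
g_meta = torch.autograd.grad(bce(w_prime @ z, y), w)[0]   # meta-gradient w.r.t. w
assert g_meta.norm().item() <= 1 + abs(alpha) + 1e-6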
4 THEORETICAL ANALYSIS
This section presents a theoretical understanding of our method. We first derive a generalization error bound on the target domain in Theorem 1 for the general setting of meta-learning for DG. Then, based on Theorem 1, we theoretically explain the reasons for the success of our method.
Without loss of generality, we consider the binary classification problem. We denote H as the set of all possible f, i.e., H = {f_w : w ∈ ℝ^M}, where M is the number of parameters. For any S_v ∈ Γ_ξ and S_t = S − S_v, we let H_{S_t} = {f_{θ(w)} : θ(w) = arg min_θ L(θ; S_t, w), w ∈ ℝ^M}. The meta-learning approach for DG is to find a function in H_{S_t} that minimizes the classification loss on S_v. Note that, although the training samples in S may be sampled from several distributions, they can still be seen as i.i.d. samples from a mixture of these distributions. We denote P = Σ_{d=1}^D β_d P_d as this mixture distribution, with β_d the sampling ratio of the d-th source domain, ε_Q^Ψ(f) = E_{(x,y)∼Q}[I_{{Ψ(f(x))≠y}}] as the generalization error on the distribution Q of the unseen target domain, ε̂_{S_v}^Ψ(f) = (1/|S_v|) Σ_{(x,y)∈S_v} I_{{Ψ(f(x))≠y}} as the empirical error on S_v, VC(H) as the VC-dimension of H, and Ψ(·) as the prediction rule such as the Bayes optimal predictor. Based on the domain adaptation theory (Ben-David et al., 2007; 2010) and inspired by the analysis in (Saito et al., 2019), we have the following theorem.

Theorem 1. Let γ be a constant and assume E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}]. Then, given any S_v ∈ Γ_ξ and S_t = S − S_v, for any δ ∈ (0, 1), with probability at least 1 − 2δ, we have, for all f ∈ H_{S_t},
ε_Q^{Ψ_l}(f) ≤ ε̂_{S_v}^{Ψ_l}(f) + B(S_v) + 2√( (8/ξ)(C_2 + 4/δ) ) + C_3,    (7)

where

B(S_v) = C_1 − inf_{f′∈H_{S_t}} (1/|S_v|) Σ_{(x,y)∈S_v} I_{{l(f′(x),y)>γ}},    (8)

C_1 = sup_{S′_v∈Γ_ξ} sup_{f′∈H_{S−S′_v}} E_Q[I_{{l(f′(x),y)>γ}}], C_2 = sup_{S′_v∈Γ_ξ} VC(H^{Ψ_l}_{S−S′_v}) log( 2eξ / VC(H^{Ψ_l}_{S−S′_v}) ), C_3 ≥ sup_{S′_v∈Γ_ξ} inf_{f′∈H_{S−S′_v}} { ε_P^{Ψ_l}(f′) + ε_Q^{Ψ_l}(f′) }, H^{Ψ_l}_{S−S_t} = {Ψ_l ∘ f : f ∈ H_{S−S_t}}, and Ψ_l is a loss-related indicator defined by

Ψ_l(f(x)) = 1 if l(f(x), y) > γ, and 0 otherwise.    (9)

Table 1: Results of MSDS experiment on PACS based on ResNet18 and ResNet50.

Backbone  Target  D-SAM  JiGen  MASF  MMLD  MetaReg  RSC   Baseline  DFAS (ours)
ResNet18  A       77.3   79.4   80.3  81.3  83.7     83.4  80.2±0.4  84.2±0.1
          C       72.4   75.3   77.2  77.2  77.2     80.3  75.5±0.5  79.5±0.3
          P       95.3   96.0   95.0  96.1  95.5     96.0  95.9±0.1  95.8±0.1
          S       77.8   71.4   71.7  72.3  70.3     80.9  70.1±0.9  82.1±0.4
          Avg     80.7   80.5   81.0  81.8  81.7     85.1  80.4      85.4
ResNet50  A       -      -      82.9  -     87.2     87.9  86.1±0.2  89.1±0.1
          C       -      -      80.5  -     79.2     82.2  79.2±0.4  84.6±0.2
          P       -      -      95.0  -     97.6     97.9  97.6±0.1  96.8±0.2
          S       -      -      72.3  -     70.3     83.5  70.3±0.7  85.6±0.3
          Avg     -      -      82.7  -     83.6     87.8  83.3      89.0
Proof is given in Appendix D. The assumption E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}] in Theorem 1 is realistic because the data of Q are not accessed at training time, so the learner trained on data of P should have smaller classification loss on P than on Q. In Theorem 1, C_1, C_2, C_3 are constants w.r.t. f. In Eq. (7), the generalization error ε_Q^{Ψ_l}(f) on Q is bounded by the empirical error ε̂_{S_v}^{Ψ_l}(f) on S_v, the term B(S_v) that measures the discrepancy between P and Q, and the last two constant terms in Eq. (7).
To obtain a lower ε_Q^{Ψ_l}(f), we need to minimize ε̂_{S_v}^{Ψ_l}(f) and B(S_v). Minimizing B(S_v) w.r.t. S_v is equivalent to

max_{S_v∈Γ_ξ} inf_{f∈H_{S_t}} (1/|S_v|) Σ_{(x,y)∈S_v} I_{{l(f(x),y)>γ}}.    (10)
Intuitively, Eq. (10) means finding an S_v ∈ Γ_ξ such that the infimum ratio of examples in S_v having loss greater than γ is maximized. This min-max problem of Eq. (10) for computing the error bound carries a similar idea to our min-max problem. Our adversarial splitting model in Eq. (2) can implicitly realize the goal of Eq. (10) while also ensuring a lower ε̂_{S_v}^{Ψ_l}(f) for any S_v.
The maximization in Eq. (10) corresponds to our adversarial splitting that finds the hardest val-subset S_v for the learner in Eq. (2). The infimum in Eq. (10) corresponds to the minimization of the loss in Eq. (2) on S_v w.r.t. the learner parameterized by θ(w). Instead of using the indicator function I as in Eq. (10), our model of Eq. (2) uses a differentiable classification loss for easier optimization.
5 EXPERIMENTS
We verify the effectiveness of our method in three types of experimental settings: Multi Source with Domain Shift (MSDS), in which the training data are from several source domains and there exists domain shift between training and test data; Single Source with Domain Shift (SSDS), in which the training data are from a single source domain and there exists domain shift between training and test data; and Same Source and Target Domain (SSTD), in which the training and test data are from the same single domain. The source codes will be released online.
We conduct experiments on three benchmark datasets. PACS (Li et al., 2017) contains four domains, art painting (A), cartoon (C), photo (P) and sketch (S), sharing seven classes. Office-Home (Volpi et al., 2018), a dataset widely used in domain adaptation and recently utilized in domain generalization, consists of four domains, Art (Ar), Clipart (Cl), Product (Pr) and Real World (Rw), sharing 65 classes. These two datasets are used for the experiments in the MSDS and SSDS settings. CIFAR-10 (Krizhevsky et al., 2009) is used for the SSTD setting.
5.1 TYPE I: MULTI SOURCE WITH DOMAIN SHIFT (MSDS)
In the setting of MSDS, following (Carlucci et al., 2019), we use leave-one-domain-out cross-validation, i.e., training on three domains and testing on the remaining unseen domain, on PACS and Office-Home. Note that the domain labels are not used during training. We adopt ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the corresponding network is taken as the feature extractor f^e. Full implementation details are reported in Appendix B.
We compare our method with several state-of-the-art methods, including D-SAM (D'Innocente & Caputo, 2018), JiGen (Carlucci et al., 2019), MASF (Dou et al., 2019), MMLD (Matsuura & Harada, 2020), MetaReg (Balaji et al., 2018) and RSC (Huang et al., 2020). The results on PACS and Office-Home are reported in Table 1 and Table 2 respectively. Our DFAS achieves state-of-the-art results with both ResNet18 and ResNet50 on both PACS (85.4%, 89.0%) and Office-Home (64.3%, 70.3%), outperforming RSC by 0.3% and 1.2% on PACS with ResNet18 and ResNet50 respectively, and by 1.2% on Office-Home with ResNet18 (these methods do not report Office-Home results with ResNet50). Compared with the Baseline that directly aggregates the three source domains and trains the learner with a standard fully-connected layer as classifier f^c, DFAS improves performance by 5.0% and 5.7% on PACS with ResNet18 and ResNet50 respectively, and by 2.1% and 2.1% on Office-Home with ResNet18 and ResNet50 respectively. In Table 1, on PACS, DFAS significantly outperforms the Baseline in all tasks except when P is the target domain. Note that, in the task where domain S, whose style is extremely different from that of the other three domains, is the target domain, DFAS boosts the accuracy of the Baseline by 12.0% and 15.3% with ResNet18 and ResNet50 respectively. This indicates that our method generalizes well when the domain shift is large. Office-Home is challenging for DG since its number of classes is larger than that of the other datasets. As shown in Table 2, DFAS outperforms the Baseline stably in almost all tasks on Office-Home. These performance improvements demonstrate the effectiveness of our method when the training data are from multiple source domains and the unseen target domain differs from the source domains.
5.2 TYPE II: SINGLE SOURCE WITH DOMAIN SHIFT (SSDS)
We conduct this type of experiment on PACS based on ResNet18. In this experiment, we train the learner on one domain and test on each of the remaining three domains, resulting in 12 tasks in total. Implementation details are given in Appendix B. Our method is compared with related methods, including the Baseline that directly trains the learner with a standard fully-connected layer as classifier f^c on the source domain, JiGen (Carlucci et al., 2019) and SagNet (Nam et al., 2019). Results are reported in Table 3. Our DFAS outperforms the Baseline and SagNet by 4.2% and 4.7% respectively. We observe that our method outperforms the Baseline in 10 of the 12 tasks. The performance boosts are large in the tasks where domain S is the source domain. These improvements demonstrate the effectiveness of our method when the training data are from a single source domain and the unseen target domain differs from it.
5.3 TYPE III: SAME SOURCE AND TARGET DOMAIN (SSTD)
We also apply our DG method to the common recognition task in which the training and test data are from the same domain, i.e., SSTD, on the CIFAR-10 dataset. To investigate the effect of training size, we sample different numbers of training data from the provided training set (i.e., the source domain). Implementation details are in Appendix B. As shown in Table 4, our DFAS outperforms Baseline, JiGen and MMLD by 2.4%, 3.6% and 4.3% respectively on average. The results of JiGen and MMLD are obtained by running their codes on CIFAR-10. We observe that DFAS outperforms the Baseline and the compared methods for all training-set sizes. In general, the performance boost is larger when the number of training data is smaller. This may be because the learner is more prone to overfitting with a smaller training set, while DFAS is designed to extract better generalizable features.
5.4 ABLATION STUDY
To further verify the effectiveness of each component of our method, we conduct additional ablation experiments on the PACS dataset based on ResNet18 in both the MSDS and SSDS settings. The results are reported in Table 5 and Table 6.

In Tables 5 and 6, L2-norm denotes the feature L2-normalization defined in Sect. 3.3. Rand-split denotes the random splitting strategy in which we randomly split the train/val subsets at each step of updating the parameters of the learner. Adv-split denotes the adversarial splitting model in which we update the worst-case splitting by solving the maximization problem in Eq. (5) once per epoch during training.
Effectiveness of L2-normalization. In Table 5, L2-norm (82.7%) outperforms Baseline (80.4%) by 2.3%, and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms Adv-split (83.6%) by 1.8% in the MSDS setting. In Table 6, L2-norm (63.1%) outperforms Baseline (62.4%) by 0.7%, and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms Adv-split (65.1%) by 1.5% in the SSDS setting.
These performance improvements demonstrate that the feature L2-normalization is useful in domain generalization.
Effectiveness of the adversarial splitting model. In Table 5, Adv-split (83.6%) outperforms Baseline (80.4%) by 3.2%, and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms L2-norm (82.7%) by 2.7% in the MSDS setting. In Table 6, Adv-split (65.1%) outperforms Baseline (62.4%) by 2.7%, and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms L2-norm (63.1%) by 3.5% in the SSDS setting. These results indicate that our proposed adversarial splitting model is effective.
Effectiveness of adversarial splitting over random splitting. In Table 5, L2-norm + Adv-split (85.4%) outperforms L2-norm + Rand-split (84.3%) by 1.1%, and Adv-split (83.6%) outperforms Rand-split (82.8%) by 0.8% in the MSDS setting. In Table 6, L2-norm + Adv-split (66.6%) outperforms L2-norm + Rand-split (65.4%) by 1.2%, and Adv-split (65.1%) outperforms Rand-split (64.1%) by 1.0% in the SSDS setting. These results demonstrate that the adversarial splitting model outperforms the random splitting strategy in different experimental settings.
Due to space limit, we add more ablation experiments in Appendix A.1 to further compare different splittings, including adversarial splitting, domain-label-based splitting and random splitting.
5.5 MITIGATING GRADIENT EXPLOSION BY L2-NORMALIZATION
To show that L2-normalization can mitigate gradient explosion, we run the same experiments independently 50 times each, with and without L2-normalization, and count the numbers of occurrences of gradient explosion, which are reported in Table 7. From Table 7, we observe that L2-normalization mitigates gradient explosion.
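As an illustration, gradient explosion events of this kind can be flagged with a simple check on the total gradient norm after each backward pass; the threshold and the check itself are our instrumentation, not a detail specified in the paper.

import math

def exploded(parameters, threshold=1e4):
    total = 0.0
    for p in parameters:
        if p.grad is not None:
            g = p.grad.norm().item()
            if not math.isfinite(g):   # NaN/Inf gradients count as explosion
                return True
            total += g ** 2
    return total ** 0.5 > threshold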
6 CONCLUSION
In this paper, we unify adversarial training and meta-learning in a novel Domain-Free Adversarial Splitting (DFAS) framework to tackle the general domain generalization problem. Extensive experiments show the effectiveness of the proposed method. We are interested in deeper theoretical understanding and more applications of our method in future work.
A ANALYSIS
A.1 COMPARISON OF ADVERSARIAL SPLITTING AND DOMAIN-LABEL-BASED SPLITTING
In this section, we compare our adversarial splitting with the domain-label-based splitting that is commonly used in meta-learning-based DG methods. Due to variations of style, poses, sub-classes, etc., the internal inconsistency within a dataset is complicated. Domain labels partially capture this inconsistency but cannot cover all possible internal inconsistency. Our adversarial splitting method does not rely on domain labels. It iteratively finds the train/val splitting that is hardest for the learner, to maximize the inconsistency, and trains the learner to generalize well for this hardest splitting, in an adversarial training manner. This strategy more flexibly explores the possible inconsistency within the training dataset, adaptively to the learner, and can potentially enhance the generalization ability of the learner.
We first empirically show in Table 8 that the domain-label-based splitting (denoted Label-split) is not as hard for the learner as our adversarial splitting (Adv-split). In Table 8, we report the values of the objective function in Eq. (5) for Adv-split and Label-split, fixing the learner with the network parameters w_i obtained at different epochs (1st, 2nd, 5th and 10th) of the training process. A larger value in the table indicates that the splitting is harder for the learner (i.e., the network). It can be observed that Label-split is not as hard for the learner as Adv-split.
We also conduct experiments on PACS in MSDS setting to fairly compare different splittings, including adversarial splitting (Adv-split), domain-label-based splitting (Label-split) and random splitting (Rand-split). The results are reported in Table 9. Table 9 shows that adversarial splitting outperforms random splitting and domain-label-based splitting when training data is from multiple domains.
When the training data are from only a single domain, our adversarial splitting also performs well (as in Table 3). However, domain-label-based splitting cannot be used in this setting, since there is no domain label available.
A.2 EFFECT OF HYPER-PARAMETERS
Effect of hyper-parameter ξ. In Fig. 1(a), we show the performance of our method when varying the hyper-parameter ξ, i.e., the size of the val-subset S_v in the adversarial splitting of the training dataset. The best result is obtained when ξ = |S|/2, and the results are similar when ξ/|S| ranges from 0.3 to 0.7.
Effect of hyper-parameter α. We evaluate the effect of α in the MSDS setting on the PACS dataset in Fig. 1(b). From Fig. 1(b), the accuracy is stable w.r.t. the value of α over the large range of 1e-6 to 1e-4. A small α results in a small step size for parameter updating in the meta-learning framework and limits the benefits from meta-learning and adversarial splitting. A larger α results in a larger step size for the gradient-descent-based network update, which may fail to decrease the training loss from the optimization perspective.
Effect of hyper-parameter m. The effect of m is evaluated in the MSDS setting on the PACS dataset in Fig. 1(c), which shows that the result is not sensitive to the value of m.
A.3 CONVERGENCE
We verify the convergence of DFAS with errors and losses in different tasks in Fig. 2. In Fig. 2(a) and Fig. 2(b), we show the classification error curves on the target domains (A and Ar respectively) in the MSDS setting. In Fig. 2(c), we show the training loss of DFAS in task A in the MSDS setting. These training curves indicate that DFAS converges in the training process. We also observe in Fig. 2(a) and Fig. 2(b) that DFAS is more stable than the Baseline.
A.4 COMPUTATIONAL COST OF ADVERSARIAL SPLITTING AND RANDOM SPLITTING
We compare the computational cost of adversarial splitting and random splitting in this section. Since we only update the worst-case splitting once per epoch, instead of at each parameter-update step, the computational cost is only slightly higher than that of random splitting. To show this, we compare the total training times of adversarial splitting and random splitting for the same number of steps (20000), as in Table 10.
From Table 10, the training time of Adv-split is only 5.6% (0.33/5.90) higher than that of Rand-split.
A.5 VISUALIZATION OF FEATURE SPACE
We visualize the feature space learned by our DFAS and by the Baseline (shown in Fig. 3) using t-SNE (Maaten & Hinton, 2008). DFAS appears to yield better separation of classes and better alignment of the distributions of the source and unseen target domains, which possibly explains the accuracy improvements achieved by DFAS.
B IMPLEMENTATION DETAILS
For the setting of MSDS, we use ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the corresponding network is taken as the feature extractor f^e. The dimension of the bottleneck layer is set to 512 when the backbone is ResNet18, as in (Saito et al., 2019), and to 256 when the backbone is ResNet50, as in (Gu et al., 2020). Following (Gu et al., 2020), s is set to 7.5 for PACS and 10.0 for Office-Home, and m is set to 0.2 for PACS and 0.1 for Office-Home. ξ is set to |S|/2. SGD with momentum 0.9 is used to update the parameters of the learner. The learning rate of the classifier and the bottleneck layer is 10 times that of the convolutional layers, which is widely adopted in domain adaptation (Long et al., 2015; Ganin et al., 2016). Following (Ganin et al., 2016), the learning rate of the convolutional layers is adjusted by η = 0.001/(1 + 10p)^0.75, where p is the optimization progress changing linearly from 0 to 1. The learning rate α of the inner-loop optimization is set to 10^-5. The parameters are updated for 20000 steps and the hardest val-subset is updated every 200 steps. The batch size is set to 64. The running mean and running variance of the Batch Normalization (BN) layers are fixed to the values pre-trained on ImageNet during training, as discussed in (Du et al., 2020a). Due to memory limits, when running experiments based on ResNet50, we adopt the first-order approximation (Finn et al., 2017) that stops the gradient of g_w^t in Eq. (4) to reduce memory and computational cost.
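For clarity, the learning-rate schedule quoted above can be written as the following small helper, where p is the training progress increasing linearly from 0 to 1 (the function name is ours):

def lr_schedule(p, lr0=0.001):
    return lr0 / (1 + 10 * p) ** 0.75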
For the setting of SSDS, we conduct experiments based on ResNet18 on PACS. The implementation details are the same as for MSDS. For the setting of SSTD, we conduct experiments based on ResNet18 on CIFAR-10. The hyper-parameters s and m are set to 8.0 and 0.2 respectively. Other implementation details are the same as for MSDS, except that the running mean and running variance of the BN layers are updated.
We implement all experiments using PyTorch (Paszke et al., 2019) on a single NVIDIA Tesla P100 GPU.
C OPTIMIZATION ALGORITHM FOR FINDING THE HARDEST Sv
C.1 OPTIMIZATION ALGORITHM
To solve the problem of
max_{S_v, A} Σ_{(x,y)∈S_v} l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩   s.t. A = g_w^t, S_v ∈ Γ_ξ,    (11)
we design an alternating iteration algorithm in Sect. 3.2. Specifically, we initialize A with the gradient of a sample randomly selected from S. Then we alternately update S_v and A.
Given A, Sv is updated by solving
max_{S_v} Σ_{(x,y)∈S_v} l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩   s.t. S_v ⊂ S, |S_v| = ξ,    (12)
where the constraints are derived from the definition of Γ_ξ. Equation (12) indicates that the optimal S_v consists of the ξ samples that have the largest values of l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩. Thus we compute and rank the values of l(f_w(x), y) − α ⟨∇_w l(f_w(x), y), A⟩ for all (x, y) ∈ S and select the largest ξ samples to constitute S_v.
Given S_v (S_t is then given), we update A to satisfy the constraint A = g_w^t in Eq. (11), i.e.,

A = g_w^t = (1/|S_t|) Σ_{(x,y)∈S_t} ∇_w l(f_w(x), y).    (13)
C.2 CONVERGENCE IN EXPERIMENTS
We show empirically the convergence of this alternating iteration algorithm in Fig. 4, using the values of the objective function in Eq. (11). Figure 4 shows that the value of the objective function converges after only a few iterations.
We also check whether the splitting changes once the value of the objective function has converged. To do this, we count the ratio of changed sample indexes in S_v at each iteration, as in Table 11. Table 11 shows that the splitting does not change once the value of the objective function has converged.
C.3 TOY EXAMPLE.
We present a toy example in this section to check whether the algorithm can find the optimal solution. The toy example is a 2-dimensional classification problem, as shown in Fig. 5. Different colors indicate different classes. A fully-connected layer without bias is used as the network (learner). We split the data of the first class (blue points) with our algorithm. The candidate solutions with their corresponding objective function values are given in Table 12.
The solutions in the iteration process of our algorithm are reported in Table 13. The solutions converge to (1,3,4), which is the optimal solution in Table 12. This indicates that our algorithm can find the optimal splitting for the toy example. We also give the code of this toy example below, and the reader may rerun it to verify the results.
Code of the toy example:
import torch
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# generate and plot the 2-dimensional toy data
np.random.seed(1)
data = make_blobs(n_samples=12, centers=2)
x1 = data[0][data[1] == 0]
x2 = data[0][data[1] == 1]

plt.plot(x1[:, 0], x1[:, 1], '*')
plt.plot(x2[:, 0], x2[:, 1], '*')
t = 0
for x in x1:
    plt.text(x[0], x[1], str(t))
    t += 1
plt.savefig('samples.pdf')

# a fully-connected layer without bias is used as the learner
model = torch.nn.Linear(2, 2, bias=False).cuda()
torch.nn.init.xavier_uniform_(model.weight)
log = open('log.txt', 'w')
log.write('weight:' + str(model.weight.cpu().data.numpy()) + '\n')

feat = torch.Tensor(data[0]).cuda()
label = torch.LongTensor(data[1]).cuda()
loss_fun = torch.nn.CrossEntropyLoss()

# per-sample losses and per-sample gradients w.r.t. the weight
Grads = []
Loss = []
for i in range(len(feat)):
    out = torch.nn.functional.linear(feat[i].view(1, -1), weight=model.weight, bias=None)
    loss_x = loss_fun(out, label[i].view(-1,))
    grad = torch.autograd.grad(loss_x, model.weight)[0]
    Loss.append(loss_x.cpu().data.numpy())
    Grads.append(grad.cpu().data.numpy())

# we split the data of the first class
Loss = np.array(Loss)[data[1] == 0]
Grads = np.array(Grads)[data[1] == 0]
alpha = 0.001

def adv_loss(val_index=[0, 1, 2]):
    # objective function value of Eq. (11) for a given val-subset
    train_index = np.delete(np.arange(len(Loss)), val_index)
    loss = np.mean(Loss[val_index])
    grad_val = np.mean(Grads[val_index], axis=0)
    grad_train = np.mean(Grads[train_index], axis=0)
    return loss - alpha * np.sum(grad_val * grad_train)

# brute-force search over all candidate splittings
Solutions = []
Values = []
for i in range(len(Loss) - 2):
    for j in range(i + 1, len(Loss) - 1):
        for k in range(j + 1, len(Loss)):
            Solutions.append([i, j, k])
for val_index in Solutions:
    Values.append(adv_loss(val_index))

optimal_idx = np.array(Values).argmax()
optimal_solution = Solutions[optimal_idx]
optimal_value = max(Values)
print("all possible solutions:", Solutions)
print("objective function values of possible solutions:", Values)
print("optimal solution:", optimal_solution,
      "\nobjective function value of optimal solution:", optimal_value)
log.write(str({"all possible solutions": Solutions,
               "objective function values of possible solutions": Values,
               "optimal solution": optimal_solution,
               "objective function value of optimal solution": optimal_value}))

# our alternating iteration algorithm
A = Grads[np.random.randint(len(Loss))]
values_tmp = []
solutions_tmp = []
mtr_index = np.random.choice(np.arange(len(Loss)), size=len(Loss) // 2, replace=False)
l = np.mean(Loss - alpha * np.sum(Grads * A.reshape((1, 2, 2)), axis=(1, 2)))
values_tmp.append(l)
solutions_tmp.append(np.delete(np.arange(len(Loss)), mtr_index))
for i in range(5):
    D = np.sum(Grads * A, axis=(1, 2))
    Loss_ = Loss - alpha * D
    idx_sort = np.argsort(Loss_)
    mtr_index = idx_sort[:len(Loss) // 2]
    mte_index = idx_sort[len(Loss) // 2:]
    values_tmp.append(Loss_[mte_index].mean())
    solutions_tmp.append(mte_index)
    A = np.mean(Grads[mtr_index], axis=0)

print("our optimal solution:", solutions_tmp[-1],
      "\nobjective function value of our optimal solution:", values_tmp[-1])
log.write(str({"our optimal solution": solutions_tmp[-1],
               "objective function value of our optimal solution": values_tmp[-1]}))
print(solutions_tmp, values_tmp)
log.close()
D PROOF OF THEOREM 1
We first introduce the VC-dimension-based generalization bound and domain adaptation theory in Appendix D.1, then present two lemmas in Appendix D.2, and finally give the proof of Theorem 1 in Appendix D.3.
D.1 PRELIMINARY
VC-dimension-based generalization bound.

Theorem A-1. (Abu-Mostafa et al., 2012) Let S be a set of training data i.i.d. sampled from distribution P. For any δ ∈ (0, 1), with probability at least 1 − δ, we have, for all h (h : X → {0, 1}) in hypothesis space H,

|ε_P(h) − ε̂_S(h)| ≤ √( (8/|S|)( VC(H) log(2e|S|/VC(H)) + 4/δ ) ),    (14)

where ε_P(h) = E_{(x,y)∼P}[I_{{h(x)≠y}}] and ε̂_S(h) = (1/|S|) Σ_{(x,y)∈S} I_{{h(x)≠y}}.
Domain adaptation theory.
Theorem A-2. (Ben-David et al., 2007; 2010) For any h in hypothesis space H, we have

ε_Q(h) ≤ ε_P(h) + (1/2) d_H(P, Q) + λ*,    (15)

where λ* ≥ inf_{h′∈H} {ε_P(h′) + ε_Q(h′)} and

d_H(P, Q) = 2 sup_{h∈H} |E_P[h = 1] − E_Q[h = 1]|    (16)

is the H-divergence.
D.2 LEMMAS
Lemma A-1. For any S_v ∈ Γ_ξ and S_t = S − S_v, for all δ ∈ (0, 1), with probability at least 1 − δ, we have, for all f ∈ H_{S_t},

|ε_P^Ψ(f) − ε̂_{S_v}^Ψ(f)| ≤ √( (8/|S_v|)( VC(H^Ψ_{S_t}) log(2e|S_v|/VC(H^Ψ_{S_t})) + 4/δ ) ),    (17)

where ε_P^Ψ(f) = E_{(x,y)∼P}[I_{{Ψ(f(x))≠y}}] is the generalization error on distribution P, ε̂_{S_v}^Ψ(f) = (1/|S_v|) Σ_{(x,y)∈S_v} I_{{Ψ(f(x))≠y}} is the empirical error, H^Ψ_{S_t} = {Ψ ∘ f : f ∈ H_{S_t}}, VC(H^Ψ_{S_t}) is the VC-dimension of H^Ψ_{S_t}, and Ψ(·) is the prediction rule such as the Bayes optimal predictor, i.e., Ψ(f(x)) = I_{{f(x) ≥ 1/2}}.
Proof:
From the definition of H^Ψ_{S_t}, for any f ∈ H_{S_t} there exists an h_f ∈ H^Ψ_{S_t} such that h_f = Ψ ∘ f. Applying Theorem A-1, with probability at least 1 − δ, we have, for all f ∈ H_{S_t},

|ε_P^Ψ(f) − ε̂_{S_v}^Ψ(f)| = |ε_P(h_f) − ε̂_{S_v}(h_f)|
                        ≤ √( (8/|S_v|)( VC(H^Ψ_{S_t}) log(2e|S_v|/VC(H^Ψ_{S_t})) + 4/δ ) ).    (18)
Lemma A-2. For any S_v ∈ Γ_ξ and S_t = S − S_v, let ε_P^Ψ(g) = inf_{f∈H_{S_t}} ε_P^Ψ(f) and ε̂_{S_v}^Ψ(h) = inf_{f∈H_{S_t}} ε̂_{S_v}^Ψ(f). Then, for all δ ∈ (0, 1), with probability at least 1 − δ, we have

ε_P^Ψ(g) ≥ ε̂_{S_v}^Ψ(h) − √( (8/|S_v|)( VC(H^Ψ_{S_t}) log(2e|S_v|/VC(H^Ψ_{S_t})) + 4/δ ) ).    (19)
Proof:
From the definitions of g and h, we have ε̂_{S_v}^Ψ(g) ≥ ε̂_{S_v}^Ψ(h). For all δ ∈ (0, 1), with probability at least 1 − δ, we have

ε_P^Ψ(g) − ε̂_{S_v}^Ψ(h) = ε_P^Ψ(g) − ε̂_{S_v}^Ψ(g) + ε̂_{S_v}^Ψ(g) − ε̂_{S_v}^Ψ(h)
                       ≥ ε_P^Ψ(g) − ε̂_{S_v}^Ψ(g)
                       ≥ −√( (8/|S_v|)( VC(H^Ψ_{S_t}) log(2e|S_v|/VC(H^Ψ_{S_t})) + 4/δ ) ).    (20)
Thus, Eq. (19) holds.
D.3 PROOF OF THEOREM 1
Proof:
We denote by H^{Ψ_l} the hypothesis space such that, for all h ∈ H^{Ψ_l},

h(x) = Ψ_l(f(x)) = 1 if l(f(x), y) > γ, and 0 otherwise,    (21)
for f ∈ H. Then
d_{H^{Ψ_l}}(P, Q) = 2 sup_{h∈H^{Ψ_l}} |E_P[h = 1] − E_Q[h = 1]|
                = 2 sup_{f∈H} |E_P[Ψ_l(f(x)) = 1] − E_Q[Ψ_l(f(x)) = 1]|
                = 2 sup_{f∈H} |E_P[I_{{l(f(x),y)>γ}}] − E_Q[I_{{l(f(x),y)>γ}}]|
                = 2 sup_{f∈H} { E_Q[I_{{l(f(x),y)>γ}}] − E_P[I_{{l(f(x),y)>γ}}] }
                ≤ 2 sup_{f∈H} E_Q[I_{{l(f(x),y)>γ}}] − 2 inf_{f∈H} E_P[I_{{l(f(x),y)>γ}}].    (22)
In the fourth equality, we use the assumption that E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}]. Given any S_v ∈ Γ_ξ and S_t = S − S_v, we replace H by H_{S_t}; then

d_{H^{Ψ_l}_{S_t}}(P, Q) ≤ 2 sup_{f∈H_{S_t}} E_Q[I_{{l(f(x),y)>γ}}] − 2 inf_{f∈H_{S_t}} E_P[I_{{l(f(x),y)>γ}}]
                      ≤ 2C_1 − 2 inf_{f∈H_{S_t}} E_P[I_{{l(f(x),y)>γ}}],    (23)
where C_1 = sup_{S′_v∈Γ_ξ} sup_{f∈H_{S−S′_v}} E_Q[I_{{l(f(x),y)>γ}}]. Applying Theorem A-2, for any f ∈ H_{S_t} we have

ε_Q^{Ψ_l}(f) ≤ ε_P^{Ψ_l}(f) + C_1 − inf_{f′∈H_{S_t}} E_P[I_{{l(f′(x),y)>γ}}] + λ*(S_v),    (24)
where λ*(S_v) ≥ inf_{f′∈H_{S−S_v}} {ε_P^{Ψ_l}(f′) + ε_Q^{Ψ_l}(f′)}. Letting C_3 = sup_{S′_v∈Γ_ξ} λ*(S′_v) ≥ sup_{S′_v∈Γ_ξ} inf_{f′∈H_{S−S′_v}} {ε_P^{Ψ_l}(f′) + ε_Q^{Ψ_l}(f′)}, we have

ε_Q^{Ψ_l}(f) ≤ ε_P^{Ψ_l}(f) + C_1 − inf_{f′∈H_{S_t}} E_P[I_{{l(f′(x),y)>γ}}] + C_3.    (25)
Applying Lemma A-1 to the first term on the right side of Eq. (25), for all δ ∈ (0, 1), with probability at least 1 − δ, we have, for all f ∈ H_{S_t},

ε_P^{Ψ_l}(f) ≤ ε̂_{S_v}^{Ψ_l}(f) + √( (8/|S_v|)( VC(H^{Ψ_l}_{S_t}) log(2e|S_v|/VC(H^{Ψ_l}_{S_t})) + 4/δ ) ).    (26)
Applying Lemma A-2 to the third term on the right side of Eq. (25), for all δ ∈ (0, 1), with probability at least 1 − δ, we have

inf_{f′∈H_{S_t}} E_P[I_{{l(f′(x),y)>γ}}] ≥ inf_{f′∈H_{S_t}} (1/|S_v|) Σ_{(x,y)∈S_v} I_{{l(f′(x),y)>γ}}
                                        − √( (8/|S_v|)( VC(H^{Ψ_l}_{S_t}) log(2e|S_v|/VC(H^{Ψ_l}_{S_t})) + 4/δ ) ).    (27)
Combining Eqs. (25), (26), (27) and applying the union bound, for any δ ∈ (0, 1), with probability at least 1 − 2δ, we have, for all f ∈ H_{S_t},

ε_Q^{Ψ_l}(f) ≤ ε̂_{S_v}^{Ψ_l}(f) + B(S_v) + 2√( (8/|S_v|)( VC(H^{Ψ_l}_{S_t}) log(2e|S_v|/VC(H^{Ψ_l}_{S_t})) + 4/δ ) ) + C_3,    (28)

where B(S_v) = C_1 − inf_{f′∈H_{S_t}} (1/|S_v|) Σ_{(x,y)∈S_v} I_{{l(f′(x),y)>γ}}. Using the fact that |S_v| = ξ and letting C_2 = sup_{S′_v∈Γ_ξ} VC(H^{Ψ_l}_{S−S′_v}) log(2eξ/VC(H^{Ψ_l}_{S−S′_v})), we have
ε_Q^{Ψ_l}(f) ≤ ε̂_{S_v}^{Ψ_l}(f) + B(S_v) + 2√( (8/|S_v|)( C_2 + 4/δ ) ) + C_3.    (29)

1. What is the focus of the paper regarding domain generalization?
2. What are the strengths of the proposed approach, particularly in its novelty and theoretical bounds?
3. What are the weaknesses of the paper, especially regarding its convergence issues and incremental contribution?
4. How does the reviewer assess the performance gain of the proposed method, and what are their concerns?

Review
########################################################################## Summary:
The paper proposes a new approach for domain generalization which minimizes the generalization error across the train-validation split with the largest domain gap. The paper further gives a theoretical bound on the generalization error of the proposed method.
########################################################################## Pros:
The paper proposes a new approach for domain generalization. The idea of performing meta-learning from the train to validation split is already employed by previous works (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019). The novelty of the paper falls mostly on performing meta-learning on the train-validation split with the largest domain gap.
The paper gives a theoretical bound on the generalization error on the unseen target domain.
########################################################################## Cons:
The paper needs to update the model parameters, the initialized parameters and the train-validation split, which may not converge or may converge very slowly. Though empirical results show that the convergence of the method is fast, some theoretical analysis is needed for the convergence speed of the different updates.
The contribution of the paper is a little incremental. Meta-learning based domain generalization methods are not novel. The only novelty of the paper is performing meta-learning on the train-validation split with the largest domain gap, which is an incremental contribution over meta-learning based domain generalization (demonstrated by the ablation study).
As shown in Table 1, Baseline w/L2 outperforms Baseline by 2.3% but DFAS outperforms Baseline w/L2 only by 2.7%. This means that the main performance gain is from L2 while the other contributions have very small performance gain (less than 1%). However, L2-normalization (Finn et al., 2017; Dou et al., 2019) is a widely-used technique for meta-learning and domain generalization, which is not counted as a novel contribution of the paper. The authors need to further demonstrate that the main contribution, meta-learning on the train-validation split with the largest domain gap, yields a substantial performance gain.
########################################################################## I lean toward rejecting the paper since the performance gain comes mostly from the L2-normalization, while the main contribution of the paper, performing meta-learning across the train-val split with the largest domain gap, does not bring much performance gain.
ICLR | Title
Domain-Free Adversarial Splitting for Domain Generalization
Abstract
Domain generalization is an approach that utilizes several source domains to train the learner to be generalizable to unseen target domain to tackle domain shift issue. It has drawn much attention in machine learning community. This paper aims to learn to generalize well to unseen target domain without relying on the knowledge of the number of source domains and domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model the domain generalization as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem which can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize domain shift between them using current learner, and then updates the learner on this splitting to be able to generalize well from train-subset to val-subset using meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results and confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method.
N/A
Domain generalization is an approach that utilizes several source domains to train the learner to be generalizable to unseen target domain to tackle domain shift issue. It has drawn much attention in machine learning community. This paper aims to learn to generalize well to unseen target domain without relying on the knowledge of the number of source domains and domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model the domain generalization as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem which can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize domain shift between them using current learner, and then updates the learner on this splitting to be able to generalize well from train-subset to val-subset using meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results and confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method.
1 INTRODUCTION
Deep learning approach has achieved great success in image recognition (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). However, deep learning methods mostly succeed in the case that the training and test data are sampled from the same distribution (i.e., the i.i.d. assumption). However, this assumption is often violated in real-world applications since the equipments/environments that generate data are often different in training and test datasets. When there exists distribution difference (domain shift (Torralba & Efros, 2011)) between training and test datasets, the performance of trained model, i.e., learner, will significantly degrade.
To tackle the domain shift issue, domain adaptation approach (Pan & Yang, 2010; Daume III & Marcu, 2006; Huang et al., 2007) learns a transferable learner from source domain to target domain. Domain adaptation methods align distributions of different domains either in feature space (Long et al., 2015; Ganin et al., 2016) or in raw pixel space (Hoffman et al., 2018), which relies on unlabeled data from target domain at training time. However, in many applications, it is unrealistic to access the unlabeled target data, therefore this prevents us to use domain adaptation approach in this setting, and motivates the research on the learning problem of domain generalization.
Domain generalization (DG) approach (Blanchard et al., 2011; Muandet et al., 2013) commonly uses several source domains to train a learner that can generalize to an unseen target domain. The underlying assumption is that there exists a latent domain invariant feature space across source domains and unseen target domain. To learn the domain invariant features, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b) explicitly align distributions of different source domains in feature space. (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019) split source domains into meta-train and meta-test to simulate domain shift and train learner in a meta-learning approach. (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Ryu et al., 2020) augment images or features to train learner to enhance generalization capability.
Conventional domain generalization methods assume that the domain labels are available. But in a more realistic scenario, the domain labels may be unknown (Wang et al., 2019). To handle this domain-free setting, Carlucci et al. (2019) combines supervised learning and self-supervised learning to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and trains a domain invariant feature extractor via adversarial training. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Another line of works (Volpi et al., 2018; Qiao et al., 2020) tackle the single source setting that the training set comprises a single domain, and the train and test data are from different domains.
In this work, we focus on a general learning scenario of domain generalization as follows. First, we do not know the domain label of each data and do not assume that there are several domains in the training dataset. Second, we do not assume that the training and test data are from different domains (e.g., styles). However, the previous domain-free DG methods (Matsuura & Harada, 2020) commonly evaluate on the datasets (e.g., PACS) composed of several domains though they do not use domain labels in training.
In our domain-free setting, we do not assume and know the domains in the training dataset, we therefore model domain generalization as a learning problem that the learner should be able to generalize well for any splitting of train/val subsets, i.e., synthetic source/target domains, over the training dataset. This explicitly enforces that the trained learner should be generalizable for any possible domain shifts within the training dataset.
To achieve this goal, we propose an adversarial splitting model that is a min-max optimization problem, due to the difficulty of enumerating all splittings. In this min-max problem, we adversarially split training dataset to train/val subsets by maximizing the domain shift between them based on the given learner, and then update learner by minimizing the prediction error on val-subset using meta-learning approach given the splitting. By optimizing this min-max problem, we enforce the learner to generalize well even in the worst-case splitting. We also investigate L2-normalization of features in our domain generalization method. It is surprisingly found that L2-normalization can improve performance of learner and mitigate gradient explosion in the meta-learning process of DG. We further theorectically analyze the underlying reasons for this finding. This proposed domain generalization approach is dubbed Domain-Free Adversarial Splitting, i.e., DFAS.
To verify the effectiveness of our method, we conduct extensive experiments on benchmark datasets of PACS, Office-Home and CIFAR-10 under different settings with multiple/single source domains. In experiments that the training data are from several source domains, our method achieves state-ofthe-art results on both PACS and Office-Home datasets. We also find that our method significantly outperforms baselines in experiments that the training data are from a single source domain on PACS and CIFAR-10. We also confirm the effectiveness of our method by ablation study.
Based on domain adaptation theory, we also derive an upper bound of the generalization error on unseen target domain. We analyze that the terms in this upper bound are implicitly minimized by our method. This theoretical analysis partially explains the success of our method.
2 RELATED WORKS
We summarize and compare with related domain generalization (DG) methods in two perspectives, i.e., DG with domain labels and DG without domain labels.
DG with domain labels. When the domain labels are available, there are three categories of methods for DG. First, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b; Piratla et al., 2020) learn domain invariant features by aligning feature distributions or by common/specific feature decomposition. Second, (Li et al., 2019a; Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019; Du et al., 2020a;b) are based on meta-learning approach that splits given source domains into meta-train and meta-test domains and trains learner in an episodic training paradigm. Third, (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Wang et al., 2020) augment fake domain data to train learner for enhancing generalization capability of learner. Our method may mostly relate to the above second category of methods. But differently, we consider the DG problem in domain-free setting and adversarially split training dataset to synthesize domain shift in a principled min-max optimization method, instead of using leave-one-domain-out splitting in these methods.
DG without domain labels. When the domain label is unavailable, to enhance generalization ability of learner, Wang et al. (2019) extracts robust feature representation by projecting out superficial patterns like color and texture. Carlucci et al. (2019) proposes to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and learns domain invariant features via adversarial training of feature extractor and domain discriminator. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Volpi et al. (2018) and Qiao et al. (2020) propose adversarial data augmentation to tackle the setting that the training set comprises a single domain. In methodology, these methods either explicitly force the learner to extract robust features (Wang et al., 2019; Matsuura & Harada, 2020; Huang et al., 2020) or augment new data to increase training data (Carlucci et al., 2019; Qiao et al., 2020; Volpi et al., 2018). While our method is a novel meta-learning approach for DG by introducing adversarial splitting of training dataset during training, without relying on data/domain augmentation.
3 METHOD
In our setting, since we do not assume and know the domains in the training dataset, the training data could be independently sampled from several underlying source domains or just from a single source domain. We denote S = {(xi, yi)}Ni=1 as the training dataset. Our goal is to train the learner with S that can generalize well on an unseen target domain.
In the following sections, we introduce details of our proposed model in Sect. 3.1, followed by its optimization method in Sect. 3.2. We also investigate L2-normalization for domain generalization in Sect. 3.3. Theoretical analysis for our method is presented in Sect. 4. Experimental results are reported in Sect. 5. Sect. 6 concludes this paper.
3.1 DOMAIN-FREE ADVERSARIAL SPLITTING MODEL
As mentioned in Sect. 1, we model DG as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. The learner is trained using meta-learning approach (Finn et al., 2017). To formulate our idea mathematically, we first introduce some notations. We denote f as a function/learner (f could be a deep neural network, e.g., ResNet (He et al., 2016)) that outputs classification score of the input image, l as the loss such as cross-entropy, St and Sv as the train-subset and val-subset respectively such that S = St ∪ Sv and St ∩ Sv = ∅. The formulated optimization problem for domain generalization is
min w
1 |Γξ| ∑ Sv∈Γξ L(θ(w);Sv) +R(w)
s.t. θ(w) = arg min θ L(θ;St, w), St = S − Sv.
(1)
In Eq. (1), Γξ = {Sv : Sv ⊂ S, |Sv| = ξ} is the set of all possible val-subsets of S with length of ξ, St = S − Sv is train-subset paired with each Sv , L(θ(w);Sv) = 1|Sv| ∑ (x,y)∈Sv l(fθ(w)(x), y) is the loss on Sv, where θ(w) is the parameters of f , L(θ;St, w) is L(θ;St) with θ initialized by w andR(w) is regularization term. In the optimization model of Eq. (1), the parameter θ(w) of learner trained on St is treated as a function of the initial parameter w. To force the learner trained on St to generalize well on Sv, we directly minimize the loss L(θ(w);Sv), dubbed generalization loss, on val-subset Sv, w.r.t. the parameter θ(w) trained on St. Solving Eq. (1) will force the learner to be able to generalize well from any train-subset to corresponding val-subset.
Since |Γξ| may be extremely large, it is infeasible to enumerate all possible train/val splittings. Thus, we propose the following adversarial splitting model instead,
min w max Sv∈Γξ
L(θ(w);Sv) +R(w)
s.t. θ(w) = arg min θ L(θ;St, w), St = S − Sv.
(2)
In the min-max problem of Eq. (2), the train/val (St/Sv) splitting is optimized to maximize the generalization loss to increase the domain shift between train and val subsets by finding the hardest splitting to the learner. While w is optimized by minimizing the generalization loss of learner over
the splitting. Solving this adversarial splitting optimization model in Eq. (2) enforces the learner to be generalizable even for the worst-case splitting. We therefore expect that the trained learner is robust to the domain shifts within the training dataset. For the regularization termR(w), we set it to be the training loss on St (i.e.,R(w) = L(w;St)), which additionally constrains that the learner with parameter w should be effective on St (Li et al., 2018a). The effect of the hyper-parameter ξ will be discussed in Appendix A.2.
In conventional adversarial machine learning, adversarial training is imposed on adversarial samples and learner to increase robustness of the learner to adversarial corruption (Goodfellow et al., 2015). While in our optimization model of Eq. (2), adversarial training is conducted on data splitting and learner to force the learner to be robust to domain shift between train/val subsets. Our model bridges adversarial training and meta-learning. It is a general learning framework for domain generalization and is a complement to adversarial machine learning.
3.2 OPTIMIZATION
This section focuses on the optimization of Eq. (2). Since Eq. (2) is a min-max optimization problem, we alternately update Sv and w by fixing the other one as known. We should also consider the inner loop for optimization of θ(w) in the bi-layer optimization problem of Eq. (2). We next discuss these updating steps in details. The convergence and computational cost of this algorithm will be also discussed in Appendix A.3 and A.4 respectively.
Inner loop for optimization of θ(w). We adopt finite steps of gradient descent to approximate the minimizer θ(w) of the inner objective L(θ;St) with initial value w. This approximation technique has been introduced in machine learning community several years ago (Sun & Tappen, 2011; Finn et al., 2017; Fan et al., 2018). For convenience of computation, following (Li et al., 2018a; Dou et al., 2019), we only conduct gradient descent by one step as
θ(w) = w − α∇θL(θ;St)|θ=w, (3)
where α is the step size of inner optimization and its effect will be discussed in Appendix A.2.
Optimizingw with fixed Sv . For convenience, we denote gtw = ∇θL(θ;St)|θ=w. Fixing Sv (St is then fixed), w can also be updated by gradient descent, i.e.,
w = w − η∇w ( L(w − αgtw;Sv) +R(w) ) , (4)
where η is the step size of outer optimization.
Finding the hardest splitting S_v with fixed w. Fixing $w$, to find $S_v \in \Gamma_\xi$ that maximizes $L(w - \alpha g^t_w; S_v)$, we take the first-order Taylor expansion $L(w - \alpha g^t_w; S_v) \approx L(w; S_v) - \alpha\langle g^t_w, g^v_w\rangle$, where $g^v_w = \nabla_\theta L(\theta; S_v)|_{\theta=w}$ and $\langle\cdot,\cdot\rangle$ denotes the inner product. From the definitions of $L$, $g^t_w$ and $g^v_w$, the optimization problem $\max_{S_v \in \Gamma_\xi}\{L(w; S_v) - \alpha\langle g^t_w, g^v_w\rangle\}$ can be written as $\max_{S_v \in \Gamma_\xi}\frac{1}{|S_v|}\sum_{(x,y)\in S_v}\left[l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), g^t_w\rangle\right]$. This problem is equivalent to the following splitting formulation:

$$\max_{S_v, A} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), A\rangle \quad \text{s.t.} \quad A = g^t_w, \; S_v \in \Gamma_\xi, \tag{5}$$
where we introduce an auxiliary variable $A$. Eq. (5) can be solved by alternately updating $S_v$ and $A$. Given $A$, we compute and rank the values of $l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), A\rangle$ for all $(x, y) \in S$ and select the $\xi$ samples with the largest values to constitute $S_v$. Given $S_v$ ($S_t$ is then given), we update $A$ by $A = g^t_w = \frac{1}{|S_t|}\sum_{(x,y)\in S_t}\nabla_w l(f_w(x), y)$. We discuss the details and convergence of this alternating iteration in Appendix C. Since computing the gradient w.r.t. all parameters is time- and memory-consuming, we only compute the gradient w.r.t. the parameters of the final layer of the learner $f$.
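A sketch of this alternation is given below; the per-sample losses and final-layer gradients, which in practice come from the current learner, are random stand-ins here.

import numpy as np

# Synthetic stand-ins for per-sample losses l(f_w(x_i), y_i) and per-sample
# gradients of l w.r.t. the final-layer parameters (flattened to length P).
rng = np.random.default_rng(0)
N, P, xi, alpha = 100, 64, 50, 1e-3
losses = rng.random(N)
grads = rng.standard_normal((N, P))

A = grads[rng.integers(N)]            # initialize A with one sample's gradient
for _ in range(5):                    # alternate the S_v- and A-updates
    # Given A: rank l - alpha * <grad, A> and take the top-xi samples as S_v.
    scores = losses - alpha * grads @ A
    order = np.argsort(scores)
    val_idx, train_idx = order[-xi:], order[:-xi]
    # Given S_v: set A = g_w^t, the mean gradient over the train-subset S_t.
    A = grads[train_idx].mean(axis=0)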
3.3 L2-NORMALIZATION FOR EXTRACTED FEATURE
L2-normalization has been used in face recognition (Liu et al., 2017; Wang et al., 2018) and domain adaptation (Saito et al., 2019; Gu et al., 2020), but is rarely investigated in domain generalization; we investigate it here. Surprisingly, we find in experiments that L2-normalization not only improves the performance of the learner (see Sect. 5.4), but also mitigates the gradient explosion (see Sect. 5.5) that occurs frequently during the training of meta-learning for DG (Finn et al., 2017; Dou et al., 2019). We next describe the details of L2-normalization in our method and analyze why it mitigates gradient explosion.
Feature L2-normalization. The L2-normalization is used as a component of our learner $f$. Specifically, we decompose $f$ into a feature extractor $f^e$ (e.g., the convolutional layers of ResNet), the transform $f^n$ representing L2-normalization, and a classifier $f^c$, i.e., $f = f^c \circ f^n \circ f^e$. The feature of an input image $x$ extracted by $f^e$ is fed to $f^n$ to output a unit vector $z$, i.e., $z = f^n(f^e(x)) = \frac{f^e(x)}{\|f^e(x)\|}$.
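In PyTorch, the transform $f^n$ is a single normalize call; a minimal sketch, with a linear layer as a placeholder for the convolutional backbone $f^e$:

import torch
import torch.nn.functional as F

feature_extractor = torch.nn.Linear(128, 64)   # placeholder for f^e
x = torch.randn(4, 128)
z = F.normalize(feature_extractor(x), dim=1)   # f^n: z = f^e(x) / ||f^e(x)||
print(z.norm(dim=1))                           # all ones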
The classifier $f^c$ consists of unit weight vectors $W = [w_1, w_2, \cdots, w_K]$, where $K$ is the number of classes and $\|w_k\| = 1, \forall k$. $f^c$ takes $z$ as input and outputs the classification score vector $\sigma_{m,s}(W^T z)$, where $\sigma_{m,s}(\cdot)$ is the marginal softmax function defined by

$$[\sigma_{m,s}(W^T z)]_k = \frac{\exp(s(w_k^T z - m I_{\{k=y\}}))}{\sum_{k'=1}^{K} \exp(s(w_{k'}^T z - m I_{\{k'=y\}}))}, \quad k = 1, 2, \cdots, K, \tag{6}$$

where $y$ is the label of $x$, $[\cdot]_k$ denotes the $k$-th element, $I_{\{a\}}$ is the indicator function that returns 1 if $a$ is true and 0 otherwise, and $m$ and $s$ are hyper-parameters indicating the margin and radius respectively.
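Training with Eq. (6) amounts to a cross-entropy loss over margin-adjusted cosine logits; a minimal sketch, with the default s and m values taken from Appendix B:

import torch
import torch.nn.functional as F

def marginal_softmax_loss(z, W, y, s=7.5, m=0.2):
    # z: (B, d) unit features; W: (K, d) unit class weights; y: (B,) labels.
    cos = z @ W.t()                                    # w_k^T z
    one_hot = F.one_hot(y, num_classes=W.size(0)).float()
    logits = s * (cos - m * one_hot)                   # subtract the margin at k = y
    return F.cross_entropy(logits, y)                  # -log of Eq. (6) at k = y

z = F.normalize(torch.randn(4, 64), dim=1)
W = F.normalize(torch.randn(10, 64), dim=1)
y = torch.randint(0, 10, (4,))
loss = marginal_softmax_loss(z, W, y)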
Analysis of mitigating gradient explosion. We find that L2-normalization mitigates gradient explosion in the training of meta-learning for domain generalization. For simplicity, we analyze the gradient norm of the loss w.r.t. the parameters of $f^c$ in the meta-learning process of domain generalization, with $f^e$ as a fixed function. Without loss of generality, we consider the case that $K = 2$ (i.e., binary classification), $s = 1$ and $m = 0$. In this case, we have the following proposition.

Proposition 1. Under the above setting, if the input feature of $f^c$ is L2-normalized, the gradient norm of the loss w.r.t. the parameters of $f^c$ in the meta-learning process of DG is bounded.

Sketch of proof. Given feature $z$, the loss of binary classification is $L(w; z) = -y\log(\sigma(w^T z)) - (1-y)\log(1 - \sigma(w^T z))$, where $\sigma$ is the sigmoid function. Let $w' = w - \alpha\nabla_w L(w; z)$; then $\nabla_w L(w'; z) = (I - \alpha H)\nabla_{w'} L(w'; z)$, where $H$ is the Hessian matrix. The gradient norm satisfies $\|\nabla_w L(w'; z)\| \leq \|I - \alpha H\|\,\|\nabla_{w'} L(w'; z)\| \leq (1 + |\alpha|\,\|H\|)\,\|\nabla_{w'} L(w'; z)\|$. Since $\nabla_{w'} L(w'; z) = (p - y)z$ and $H = p(1-p)zz^T$ where $p = \sigma(w^T z)$, we have $\|H\| = \sup_{u:\|u\|=1}\|Hu\| \leq \sup_{u:\|u\|=1}\|zz^T u\| \leq \|z\|^2$ and $\|\nabla_{w'} L(w'; z)\| \leq \|z\|$. If $\|z\| = 1$, we have $\|\nabla_w L(w'; z)\| \leq 1 + |\alpha|$.

According to Proposition 1, L2-normalization can mitigate gradient explosion under the above setting. The analysis of the gradient norm of the loss w.r.t. the parameters of both $f^c$ and $f^e$ in the meta-learning process is much more complex and is left for future work.
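The bound in the proof sketch is easy to check numerically; a minimal verification of Proposition 1 under the stated setting ($K = 2$, $s = 1$, $m = 0$, fixed unit feature $z$) is sketched below.

import torch

torch.manual_seed(0)
alpha = 0.1
z = torch.nn.functional.normalize(torch.randn(5), dim=0)   # ||z|| = 1
y = torch.tensor(1.0)
w = torch.randn(5, requires_grad=True)

def bce(w_):
    p = torch.sigmoid(w_ @ z)
    return -(y * torch.log(p) + (1 - y) * torch.log(1 - p))

g = torch.autograd.grad(bce(w), w, create_graph=True)[0]
w_prime = w - alpha * g                                    # inner update
meta_grad = torch.autograd.grad(bce(w_prime), w)[0]        # gradient w.r.t. w
assert meta_grad.norm().item() <= 1 + abs(alpha)           # bound of Proposition 1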
4 THEORETICAL ANALYSIS
This section presents a theoretical understanding of our method. We first derive a generalization error bound on the target domain in Theorem 1 for the general setting of meta-learning for DG. Then, based on Theorem 1, we theoretically explain the reason for the success of our method.
Without loss of generality, we consider the binary classification problem. We denote $\mathcal{H}$ as the set of all possible $f$, i.e., $\mathcal{H} = \{f_w : w \in \mathbb{R}^M\}$, where $M$ is the number of parameters. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, we let $\mathcal{H}_{S_t} = \{f_{\theta(w)} : \theta(w) = \arg\min_\theta L(\theta; S_t, w), w \in \mathbb{R}^M\}$. The meta-learning approach for DG finds a function in $\mathcal{H}_{S_t}$ that minimizes the classification loss on $S_v$. Note that, although the training samples in $S$ may be sampled from several distributions, they can still be seen as i.i.d. samples from a mixture of these distributions. We denote $P = \sum_{d=1}^{D}\beta_d P_d$ as this mixture distribution, with $\beta_d$ the sampling ratio of the $d$-th source domain; $\epsilon^{\Psi}_Q(f) = \mathbb{E}_{(x,y)\sim Q}[I_{\{\Psi(f(x)) \neq y\}}]$ as the generalization error on the distribution $Q$ of the unseen target domain; $\hat{\epsilon}^{\Psi}_{S_v}(f) = \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{\Psi(f(x)) \neq y\}}$ as the empirical error on $S_v$; $VC(\mathcal{H})$ as the VC-dimension of $\mathcal{H}$; and $\Psi(\cdot)$ as the prediction rule, such as the Bayes Optimal Predictor. Based on the domain adaptation theory (Ben-David et al., 2007; 2010) and inspired by the analysis in (Saito et al., 2019), we have the following theorem.

Theorem 1. Let $\gamma$ be a constant and assume $\mathbb{E}_Q[I_{\{l(f(x),y) > \gamma\}}] \geq \mathbb{E}_P[I_{\{l(f(x),y) > \gamma\}}]$. Then, given any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, for any $\delta \in (0, 1)$, with probability at least $1 - 2\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon^{\Psi_l}_Q(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \frac{4}{\delta}\right)} + C_3, \tag{7}$$

where

$$B(S_v) = C_1 - \inf_{f' \in \mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f'(x),y) > \gamma\}}, \tag{8}$$

$C_1 = \sup_{S'_v \in \Gamma_\xi}\sup_{f' \in \mathcal{H}_{S-S'_v}} \mathbb{E}_Q[I_{\{l(f'(x),y)>\gamma\}}]$, $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(\mathcal{H}^{\Psi_l}_{S-S'_v})\log\frac{2e\xi}{VC(\mathcal{H}^{\Psi_l}_{S-S'_v})}$, $C_3 \geq \sup_{S'_v \in \Gamma_\xi}\inf_{f' \in \mathcal{H}_{S-S'_v}}\{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$, $\mathcal{H}^{\Psi_l}_{S-S'_v} = \{\Psi_l \circ f : f \in \mathcal{H}_{S-S'_v}\}$, and $\Psi_l$ is a loss-related indicator defined by

$$\Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x), y) > \gamma, \\ 0 & \text{otherwise}. \end{cases} \tag{9}$$

Table 1: Results of the MSDS experiment on PACS based on ResNet18 and ResNet50 (accuracy, %).

Backbone  Target  D-SAM  JiGen  MASF  MMLD  MetaReg  RSC   Baseline   DFAS (ours)
ResNet18  A       77.3   79.4   80.3  81.3  83.7     83.4  80.2±0.4   84.2±0.1
ResNet18  C       72.4   75.3   77.2  77.2  77.2     80.3  75.5±0.5   79.5±0.3
ResNet18  P       95.3   96.0   95.0  96.1  95.5     96.0  95.9±0.1   95.8±0.1
ResNet18  S       77.8   71.4   71.7  72.3  70.3     80.9  70.1±0.9   82.1±0.4
ResNet18  Avg     80.7   80.5   81.0  81.8  81.7     85.1  80.4       85.4
ResNet50  A       -      -      82.9  -     87.2     87.9  86.1±0.2   89.1±0.1
ResNet50  C       -      -      80.5  -     79.2     82.2  79.2±0.4   84.6±0.2
ResNet50  P       -      -      95.0  -     97.6     97.9  97.6±0.1   96.8±0.2
ResNet50  S       -      -      72.3  -     70.3     83.5  70.3±0.7   85.6±0.3
ResNet50  Avg     -      -      82.7  -     83.6     87.8  83.3       89.0
The proof is given in Appendix D. The assumption $\mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}] \geq \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}]$ in Theorem 1 is realistic because the data of $Q$ is not accessed at training time, so the learner trained on data of $P$ should have a smaller classification loss on $P$ than on $Q$. In Theorem 1, $C_1$, $C_2$ and $C_3$ are constants w.r.t. $f$. In Eq. (7), the generalization error $\epsilon^{\Psi_l}_Q(f)$ on $Q$ is bounded by the empirical error $\hat{\epsilon}^{\Psi_l}_{S_v}(f)$ on $S_v$, the term $B(S_v)$ that measures the discrepancy between $P$ and $Q$, and the last two constant terms in Eq. (7).
To obtain a lower $\epsilon^{\Psi_l}_Q(f)$, we need to minimize $\hat{\epsilon}^{\Psi_l}_{S_v}(f)$ and $B(S_v)$. Minimizing $B(S_v)$ w.r.t. $S_v$ is equivalent to

$$\max_{S_v \in \Gamma_\xi} \inf_{f \in \mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f(x),y) > \gamma\}}. \tag{10}$$
Intuitively, Eq. (10) seeks an $S_v \in \Gamma_\xi$ that maximizes the infimum ratio of examples in $S_v$ whose loss is greater than $\gamma$. This min-max problem for computing the error bound bears a similar idea to our min-max problem. Our adversarial splitting model in Eq. (2) can implicitly realize the goal of Eq. (10) while ensuring a lower $\hat{\epsilon}^{\Psi_l}_{S_v}(f)$ for any $S_v$. The maximization in Eq. (10) corresponds to our adversarial splitting that finds the hardest val-subset $S_v$ for the learner in Eq. (2). The infimum in Eq. (10) corresponds to the minimization of the loss in Eq. (2) on $S_v$ w.r.t. the learner parameterized by $\theta(w)$. Instead of the indicator function $I$ in Eq. (10), our model of Eq. (2) uses a differentiable classification loss for easier optimization.
5 EXPERIMENTS
We verify the effectiveness of our method in three types of experimental settings: Multi Source with Domain Shift (MSDS), in which the training data come from several source domains and there is domain shift between training and test data; Single Source with Domain Shift (SSDS), in which the training data come from a single source domain and there is domain shift between training and test data; and Same Source and Target Domain (SSTD), in which the training and test data come from the same single domain. The source code will be released online.
We conduct experiments on three benchmark datasets. PACS (Li et al., 2017) contains four domains, art painting (A), cartoon (C), photo (P) and sketch (S), sharing seven classes. Office-Home (Volpi et al., 2018), a dataset widely used in domain adaptation and recently in domain generalization, consists of four domains, Art (Ar), Clipart (Cl), Product (Pr) and Real World (Rw), sharing 65 classes. Both datasets are used in the MSDS and SSDS settings. CIFAR-10 (Krizhevsky et al., 2009) is used in the SSTD setting.
5.1 TYPE I: MULTI SOURCE WITH DOMAIN SHIFT (MSDS)
In the MSDS setting, following (Carlucci et al., 2019), we use leave-one-domain-out cross-validation, i.e., training on three domains and testing on the remaining unseen domain, on PACS and Office-Home. Note that the domain labels are not used during training. We adopt ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the resulting network is taken as the feature extractor $f^e$. Full implementation details are reported in Appendix B.
We compare our method with several state-of-the-art methods, including D-SAM (D'Innocente & Caputo, 2018), JiGen (Carlucci et al., 2019), MASF (Dou et al., 2019), MMLD (Matsuura & Harada, 2020), MetaReg (Balaji et al., 2018) and RSC (Huang et al., 2020). The results on PACS and Office-Home are reported in Table 1 and Table 2 respectively. Our DFAS achieves state-of-the-art results with both ResNet18 and ResNet50 on both PACS (85.4%, 89.0%) and Office-Home (64.3%, 70.3%), outperforming RSC by 0.3% and 1.2% on PACS with ResNet18 and ResNet50 respectively, and by 1.2% on Office-Home with ResNet18 (these methods do not report Office-Home results with ResNet50). Compared with the Baseline that directly aggregates the three source domains and trains the learner with a standard fully-connected layer as the classifier $f^c$, DFAS improves performance by 5.0% and 5.7% on PACS with ResNet18 and ResNet50 respectively, and by 2.1% and 2.1% on Office-Home with ResNet18 and ResNet50 respectively. In Table 1, on PACS, DFAS significantly outperforms the Baseline in almost all tasks except when P is the target domain. Notably, when domain S, whose style is extremely different from the other three domains, is the target domain, DFAS boosts the accuracy of the Baseline by 12.0% and 15.3% with ResNet18 and ResNet50 respectively. This indicates that our method generalizes well when the domain shift is large. Office-Home is challenging for DG since it has more classes than the other datasets. As shown in Table 2, DFAS outperforms the Baseline stably in almost all tasks on Office-Home. These performance improvements demonstrate the effectiveness of our method when the training data come from multiple source domains and the unseen target domain differs from the source domains.
5.2 TYPE II: SINGLE SOURCE WITH DOMAIN SHIFT (SSDS)
We conduct this type of experiment on PACS based on ResNet18. We train the learner on one domain and test on each of the remaining three domains, resulting in 12 tasks in total. Implementation details are given in Appendix B. Our method is compared with related methods, including the Baseline that directly trains the learner with a standard fully-connected layer as the classifier $f^c$ on the source domain, JiGen (Carlucci et al., 2019) and SagNet (Nam et al., 2019). Results are reported in Table 3. DFAS outperforms the Baseline and SagNet by 4.2% and 4.7% respectively, and outperforms the Baseline in 10 of the 12 tasks. The performance boosts are large in the tasks where domain S is the source domain. These improvements demonstrate the effectiveness of our method when the training data come from a single source domain and the unseen target domain differs from the source domain.
5.3 TYPE III: SAME SOURCE AND TARGET DOMAIN (SSTD)
We also apply our DG method to the common recognition task in which the training and test data come from the same domain, i.e., SSTD, on the CIFAR-10 dataset. To investigate the effect of training-set size, we sample different numbers of training data from the provided training set (i.e., the source domain). Implementation details are in Appendix B. As shown in Table 4, DFAS outperforms the Baseline, JiGen and MMLD by 2.4%, 3.6% and 4.3% respectively on average. The results of JiGen and MMLD are obtained by running their code on CIFAR-10. DFAS outperforms the Baseline and the compared methods for all training-set sizes. In general, the performance boost is larger when the number of training data is smaller, possibly because the learner is more prone to overfitting with less training data and DFAS is designed to extract more generalizable features.
5.4 ABLATION STUDY
To further verify the effectiveness of each component of our method, we conduct additional ablation experiments on the PACS dataset based on ResNet18 in both the MSDS and SSDS settings. The results are reported in Table 5 and Table 6.
In Tables 5 and 6, L2-norm denotes the feature L2-normalization defined in Sect. 3.3. Rand-split denotes the random splitting strategy in which we randomly split the train/val subsets at each step of updating the learner parameters. Adv-split denotes the adversarial splitting model in which we update the worst-case splitting by solving the maximization problem in Eq. (5) once per epoch during training.
Effectiveness of L2-normalization. In Table 5, L2-norm (82.7%) outperforms Baseline (80.4%) by 2.3%, and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms Adv-split (83.6%) by 1.8% in the MSDS setting. In Table 6, L2-norm (63.1%) outperforms Baseline (62.4%) by 0.7%, and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms Adv-split (65.1%) by 1.5% in the SSDS setting. These performance improvements demonstrate that feature L2-normalization is useful for domain generalization.
Effectiveness of adversarial splitting model. In Table 5, Adv-split (83.6%) outperforms Baseline (80.4%) by 3.2%, and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms L2-norm (82.7%) by 2.7% in the MSDS setting. In Table 6, Adv-split (65.1%) outperforms Baseline (62.4%) by 2.7%, and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms L2-norm (63.1%) by 3.5% in the SSDS setting. These results indicate that our proposed adversarial splitting model is effective.
Effectiveness of adversarial splitting over random splitting. In Table 5, L2-norm + Adv-split (85.4%) outperforms L2-norm + Rand-split (84.3%) by 1.1%, and Adv-split (83.6%) outperforms Rand-split (82.8%) by 0.8% in the MSDS setting. In Table 6, L2-norm + Adv-split (66.6%) outperforms L2-norm + Rand-split (65.4%) by 1.2%, and Adv-split (65.1%) outperforms Rand-split (64.1%) by 1.0% in the SSDS setting. These results demonstrate that adversarial splitting outperforms the random splitting strategy in different experimental settings.
Due to space limits, we present more ablation experiments in Appendix A.1 to further compare different splittings, including adversarial splitting, domain-label-based splitting and random splitting.
5.5 MITIGATING GRADIENT EXPLOSION BY L2-NORMALIZATION.
To show that L2-normalization can mitigate gradient explosion, we run the same experiment independently 50 times, with and without L2-normalization, and count the number of occurrences of gradient explosion, reported in Table 7. Table 7 shows that L2-normalization mitigates gradient explosion.
6 CONCLUSION
In this paper, we unify adversarial training and meta-learning in the novel proposed Domain-Free Adversarial Splitting (DFAS) framework to tackle the general domain generalization problem. Extensive experiments show the effectiveness of the proposed method. We are interested in deeper theoretical understanding and further applications of our method in future work.
A ANALYSIS
A.1 COMPARISON OF ADVERSARIAL SPLITTING AND DOMAIN-LABEL-BASED SPLITTING
In this section, we compare our adversarial splitting with the domain-label-based splitting that is commonly used in meta-learning-based DG methods. Due to variations of style, pose, sub-class, etc., the internal inconsistency within a dataset is complicated. Domain labels partially capture this inconsistency but cannot cover all of it. Our adversarial splitting method does not rely on domain labels. It iteratively finds the train/val splitting that is hardest for the learner, to maximize the inconsistency, and trains the learner to generalize well under this hardest splitting, in an adversarial training manner. This strategy investigates the possible inconsistency within the training dataset more flexibly and adaptively to the learner, and can potentially enhance the generalization ability of the learner.
We first show empirically in Table 8 that the domain-label-based splitting (denoted Label-split) is not as hard for the learner as our adversarial splitting (Adv-split). In Table 8, we report the values of the objective function in Eq. (5) for Adv-split and Label-split, fixing the learner with the network parameters $w_i$ obtained at different epochs (1st, 2nd, 5th and 10th) of the training process. A larger value indicates that the splitting is harder for the learner (i.e., the network). It can be observed that Label-split is not as hard for the learner as Adv-split.
We also conduct experiments on PACS in the MSDS setting to fairly compare different splittings, including adversarial splitting (Adv-split), domain-label-based splitting (Label-split) and random splitting (Rand-split). The results are reported in Table 9, which shows that adversarial splitting outperforms random splitting and domain-label-based splitting when the training data come from multiple domains.
When the training data are from only a single domain, our adversarial splitting also performs well (as in Table 3). However, domain-label-based splitting cannot be used in this setting, since there is no domain label available.
A.2 EFFECT OF HYPER-PARAMETERS
Effect of hyper-parameter ξ. In Fig. 1(a), we show the performance of our method when varying the hyper-parameter $\xi$, i.e., the size of the val-subset $S_v$ in the adversarial splitting of the training dataset. The best result is obtained at $\xi = \frac{|S|}{2}$, and the results are similar when $\frac{\xi}{|S|}$ ranges from 0.3 to 0.7.
Effect of hyper-parameter α. We evaluate the effect of $\alpha$ in the MSDS setting on PACS in Fig. 1(b). The accuracy is stable for $\alpha$ in the large range of 1e-6 to 1e-4. A small $\alpha$ results in a small step size for parameter updating in the meta-learning framework and limits the benefits of meta-learning and adversarial splitting. A larger $\alpha$ results in a larger step size for the gradient-based network update, which may fail to decrease the training loss from the optimization perspective.
Effect of hyper-parameter m. The effect of $m$ is evaluated in the MSDS setting on the PACS dataset in Fig. 1(c), which shows that the result is not sensitive to the value of $m$.
A.3 CONVERGENCE
We examine the convergence of DFAS with the errors and losses of different tasks in Fig. 2. Figs. 2(a) and 2(b) show the classification error curves on the target domains A and Ar respectively in the MSDS setting. Fig. 2(c) shows the training loss of DFAS for task A in the MSDS setting. These training curves indicate that DFAS converges during training. We also observe in Figs. 2(a) and 2(b) that DFAS is more stable than the Baseline.
A.4 COMPUTATIONAL COST OF ADVERSARIAL SPLITTING AND RANDOM SPLITTING
We compare the computational cost of adversarial splitting and random splitting in this section. Since we only update the worst-case splitting once per epoch, instead of at each parameter-update step, the computational cost is only slightly higher than that of random splitting. To show this, we compare the total training times of adversarial splitting and random splitting for the same number of steps (20000) in Table 10. The training time of Adv-split is only 5.6% (0.33/5.90) higher than that of Rand-split.
A.5 VISUALIZATION OF FEATURE SPACE
We visualize the feature spaces learned by our DFAS and the Baseline in Fig. 3, using t-SNE (Maaten & Hinton, 2008). DFAS yields better separation of classes and better alignment between the distributions of the source and unseen target domains, which possibly explains the accuracy improvements achieved by DFAS.
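Such a visualization can be produced with a standard t-SNE call on the extracted features; a minimal sketch, in which the feature matrices are random placeholders standing in for $f^e$ outputs:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Placeholder features standing in for f^e outputs on source and target data.
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(0, 1, (200, 64)), rng.normal(1, 1, (200, 64))])
domain = np.array([0] * 200 + [1] * 200)        # 0: source, 1: unseen target

emb = TSNE(n_components=2, init='pca').fit_transform(feats)
for d, name in [(0, 'source'), (1, 'target')]:
    plt.scatter(emb[domain == d, 0], emb[domain == d, 1], s=5, label=name)
plt.legend()
plt.savefig('tsne.pdf')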
B IMPLEMENTATION DETAILS
For the MSDS setting, we use ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the resulting network is taken as the feature extractor $f^e$. The dimension of the bottleneck layer is set to 512 when the backbone is ResNet18, as in (Saito et al., 2019), and 256 when the backbone is ResNet50, as in (Gu et al., 2020). Following (Gu et al., 2020), $s$ is set to 7.5 for PACS and 10.0 for Office-Home; $m$ is set to 0.2 for PACS and 0.1 for Office-Home; $\xi$ is set to $\frac{|S|}{2}$. SGD with momentum 0.9 is used to update the learner parameters. The learning rate of the classifier and bottleneck layer is 10 times that of the convolutional layers, as widely adopted in domain adaptation (Long et al., 2015; Ganin et al., 2016). Following (Ganin et al., 2016), the learning rate of the convolutional layers is adjusted by $\eta = \frac{0.001}{(1 + 10p)^{0.75}}$, where $p$ is the training progress linearly changing from 0 to 1. The learning rate $\alpha$ of the inner-loop optimization is set to $10^{-5}$. The parameters are updated for 20000 steps and the hardest val-subset is updated every 200 steps. The batch size is set to 64. The running mean and running variance of the Batch Normalization (BN) layers are fixed to their ImageNet pre-trained values during training, as discussed in (Du et al., 2020a). Due to memory limits, for experiments based on ResNet50 we adopt the first-order approximation (Finn et al., 2017) that stops the gradient of $g^t_w$ in Eq. (4) to reduce memory and computational cost.
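The learning-rate schedule above can be implemented as a simple function of training progress; a sketch (the helper name is ours):

def lr_schedule(step, total_steps=20000, base_lr=0.001):
    # eta = 0.001 / (1 + 10p)^0.75, with p the training progress in [0, 1];
    # the classifier and bottleneck layers use 10x the convolutional-layer rate.
    p = step / total_steps
    lr_conv = base_lr / (1 + 10 * p) ** 0.75
    return lr_conv, 10 * lr_conv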
For the SSDS setting, we conduct experiments based on ResNet18 on PACS, with the same implementation details as MSDS. For the SSTD setting, we conduct experiments based on ResNet18 on CIFAR-10. The hyper-parameters $s$ and $m$ are set to 8.0 and 0.2 respectively. Other implementation details are the same as MSDS, except that the running mean and running variance of the BN layers are updated.
We implement experiments using Pytorch (Paszke et al., 2019) on a single NVIDIA Tesla P100 GPU.
C OPTIMIZATION ALGORITHM FOR FINDING THE HARDEST Sv
C.1 OPTIMIZATION ALGORITHM
To solve the problem

$$\max_{S_v, A} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), A\rangle \quad \text{s.t.} \quad A = g^t_w, \; S_v \in \Gamma_\xi, \tag{11}$$
we design an alternating iteration algorithm in Sect. 3.2. Specifically, we initialize $A$ with the gradient of a sample randomly selected from $S$. Then we alternately update $S_v$ and $A$.
Given $A$, $S_v$ is updated by solving

$$\max_{S_v} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), A\rangle \quad \text{s.t.} \quad S_v \subset S, \; |S_v| = \xi, \tag{12}$$
where the constraints follow from the definition of $\Gamma_\xi$. Eq. (12) indicates that the optimal $S_v$ consists of the $\xi$ samples with the largest values of $l(f_w(x), y) - \alpha\langle\nabla_w l(f_w(x), y), A\rangle$. Thus we compute and rank these values for all $(x, y) \in S$ and select the largest $\xi$ samples to constitute $S_v$.
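In code, this ranking-and-selection step is a single top-k operation; a sketch with placeholder scores:

import torch

scores = torch.randn(100)     # l(f_w(x), y) - alpha * <grad, A>, one per sample
xi = 50
val_idx = torch.topk(scores, k=xi).indices        # indices of S_v
mask = torch.ones(100, dtype=torch.bool)
mask[val_idx] = False
train_idx = mask.nonzero(as_tuple=True)[0]        # indices of S_t = S - S_v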
Given $S_v$ ($S_t$ is then given), we update $A$ to satisfy the constraint $A = g^t_w$ in Eq. (11), i.e.,

$$A = g^t_w = \frac{1}{|S_t|}\sum_{(x,y)\in S_t} \nabla_w l(f_w(x), y). \tag{13}$$
C.2 CONVERGENCE IN EXPERIMENTS
We show empirically the convergence of this alternating iteration algorithm in Fig. 4, using the values of the objective function in Eq. (11). Figure 4 shows that the objective value converges after only a few iterations. We also check whether the splitting still changes once the objective value has converged: we count the ratio of changed sample indexes in $S_v$ at each iteration, as in Table 11. Table 11 shows that the splitting no longer changes once the objective value converges.
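The reported ratio can be computed directly from the val-subsets of successive iterations; a small helper (the name is ours) is sketched below:

def changed_ratio(prev_val_idx, val_idx):
    # Fraction of sample indexes in S_v that changed between two iterations.
    prev, cur = set(prev_val_idx), set(val_idx)
    return len(cur - prev) / len(cur)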
C.3 TOY EXAMPLE.
We present a toy example in this section to check whether this algorithm can find the optimal solution. The toy example is a 2-dimensional classification problem, as shown in Fig. 5. Different colors indicate different classes. A fully-connected layer without bias is used as the network (learner). We split the data of the first class (blue points) with our algorithm. The candidate solutions with their corresponding objective function values are given in Table 12.
The solutions in the iteration process of our algorithm are reported in Table 13. The solutions converge to (1,3,4), which is the optimal solution in Table 12. This indicates that our algorithm can find the optimal splitting for the toy example. We also give the code of this toy example below; the reader may rerun it to verify the results.
Code of the toy example:
import torch
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# Generate and plot the 2-dimensional toy data (two classes).
np.random.seed(1)
data = make_blobs(n_samples=12, centers=2)
x1 = data[0][data[1] == 0]
x2 = data[0][data[1] == 1]

plt.plot(x1[:, 0], x1[:, 1], '*')
plt.plot(x2[:, 0], x2[:, 1], '*')
t = 0
for x in x1:
    plt.text(x[0], x[1], str(t))
    t += 1
plt.savefig('samples.pdf')

# Learner: a fully-connected layer without bias.
model = torch.nn.Linear(2, 2, bias=False).cuda()
torch.nn.init.xavier_uniform_(model.weight)
log = open('log.txt', 'w')
log.write('weight:' + str(model.weight.cpu().data.numpy()) + '\n')

feat = torch.Tensor(data[0]).cuda()
label = torch.LongTensor(data[1]).cuda()
loss_fun = torch.nn.CrossEntropyLoss()

# Per-sample losses and gradients w.r.t. the learner weights.
Grads = []
Loss = []
for i in range(len(feat)):
    out = torch.nn.functional.linear(feat[i].view(1, -1), weight=model.weight, bias=None)
    loss_x = loss_fun(out, label[i].view(-1,))
    grad = torch.autograd.grad(loss_x, model.weight)[0]
    Loss.append(loss_x.cpu().data.numpy())
    Grads.append(grad.cpu().data.numpy())

# We split the data of the first class.
Loss = np.array(Loss)[data[1] == 0]
Grads = np.array(Grads)[data[1] == 0]
alpha = 0.001

def adv_loss(val_index=[0, 1, 2]):
    # Objective value of Eq. (11) for a given val-subset.
    train_index = np.delete(np.arange(len(Loss)), val_index)
    loss = np.mean(Loss[val_index])
    grad_val = np.mean(Grads[val_index], axis=0)
    grad_train = np.mean(Grads[train_index], axis=0)
    return loss - alpha * np.sum(grad_val * grad_train)

# Brute-force search over all val-subsets of size 3.
Solutions = []
Values = []
for i in range(len(Loss) - 2):
    for j in range(i + 1, len(Loss) - 1):
        for k in range(j + 1, len(Loss)):
            Solutions.append([i, j, k])
for val_idex in Solutions:
    Values.append(adv_loss(val_idex))

optimal_idx = np.array(Values).argmax()
optimal_solution = Solutions[optimal_idx]
optimal_value = max(Values)

print("all possible solutions:", Solutions)
print("objective function values of possible solutions:", Values)
print("optimal solution:", optimal_solution,
      "\nobjective function value of optimal solution:", optimal_value)
log.write(str({"all possible solutions": Solutions,
               "objective function values of possible solutions": Values,
               "optimal solution": optimal_solution,
               "objective function value of optimal solution": optimal_value}))

# Our alternating algorithm (Appendix C.1).
A = Grads[np.random.randint(len(Loss))]
values_tmp = []
solutions_tmp = []
mtr_index = np.random.choice(np.arange(len(Loss)), size=len(Loss) // 2, replace=False)
l = np.mean(Loss - alpha * np.sum(Grads * A.reshape((1, 2, 2)), axis=(1, 2)))
values_tmp.append(l)
solutions_tmp.append(np.delete(np.arange(len(Loss)), mtr_index))
for i in range(5):
    D = np.sum(Grads * A, axis=(1, 2))
    Loss_ = Loss - alpha * D
    idx_sort = np.argsort(Loss_)
    mtr_index = idx_sort[:len(Loss) // 2]
    mte_index = idx_sort[len(Loss) // 2:]
    values_tmp.append(Loss_[mte_index].mean())
    solutions_tmp.append(mte_index)
    A = np.mean(Grads[mtr_index], axis=0)

print("our optimal solution:", solutions_tmp[-1],
      "\nobjective function value of our optimal solution:", values_tmp[-1])
log.write(str({"our optimal solution": solutions_tmp[-1],
               "objective function value of our optimal solution": values_tmp[-1]}))
print(solutions_tmp, values_tmp)
D PROOF OF THEOREM 1
We first introduce the VC-dimension-based generalization bound and domain adaptation theory in Appendix D.1, then present two lemmas in Appendix D.2, and finally give the proof of Theorem 1 in Appendix D.3.
D.1 PRELIMINARY
VC-dimension based generalization bound.

Theorem A-1 (Abu-Mostafa et al., 2012). Let $S$ be a set of training data i.i.d. sampled from distribution $P$. For any $\delta \in (0, 1)$, with probability at least $1 - \delta$, we have, for all $h$ ($h : \mathcal{X} \to \{0, 1\}$) in hypothesis space $\mathcal{H}$,

$$|\epsilon_P(h) - \hat{\epsilon}_S(h)| \leq \sqrt{\frac{8}{|S|}\left(VC(\mathcal{H})\log\frac{2e|S|}{VC(\mathcal{H})} + \frac{4}{\delta}\right)}, \tag{14}$$

where $\epsilon_P(h) = \mathbb{E}_{(x,y)\sim P}[I_{\{h(x) \neq y\}}]$ and $\hat{\epsilon}_S(h) = \frac{1}{|S|}\sum_{(x,y)\in S} I_{\{h(x) \neq y\}}$.
Domain adaptation theory.
Theorem A-2 (Ben-David et al., 2007; 2010). For any $h$ in hypothesis space $\mathcal{H}$, we have

$$\epsilon_Q(h) \leq \epsilon_P(h) + \frac{1}{2} d_{\mathcal{H}}(P, Q) + \lambda^*, \tag{15}$$

where $\lambda^* \geq \inf_{h' \in \mathcal{H}}\{\epsilon_P(h') + \epsilon_Q(h')\}$ and

$$d_{\mathcal{H}}(P, Q) = 2\sup_{h \in \mathcal{H}} |\mathbb{E}_P[h = 1] - \mathbb{E}_Q[h = 1]| \tag{16}$$

is the $\mathcal{H}$-divergence.
D.2 LEMMAS
Lemma A-1. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, $\forall \delta \in (0, 1)$, with probability at least $1 - \delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$|\epsilon^{\Psi}_P(f) - \hat{\epsilon}^{\Psi}_{S_v}(f)| \leq \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \frac{4}{\delta}\right)}, \tag{17}$$

where $\epsilon^{\Psi}_P(f) = \mathbb{E}_{(x,y)\sim P}[I_{\{\Psi(f(x)) \neq y\}}]$ is the generalization error on distribution $P$, $\hat{\epsilon}^{\Psi}_{S_v}(f) = \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{\Psi(f(x)) \neq y\}}$ is the empirical error, $\mathcal{H}^{\Psi}_{S_t} = \{\Psi \circ f : f \in \mathcal{H}_{S_t}\}$, $VC(\mathcal{H}^{\Psi}_{S_t})$ is the VC-dimension of $\mathcal{H}^{\Psi}_{S_t}$, and $\Psi(\cdot)$ is the prediction rule such as the Bayes Optimal Predictor, i.e., $\Psi(f(x)) = I_{\{f(x) \geq \frac{1}{2}\}}$.

Proof:

From the definition of $\mathcal{H}^{\Psi}_{S_t}$, for any $f \in \mathcal{H}_{S_t}$ there exists an $h_f \in \mathcal{H}^{\Psi}_{S_t}$ such that $h_f = \Psi \circ f$. Applying Theorem A-1, with probability at least $1 - \delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$|\epsilon^{\Psi}_P(f) - \hat{\epsilon}^{\Psi}_{S_v}(f)| = |\epsilon_P(h_f) - \hat{\epsilon}_{S_v}(h_f)| \leq \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \frac{4}{\delta}\right)}. \tag{18}$$
Lemma A-2. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, let $\epsilon^{\Psi}_P(g) = \inf_{f\in\mathcal{H}_{S_t}} \epsilon^{\Psi}_P(f)$ and $\hat{\epsilon}^{\Psi}_{S_v}(h) = \inf_{f\in\mathcal{H}_{S_t}} \hat{\epsilon}^{\Psi}_{S_v}(f)$. Then $\forall \delta \in (0, 1)$, with probability at least $1 - \delta$, we have

$$\epsilon^{\Psi}_P(g) \geq \hat{\epsilon}^{\Psi}_{S_v}(h) - \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \frac{4}{\delta}\right)}. \tag{19}$$

Proof:

From the definitions of $g$ and $h$, we have $\hat{\epsilon}^{\Psi}_{S_v}(g) \geq \hat{\epsilon}^{\Psi}_{S_v}(h)$. $\forall \delta \in (0, 1)$, with probability at least $1 - \delta$, we have

$$\epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(h) = \epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(g) + \hat{\epsilon}^{\Psi}_{S_v}(g) - \hat{\epsilon}^{\Psi}_{S_v}(h) \geq \epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(g) \geq -\sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \frac{4}{\delta}\right)}. \tag{20}$$

Thus, Eq. (19) holds.
D.3 PROOF OF THEOREM 1
Proof:
We denote by $\mathcal{H}^{\Psi_l}$ the hypothesis space such that $\forall h \in \mathcal{H}^{\Psi_l}$,

$$h(x) = \Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x), y) > \gamma, \\ 0 & \text{otherwise}, \end{cases} \tag{21}$$

for $f \in \mathcal{H}$. Then
$$\begin{aligned} d_{\mathcal{H}^{\Psi_l}}(P, Q) &= 2\sup_{h \in \mathcal{H}^{\Psi_l}} \big|\mathbb{E}_P[h = 1] - \mathbb{E}_Q[h = 1]\big| \\ &= 2\sup_{f \in \mathcal{H}} \big|\mathbb{E}_P[\Psi_l(f(x)) = 1] - \mathbb{E}_Q[\Psi_l(f(x)) = 1]\big| \\ &= 2\sup_{f \in \mathcal{H}} \big|\mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}] - \mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}]\big| \\ &= 2\sup_{f \in \mathcal{H}} \big\{\mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}] - \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}]\big\} \\ &\leq 2\sup_{f \in \mathcal{H}} \mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}] - 2\inf_{f \in \mathcal{H}} \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}]. \end{aligned} \tag{22}$$

In the fourth equality, we use the assumption that $\mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}] \geq \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}]$. Given any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, we replace $\mathcal{H}$ by $\mathcal{H}_{S_t}$; then

$$d_{\mathcal{H}^{\Psi_l}_{S_t}}(P, Q) \leq 2\sup_{f \in \mathcal{H}_{S_t}} \mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}] - 2\inf_{f \in \mathcal{H}_{S_t}} \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}] \leq 2C_1 - 2\inf_{f \in \mathcal{H}_{S_t}} \mathbb{E}_P[I_{\{l(f(x),y)>\gamma\}}], \tag{23}$$

where $C_1 = \sup_{S'_v \in \Gamma_\xi}\sup_{f \in \mathcal{H}_{S-S'_v}} \mathbb{E}_Q[I_{\{l(f(x),y)>\gamma\}}]$. Applying Theorem A-2, for any $f \in \mathcal{H}_{S_t}$, we have

$$\epsilon^{\Psi_l}_Q(f) \leq \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f' \in \mathcal{H}_{S_t}} \mathbb{E}_P[I_{\{l(f'(x),y)>\gamma\}}] + \lambda^*(S_v), \tag{24}$$

where $\lambda^*(S_v) \geq \inf_{f' \in \mathcal{H}_{S-S_v}}\{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$. Let $C_3 = \sup_{S'_v \in \Gamma_\xi} \lambda^*(S'_v) \geq \sup_{S'_v \in \Gamma_\xi}\inf_{f' \in \mathcal{H}_{S-S'_v}}\{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$; we then have

$$\epsilon^{\Psi_l}_Q(f) \leq \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f' \in \mathcal{H}_{S_t}} \mathbb{E}_P[I_{\{l(f'(x),y)>\gamma\}}] + C_3. \tag{25}$$
Applying Lemma A-1 to the first term on the right side of Eq. (25), $\forall \delta \in (0, 1)$, with probability at least $1 - \delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon^{\Psi_l}_P(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \frac{4}{\delta}\right)}. \tag{26}$$
Applying Lemma A-2 to the third term on the right side of Eq. (25), $\forall \delta \in (0, 1)$, with probability at least $1 - \delta$, we have

$$\inf_{f' \in \mathcal{H}_{S_t}} \mathbb{E}_P[I_{\{l(f'(x),y)>\gamma\}}] \geq \inf_{f' \in \mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f'(x),y)>\gamma\}} - \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \frac{4}{\delta}\right)}. \tag{27}$$
Combining Eqs. (25), (26) and (27) and using the union bound, for any $\delta \in (0, 1)$, with probability at least $1 - 2\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon^{\Psi_l}_Q(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \frac{4}{\delta}\right)} + C_3, \tag{28}$$

where $B(S_v) = C_1 - \inf_{f' \in \mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f'(x),y)>\gamma\}}$. Using the fact that $|S_v| = \xi$ and letting $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(\mathcal{H}^{\Psi_l}_{S-S'_v})\log\frac{2e\xi}{VC(\mathcal{H}^{\Psi_l}_{S-S'_v})}$, we have
$$\epsilon^{\Psi_l}_Q(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{|S_v|}\left(C_2 + \frac{4}{\delta}\right)} + C_3. \tag{29}$$

1. What is the focus of the paper regarding domain generalization?
2. What are the strengths of the proposed approach, particularly in its application of meta-learning?
3. What are the weaknesses of the paper, especially concerning the maximization problem and ablation study?
4. Do you have any questions or concerns about the writing style and presentation of the paper?
5. How does the reviewer assess the novelty and significance of the proposed idea compared to prior works?

Review
Paper summary
This paper focuses on domain generalization, targeting the challenging scenario where the training set might not include different sources; even under the presence of different sources, the problem formulation does not takes into account domain labels. The proposed solution is based on meta-learning, following the path drawn by Li et al. AAAI 2018; the Authors propose to adversarially split the training set in meta-train and meta-validation sets, and then update the current model in a direction that fosters good generalization performance on the meta-test. Results on standard benchmarks are encouraging.
Pros
The proposed idea is rather interesting, enabling to apply meta-learning solutions also in absence of domain labels. In particular, I like the idea of finding meta-train and meta-test splits in an adversarial fashion. This is crucial, since randomly splitting the training set in meta-train and meta-validation would not be helpful, since it would lead to episodes where meta-train and meta-test are iid.
The Authors provide a theoretical interpretation of their approach (due to my background, I was not able to properly review it though)
Cons
My main concern is related to the way the maximization problem is tackled, in the objective in Eq. (5). I have reviewed Appendix C, but I cannot understand how convergence would require so few iterations. Even restricting the size of $S_v$ as mentioned in Section 3., the number of possible sample combinations that generate couples ($S_v$, $S_t$) is huge -- if I understood the process correctly, then with $|S_v| = K$ and $|S| = N$, the number of combinations is $\binom{N}{K}$ -- and for each of them the gradients that lead to the meta-update are different. Could the authors comment on this? Am I missing something? Related to this point, I am also concerned by the ablation study in Table 5 - where it is shown that, while helpful, the adversarial strategy does not help very significantly in the whole picture.
The overall writing could be improved. Sentences like "this model can be further transfored to a minimax problem" in the Abstract are not properly exposed, and there are several examples throughout the manuscript. There is a misconception related to prior work: Carlucci et al. 2019 (as well as Volpi et al. 2018) also tackles the case where the training data only comprises a single source domain. This should be clarified in the Introduction/Related work.
Review summary
I like the idea this paper starts from, and I like the proposed solution. I still do not properly understand how the maximization problem at the core of the method is approached, and I believe that the paper needs some exhaustive proof checking to improve the overall writing. I look forward reading the Author response and iterating the discussion.
---- Post-rebuttal comments----
I thank the Authors for their explanations. Yet, I still believe that this work is not ready for publication. Random splitting and adversarial splitting perform very comparably (1% is not a lot), in my opinion casting some doubts on how meaningful the solution found to the proposed optimization problem is. The Authors included a toy example in the Appendix, but this did not mitigate my concerns on the original manuscript's experiments. I still believe that the core idea is very interesting, and hope that it will be further investigated by the Authors for a subsequent submission.

ICLR

Title
Domain-Free Adversarial Splitting for Domain Generalization
Abstract
Domain generalization utilizes several source domains to train a learner that generalizes to an unseen target domain, tackling the domain shift issue; it has drawn much attention in the machine learning community. This paper aims to learn to generalize well to an unseen target domain without relying on knowledge of the number of source domains or of domain labels. We unify adversarial training and meta-learning in a novel Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model domain generalization as a learning problem that enforces the learner to generalize well for any train/val-subset splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem that can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize the domain shift between them under the current learner, and then updates the learner on this splitting so that it generalizes well from the train-subset to the val-subset, using a meta-learning approach. Extensive experiments on three benchmark datasets under three different settings of source and target domains show that our method achieves state-of-the-art results, and an ablation study confirms its effectiveness. We also derive a generalization error bound for a theoretical understanding of our method.
1 INTRODUCTION
The deep learning approach has achieved great success in image recognition (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). However, deep learning methods mostly succeed when the training and test data are sampled from the same distribution (i.e., the i.i.d. assumption), and this assumption is often violated in real-world applications since the equipment/environments that generate the data often differ between training and test datasets. When there is a distribution difference (domain shift (Torralba & Efros, 2011)) between the training and test datasets, the performance of the trained model, i.e., the learner, degrades significantly.
To tackle the domain shift issue, the domain adaptation approach (Pan & Yang, 2010; Daume III & Marcu, 2006; Huang et al., 2007) learns a learner transferable from a source domain to a target domain. Domain adaptation methods align the distributions of different domains either in feature space (Long et al., 2015; Ganin et al., 2016) or in raw pixel space (Hoffman et al., 2018), which relies on unlabeled data from the target domain at training time. In many applications, however, it is unrealistic to access unlabeled target data; this prevents the use of the domain adaptation approach and motivates research on domain generalization.
The domain generalization (DG) approach (Blanchard et al., 2011; Muandet et al., 2013) commonly uses several source domains to train a learner that can generalize to an unseen target domain. The underlying assumption is that there exists a latent domain-invariant feature space across the source domains and the unseen target domain. To learn domain-invariant features, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b) explicitly align the distributions of different source domains in feature space. (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019) split the source domains into meta-train and meta-test domains to simulate domain shift and train the learner in a meta-learning manner. (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Ryu et al., 2020) augment images or features to train the learner to enhance its generalization capability.
Conventional domain generalization methods assume that domain labels are available, but in a more realistic scenario the domain labels may be unknown (Wang et al., 2019). To handle this domain-free setting, Carlucci et al. (2019) combine supervised learning and self-supervised learning by solving jigsaw puzzles of the training images. Matsuura & Harada (2020) divide samples into several latent domains via clustering and train a domain-invariant feature extractor via adversarial training. Huang et al. (2020) discard the dominant activated features, forcing the learner to activate the remaining features that correlate with labels. Another line of work (Volpi et al., 2018; Qiao et al., 2020) tackles the single-source setting, in which the training set comprises a single domain and the training and test data are from different domains.
In this work, we focus on a general learning scenario of domain generalization: first, we do not know the domain label of each sample and do not assume that there are several domains in the training dataset; second, we do not assume that the training and test data are from different domains (e.g., styles). Note that previous domain-free DG methods (Matsuura & Harada, 2020) commonly evaluate on datasets (e.g., PACS) composed of several domains, even though they do not use domain labels in training.

Since in our domain-free setting we neither assume nor know the domains in the training dataset, we model domain generalization as a learning problem in which the learner should generalize well for any train/val-subset splitting, i.e., synthetic source/target domains, of the training dataset. This explicitly enforces that the trained learner be generalizable for any possible domain shift within the training dataset.

To achieve this goal, we propose an adversarial splitting model formulated as a min-max optimization problem, due to the difficulty of enumerating all splittings. In this min-max problem, we adversarially split the training dataset into train/val subsets by maximizing the domain shift between them based on the given learner, and then update the learner by minimizing the prediction error on the val-subset under this splitting, using a meta-learning approach. By optimizing this min-max problem, we enforce the learner to generalize well even under the worst-case splitting. We also investigate L2-normalization of features in our domain generalization method. Surprisingly, we find that L2-normalization can improve the performance of the learner and mitigate gradient explosion in the meta-learning process of DG, and we theoretically analyze the underlying reasons for this finding. The proposed domain generalization approach is dubbed Domain-Free Adversarial Splitting, i.e., DFAS.
To verify the effectiveness of our method, we conduct extensive experiments on the benchmark datasets PACS, Office-Home and CIFAR-10 under different settings with multiple/single source domains. In experiments where the training data come from several source domains, our method achieves state-of-the-art results on both PACS and Office-Home. We also find that our method significantly outperforms baselines in experiments where the training data come from a single source domain, on PACS and CIFAR-10. We further confirm the effectiveness of our method by an ablation study.

Based on domain adaptation theory, we also derive an upper bound on the generalization error on the unseen target domain. We show that the terms in this upper bound are implicitly minimized by our method, which partially explains its success.
2 RELATED WORKS
We summarize and compare with related domain generalization (DG) methods from two perspectives: DG with domain labels and DG without domain labels.
DG with domain labels. When domain labels are available, there are three categories of DG methods. First, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b; Piratla et al., 2020) learn domain-invariant features by aligning feature distributions or by common/specific feature decomposition. Second, (Li et al., 2019a; Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019; Du et al., 2020a;b) are based on the meta-learning approach, which splits the given source domains into meta-train and meta-test domains and trains the learner in an episodic training paradigm. Third, (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Wang et al., 2020) augment fake domain data to enhance the generalization capability of the learner. Our method relates most closely to the second category, but we consider the DG problem in the domain-free setting and adversarially split the training dataset to synthesize domain shift via a principled min-max optimization method, instead of the leave-one-domain-out splitting used in these methods.

DG without domain labels. When domain labels are unavailable, to enhance the generalization ability of the learner, Wang et al. (2019) extract robust feature representations by projecting out superficial patterns like color and texture. Carlucci et al. (2019) propose to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divide samples into several latent domains via clustering and learn domain-invariant features via adversarial training of a feature extractor and a domain discriminator. Huang et al. (2020) discard the dominant activated features, forcing the learner to activate the remaining features that correlate with labels. Volpi et al. (2018) and Qiao et al. (2020) propose adversarial data augmentation to tackle the setting in which the training set comprises a single domain. In methodology, these methods either explicitly force the learner to extract robust features (Wang et al., 2019; Matsuura & Harada, 2020; Huang et al., 2020) or augment new data to increase the training data (Carlucci et al., 2019; Qiao et al., 2020; Volpi et al., 2018). In contrast, our method is a novel meta-learning approach for DG that introduces adversarial splitting of the training dataset during training, without relying on data/domain augmentation.
3 METHOD
In our setting, since we neither assume nor know the domains in the training dataset, the training data could be independently sampled from several underlying source domains or from a single source domain. We denote $S = \{(x_i, y_i)\}_{i=1}^{N}$ as the training dataset. Our goal is to train a learner on $S$ that generalizes well to an unseen target domain.
In the following sections, we introduce details of our proposed model in Sect. 3.1, followed by its optimization method in Sect. 3.2. We also investigate L2-normalization for domain generalization in Sect. 3.3. Theoretical analysis for our method is presented in Sect. 4. Experimental results are reported in Sect. 5. Sect. 6 concludes this paper.
3.1 DOMAIN-FREE ADVERSARIAL SPLITTING MODEL
As mentioned in Sect. 1, we model DG as a learning problem that enforces the learner to generalize well for any train/val-subset splitting of the training dataset. The learner is trained using a meta-learning approach (Finn et al., 2017). To formulate our idea mathematically, we first introduce some notation. We denote $f$ as a function/learner ($f$ could be a deep neural network, e.g., ResNet (He et al., 2016)) that outputs the classification score of an input image, $l$ as the loss such as cross-entropy, and $S_t$ and $S_v$ as the train-subset and val-subset respectively, such that $S = S_t \cup S_v$ and $S_t \cap S_v = \emptyset$. The formulated optimization problem for domain generalization is

$$\min_{w} \frac{1}{|\Gamma_\xi|}\sum_{S_v \in \Gamma_\xi} L(\theta(w); S_v) + R(w) \quad \text{s.t.} \quad \theta(w) = \arg\min_{\theta} L(\theta; S_t, w), \; S_t = S - S_v. \tag{1}$$
In Eq. (1), Γξ = {Sv : Sv ⊂ S, |Sv| = ξ} is the set of all possible val-subsets of S with length of ξ, St = S − Sv is train-subset paired with each Sv , L(θ(w);Sv) = 1|Sv| ∑ (x,y)∈Sv l(fθ(w)(x), y) is the loss on Sv, where θ(w) is the parameters of f , L(θ;St, w) is L(θ;St) with θ initialized by w andR(w) is regularization term. In the optimization model of Eq. (1), the parameter θ(w) of learner trained on St is treated as a function of the initial parameter w. To force the learner trained on St to generalize well on Sv, we directly minimize the loss L(θ(w);Sv), dubbed generalization loss, on val-subset Sv, w.r.t. the parameter θ(w) trained on St. Solving Eq. (1) will force the learner to be able to generalize well from any train-subset to corresponding val-subset.
Since |Γξ| may be extremely large, it is infeasible to enumerate all possible train/val splittings. Thus, we propose the following adversarial splitting model instead,
min w max Sv∈Γξ
L(θ(w);Sv) +R(w)
s.t. θ(w) = arg min θ L(θ;St, w), St = S − Sv.
(2)
In the min-max problem of Eq. (2), the train/val (St/Sv) splitting is optimized to maximize the generalization loss to increase the domain shift between train and val subsets by finding the hardest splitting to the learner. While w is optimized by minimizing the generalization loss of learner over
the splitting. Solving this adversarial splitting optimization model in Eq. (2) enforces the learner to be generalizable even for the worst-case splitting. We therefore expect that the trained learner is robust to the domain shifts within the training dataset. For the regularization termR(w), we set it to be the training loss on St (i.e.,R(w) = L(w;St)), which additionally constrains that the learner with parameter w should be effective on St (Li et al., 2018a). The effect of the hyper-parameter ξ will be discussed in Appendix A.2.
In conventional adversarial machine learning, adversarial training is imposed on adversarial samples and learner to increase robustness of the learner to adversarial corruption (Goodfellow et al., 2015). While in our optimization model of Eq. (2), adversarial training is conducted on data splitting and learner to force the learner to be robust to domain shift between train/val subsets. Our model bridges adversarial training and meta-learning. It is a general learning framework for domain generalization and is a complement to adversarial machine learning.
3.2 OPTIMIZATION
This section focuses on the optimization of Eq. (2). Since Eq. (2) is a min-max optimization problem, we alternately update Sv and w by fixing the other one as known. We should also consider the inner loop for optimization of θ(w) in the bi-layer optimization problem of Eq. (2). We next discuss these updating steps in details. The convergence and computational cost of this algorithm will be also discussed in Appendix A.3 and A.4 respectively.
Inner loop for optimization of θ(w). We adopt finite steps of gradient descent to approximate the minimizer θ(w) of the inner objective L(θ;St) with initial value w. This approximation technique has been introduced in machine learning community several years ago (Sun & Tappen, 2011; Finn et al., 2017; Fan et al., 2018). For convenience of computation, following (Li et al., 2018a; Dou et al., 2019), we only conduct gradient descent by one step as
θ(w) = w − α∇θL(θ;St)|θ=w, (3)
where α is the step size of inner optimization and its effect will be discussed in Appendix A.2.
Optimizingw with fixed Sv . For convenience, we denote gtw = ∇θL(θ;St)|θ=w. Fixing Sv (St is then fixed), w can also be updated by gradient descent, i.e.,
w = w − η∇w ( L(w − αgtw;Sv) +R(w) ) , (4)
where η is the step size of outer optimization.
Finding the hardest splitting Sv with fixed w. Fixing w, to find Sv ∈ Γξ to maximize L(w − αgtw;Sv), we do first order Taylor expansion for L(w−αgtw;Sv) by L(w−αgtw;Sv) ≈ L(w;Sv)− α 〈gtw, gvw〉, where gvw = ∇θL(θ;Sv)|θ=w and 〈·, ·〉 denotes the inner product. From the definition of L, gtw and gvw, the optimization problem of maxSv∈Γξ{L(w;Sv) − α 〈gtw, gvw〉} can be written as maxSv∈Γξ{ 1|Sv| ∑ (x,y)∈Sv l (fw(x), y)− α 〈∇wl(fw(x), y), g t w〉}. This problem is equivalent to the following splitting formulation:
max Sv,A ∑ (x,y)∈Sv l (fw(x), y)− α 〈∇wl(fw(x), y), A〉 s.t. A = gtw, Sv ∈ Γξ, (5)
where we introduced an auxiliary variable A. Eq. (5) can be solved by alternatively updating Sv and A. Given A, we compute and rank the values of l (fw(x), y) − α 〈∇wl(fw(x), y), A〉 for all (x, y) ∈ S and select the largest ξ samples to constitute the Sv. Given Sv (St is then given), we update A by A = gtw = 1 |St| ∑ (x,y)∈St ∇wl(fw(x), y). We also discuss details and convergence of this alternative iteration in Appendix C. Since computing gradient w.r.t. all parameters is time and memory consuming, we only compute gradient w.r.t. parameters of the final layer of learner f .
3.3 L2-NORMALIZATION FOR EXTRACTED FEATURE
L2-normalization has been used in face recognition (Liu et al., 2017; Wang et al., 2018) and domain adaptation (Saito et al., 2019; Gu et al., 2020), but is rarely investigated in domain generalization. We investigate L2-normalization in domain generalization in this paper. It is found surprisingly in experiments that L2-normalization not only improves the performance of learner (see Sect. 5.4), but
also mitigates gradient explosion (see Sect. 5.5) that occurs frequently during the training of metalearning for DG (Finn et al., 2017; Dou et al., 2019). We next discuss details of L2-normalization in our method and analyze why L2-normalization mitigates gradient explosion.
Feature L2-normalization. The L2-normalization is used as a component of our learner f . Specifically, we decompose f into feature extractor fe (e.g., the convolutional layers of ResNet), the transform fn representing L2-normalization and classifier f c, i.e., f = f c ◦ fn ◦ fe. The feature of input image x extracted by fe is fed to fn to output an unit vector z, i.e., z = fn(fe(x)) = f
e(x) ‖fe(x)‖ .
The classifier f c consists of unit weight vectors W = [w1, w2, · · · , wK ], where K is the number of classes and ‖wk‖ = 1,∀k. f c takes z as an input and outputs the classification score vector σm,s(W T z). σm,s(·) is the marginal softmax function defined by
[σm,s(W T z)]k = exp(s(wTk z −mI{k=y}))∑K k′=1 exp(s(w T k′z −mI{k′=y})) , k = 1, 2, · · · ,K, (6)
where y is the label of x, [·]k indicates the k-th element, I{a} is the indicator function that returns 1 if a is true, 0 otherwise, m and s are hyper-parameters indicating margin and radius respectively.
Analysis of mitigating gradient explosion. We find that L2-normalization mitigates gradient explosion in the training of meta-learning for domain generalization. For simplicity, we analyze the gradient norm of the loss w.r.t. the parameters of f^c in the meta-learning process of domain generalization, with f^e as a fixed function. Without loss of generality, we consider the case that K = 2 (i.e., binary classification), s = 1 and m = 0. In this case, we have the following proposition.

Proposition 1. Under the above setting, if the input feature of f^c is L2-normalized, the gradient norm of the loss w.r.t. the parameters of f^c in the meta-learning process of DG is bounded.
Sketch of proof. Given feature z, the loss of binary classification is L(w; z) = −y log(σ(w^T z)) − (1 − y) log(1 − σ(w^T z)), where σ is the sigmoid function. Let w′ = w − α∇_w L(w; z); then ∇_w L(w′; z) = (I − αH)∇_{w′} L(w′; z), where H is the Hessian matrix. The gradient norm satisfies ‖∇_w L(w′; z)‖ ≤ ‖I − αH‖ ‖∇_{w′} L(w′; z)‖ ≤ (1 + |α| ‖H‖) ‖∇_{w′} L(w′; z)‖. Since ∇_{w′} L(w′; z) = (p − y)z and H = p(1 − p)zz^T, where p = σ(w^T z), we have ‖H‖ = sup_{u:‖u‖=1} ‖Hu‖ ≤ sup_{u:‖u‖=1} ‖zz^T u‖ ≤ ‖z‖² and ‖∇_{w′} L(w′; z)‖ ≤ ‖z‖. If ‖z‖ = 1, we have ‖∇_w L(w′; z)‖ ≤ 1 + |α|.

According to Proposition 1, L2-normalization can mitigate gradient explosion under the above setting. The analysis of the gradient norm of the loss w.r.t. the parameters of both f^c and f^e in the meta-learning process is much more complex and is left for future work.
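As a quick sanity check, the following hypothetical snippet verifies the bound of Proposition 1 numerically under the stated setting (binary classification, s = 1, m = 0, ‖z‖ = 1); it is an illustration we add here, not part of the original experiments:

import torch

def bce_loss(w, z, y):
    # Binary cross-entropy with linear score w^T z (the f^c of the analysis)
    p = torch.sigmoid(w @ z)
    return -(y * torch.log(p) + (1 - y) * torch.log(1 - p))

torch.manual_seed(0)
alpha = 0.5
for _ in range(1000):
    z = torch.nn.functional.normalize(torch.randn(8), dim=0)  # ||z|| = 1
    w = torch.randn(8, requires_grad=True)
    y = torch.randint(0, 2, ()).float()
    g = torch.autograd.grad(bce_loss(w, z, y), w, create_graph=True)[0]
    meta_grad = torch.autograd.grad(bce_loss(w - alpha * g, z, y), w)[0]
    assert meta_grad.norm() <= 1 + abs(alpha)   # the bound of Proposition 1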
4 THEORETICAL ANALYSIS
This section presents a theoretical understanding of our method. We first derive a generalization error bound on the target domain in Theorem 1 for the general setting of meta-learning for DG. Then, based on Theorem 1, we theoretically explain the reason for the success of our method.
Without loss of generality, we consider the binary classification problem. We denote H as the set of all possible f, i.e., H = {f_w : w ∈ R^M}, where M is the number of parameters. For any S_v ∈ Γ_ξ and S_t = S − S_v, we let H_{S_t} = {f_{θ(w)} : θ(w) = argmin_θ L(θ;S_t, w), w ∈ R^M}. The meta-learning approach for DG is to find a function in H_{S_t} that minimizes the classification loss on S_v. Note that, although the training samples in S may be sampled from several distributions, they can still be seen as i.i.d. samples from a mixture of these distributions. We next denote P = Σ_{d=1}^D β_d P_d as the mixture distribution with β_d representing the sampling ratio of the d-th source domain, ε^Ψ_Q(f) = E_{(x,y)∼Q}[I_{{Ψ(f(x))≠y}}] as the generalization error on the distribution Q of the unseen target domain, ε̂^Ψ_{S_v}(f) = (1/|S_v|) Σ_{(x,y)∈S_v} I_{{Ψ(f(x))≠y}} as the empirical error on S_v, VC(H) as the VC-dimension of H, and Ψ(·) as the prediction rule, such as the Bayes optimal predictor. Based on the domain adaptation theory (Ben-David et al., 2007; 2010) and inspired by the analysis in (Saito et al., 2019), we have the following theorem.

Theorem 1. Let γ be a constant and assume E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}]. Then, given any S_v ∈ Γ_ξ and S_t = S − S_v, for any δ ∈ (0, 1), with probability at least 1 − 2δ, we have ∀f ∈ H_{S_t},
Table 1: Results of the MSDS experiment on PACS based on ResNet18 and ResNet50.

Backbone   Target   D-SAM   JiGen   MASF   MMLD   MetaReg   RSC    Baseline    DFAS (ours)
ResNet18   A        77.3    79.4    80.3   81.3   83.7      83.4   80.2±0.4    84.2±0.1
ResNet18   C        72.4    75.3    77.2   77.2   77.2      80.3   75.5±0.5    79.5±0.3
ResNet18   P        95.3    96.0    95.0   96.1   95.5      96.0   95.9±0.1    95.8±0.1
ResNet18   S        77.8    71.4    71.7   72.3   70.3      80.9   70.1±0.9    82.1±0.4
ResNet18   Avg      80.7    80.5    81.0   81.8   81.7      85.1   80.4        85.4
ResNet50   A        -       -       82.9   -      87.2      87.9   86.1±0.2    89.1±0.1
ResNet50   C        -       -       80.5   -      79.2      82.2   79.2±0.4    84.6±0.2
ResNet50   P        -       -       95.0   -      97.6      97.9   97.6±0.1    96.8±0.2
ResNet50   S        -       -       72.3   -      70.3      83.5   70.3±0.7    85.6±0.3
ResNet50   Avg      -       -       82.7   -      83.6      87.8   83.3        89.0
$$\epsilon^{\Psi_l}_Q(f) \le \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \log\frac{4}{\delta}\right)} + C_3, \qquad (7)$$
where
$$B(S_v) = C_1 - \inf_{f' \in H_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} \mathbb{I}_{\{l(f'(x),y)>\gamma\}}, \qquad (8)$$
$C_1 = \sup_{S'_v \in \Gamma_\xi} \sup_{f' \in H_{S-S'_v}} \mathbb{E}_Q[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}]$, $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(H^{\Psi_l}_{S-S'_v}) \log \frac{2e\xi}{VC(H^{\Psi_l}_{S-S'_v})}$, $C_3 \ge \sup_{S'_v \in \Gamma_\xi} \inf_{f' \in H_{S-S'_v}} \{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$, $H^{\Psi_l}_{S-S'_v} = \{\Psi_l \circ f : f \in H_{S-S'_v}\}$, and $\Psi_l$ is a loss-related indicator defined by
$$\Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x),y) > \gamma, \\ 0 & \text{otherwise}. \end{cases} \qquad (9)$$
Proof is given in Appendix D. The assumption E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}] in Theorem 1 is realistic because the data of Q are not accessed at training time, and the learner trained on data of P should have a smaller classification loss on P than on Q. In Theorem 1, C_1, C_2, C_3 are constants w.r.t. f. In Eq. (7), the generalization error ε^{Ψ_l}_Q(f) on Q is bounded by the empirical error ε̂^{Ψ_l}_{S_v}(f) on S_v, the term B(S_v) that measures the discrepancy between P and Q, and the last two constant terms in Eq. (7).
To obtain a lower ε^{Ψ_l}_Q(f), we need to minimize ε̂^{Ψ_l}_{S_v}(f) and B(S_v). Minimizing B(S_v) w.r.t. S_v is equivalent to
$$\max_{S_v \in \Gamma_\xi} \; \inf_{f \in H_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} \mathbb{I}_{\{l(f(x),y)>\gamma\}}. \qquad (10)$$
Intuitively, Eq. (10) means finding an S_v ∈ Γ_ξ such that the infimum ratio of examples in S_v having loss greater than γ is maximized. This min-max problem of Eq. (10) for computing the error bound bears a similar idea to our min-max problem. Our adversarial splitting model in Eq. (2) can implicitly realize the goal of Eq. (10) and meanwhile ensure a lower ε̂^{Ψ_l}_{S_v}(f) for any S_v.
The maximization in Eq. (10) corresponds to our adversarial splitting that finds the hardest val-subset Sv for the learner in Eq. (2). The infimum in Eq. (10) corresponds to the minimization of the loss in Eq. (2) on Sv w.r.t. the learner parameterized by θ(w). Instead of using indicator function I in Eq. (10), in our model of Eq. (2), we choose differentiable classification loss for easier optimization.
5 EXPERIMENTS
We verify the effectiveness of our method in three types of experimental settings: Multi Source with Domain Shift (MSDS), in which the training data are from several source domains and there exists domain shift between training and test data; Single Source with Domain Shift (SSDS), in which the training data are from a single source domain and there exists domain shift between training and test data; and Same Source and Target Domain (SSTD), in which the training and test data are from the same single domain. The source code will be released online.
We conduct experiments on three benchmark datasets. PACS (Li et al., 2017) contains four domains, including art painting (A), cartoon (C), photo (P), sketch (S), sharing seven classes. OfficeHome (Volpi et al., 2018), a dataset widely used in domain adaptation and recently utilized in domain generalization, consists of four domains: Art (Ar), Clipart (Cl), Product (Pr), Real World (Rw), sharing 65 classes. Both of these two datasets are utilized to conduct experiments in settings of MSDS and SSDS. CIFAR-10 (Krizhevsky et al., 2009) is taken for the experimental setting of SSTD.
5.1 TYPE I: MULTI SOURCE WITH DOMAIN SHIFT (MSDS)
In the setting of MSDS, following (Carlucci et al., 2019), we use leave-one-domain-out cross-validation, i.e., training on three domains and testing on the remaining unseen domain, on PACS and Office-Home. Note that the domain labels are not used during training. We adopt ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the resulting network is taken as the feature extractor f^e. Full implementation details are reported in Appendix B.
We compare our method with several state-of-the-art methods, including D-SAM (D'Innocente & Caputo, 2018), JiGen (Carlucci et al., 2019), MASF (Dou et al., 2019), MMLD (Matsuura & Harada, 2020), MetaReg (Balaji et al., 2018), and RSC (Huang et al., 2020). The results on PACS and Office-Home are reported in Table 1 and Table 2 respectively. Our DFAS achieves state-of-the-art results with both ResNet18 and ResNet50 on both PACS (85.4%, 89.0%) and Office-Home (64.3%, 70.3%), outperforming RSC by 0.3% and 1.2% on PACS based on ResNet18 and ResNet50 respectively, and by 1.2% on Office-Home based on ResNet18 (these methods do not report results on Office-Home using ResNet50). Compared with the Baseline, which directly aggregates the three source domains to train the learner with a standard fully-connected layer as classifier f^c, DFAS improves performance by 5.0% and 5.7% on PACS based on ResNet18 and ResNet50 respectively, and by 2.1% and 2.1% on Office-Home based on ResNet18 and ResNet50 respectively. In Table 1, on PACS, DFAS significantly outperforms the Baseline in almost all tasks except when P is taken as the target domain. Notably, in the task where domain S, whose style is extremely different from the other three domains, is the target domain, DFAS boosts the accuracy of the Baseline by 12.0% and 15.3% based on ResNet18 and ResNet50 respectively. This indicates that our method can generalize well when the domain shift is large. Office-Home is challenging for DG since its number of classes is larger than in other datasets. As shown in Table 2, DFAS outperforms the Baseline stably in almost all tasks on Office-Home. These performance improvements demonstrate the effectiveness of our method when the training data are from multiple source domains and the unseen target domain differs from the source domains.
5.2 TYPE II: SINGLE SOURCE WITH DOMAIN SHIFT (SSDS)
We conduct this type of experiment on PACS based on ResNet18. In this experiment, we train the learner on one domain and test on each of the remaining three domains, resulting in 12 tasks in total. Implementation details are given in Appendix B. Our method is compared with related methods, including the Baseline that directly trains the learner with a standard fully-connected layer as classifier f^c on the source domain, JiGen (Carlucci et al., 2019) and SagNet (Nam et al., 2019). Results are reported in Table 3. Our DFAS outperforms the Baseline and SagNet by 4.2% and 4.7% respectively. We observe that our method outperforms the Baseline in 10 of the 12 tasks. The performance boosts are large in tasks where domain S is the source domain. These improvements demonstrate the effectiveness of our method when the training data are from a single source domain and the unseen target domain differs from it.
5.3 TYPE III: SAME SOURCE AND TARGET DOMAIN (SSTD)
We also apply our DG method to the common recognition task where the training and test data are from the same domain, i.e., SSTD, on the CIFAR-10 dataset. To investigate the effect of training-set size, we sample different numbers of training data from the provided training set (i.e., the source domain). Implementation details are in Appendix B. As shown in Table 4, our DFAS outperforms the Baseline, JiGen and MMLD by 2.4%, 3.6% and 4.3% respectively on average. The results of JiGen and MMLD are obtained by running their code on CIFAR-10. We observe that DFAS outperforms the Baseline and the compared methods for all training-set sizes. In general, the performance boost is larger when the number of training data is smaller. This may be because the learner is more prone to overfitting with a smaller training set, and our DFAS is designed to extract better generalizable features.
5.4 ABLATION STUDY
To further verify the effectiveness of each component of our method, we conduct additional ablation experiments on the PACS dataset based on ResNet18 in both the MSDS and SSDS settings. The results are reported in Table 5 and Table 6.
In Tables 5 and 6, L2-norm denotes the feature L2-normalization defined in Sect. 3.3. Rand-split denotes the random splitting strategy, in which we randomly split the train/val subsets at each step of updating the parameters of the learner. Adv-split denotes the adversarial splitting model, in which we update the worst-case splitting by solving the maximization problem in Eq. (5) once per epoch during training.
Effectiveness of L2-normalization. In Table 5, L2-norm (82.7%) outperforms Baseline (80.4%) by 2.3% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms Adv-split (83.6%) by 1.8% in the MSDS setting. In Table 6, L2-norm (63.1%) outperforms Baseline (62.4%) by 0.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms Adv-split (65.1%) by 1.5% in the SSDS setting. These performance improvements demonstrate that feature L2-normalization is useful in domain generalization.
Effectiveness of the adversarial splitting model. In Table 5, Adv-split (83.6%) outperforms Baseline (80.4%) by 3.2% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms L2-norm (82.7%) by 2.7% in the MSDS setting. In Table 6, Adv-split (65.1%) outperforms Baseline (62.4%) by 2.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms L2-norm (63.1%) by 3.5% in the SSDS setting. These results indicate that our proposed adversarial splitting model is effective.
Effectiveness of adversarial splitting over random splitting. In Table 5, L2-norm + Adv-split (85.4%) outperforms L2-norm + Rand-split (84.3%) by 1.1% and Adv-split (83.6%) outperforms Rand-split (82.8%) by 0.8% in the MSDS setting. In Table 6, L2-norm + Adv-split (66.6%) outperforms L2-norm + Rand-split (65.4%) by 1.2% and Adv-split (65.1%) outperforms Rand-split (64.1%) by 1.0% in the SSDS setting. These results demonstrate that the adversarial splitting model outperforms the random splitting strategy in different experimental settings.
Due to space limit, we add more ablation experiments in Appendix A.1 to further compare different splittings, including adversarial splitting, domain-label-based splitting and random splitting.
5.5 MITIGATING GRADIENT EXPLOSION BY L2-NORMALIZATION.
To show that L2-normalization can mitigate gradient explosion, we run the same experiments independently 50 times each, with and without L2-normalization, and count the number of occurrences of gradient explosion, reported in Table 7. From Table 7, we observe that L2-normalization mitigates gradient explosion.
6 CONCLUSION
In this paper, we unify adversarial training and meta-learning in the proposed Domain-Free Adversarial Splitting (DFAS) framework to tackle the general domain generalization problem. Extensive experiments show the effectiveness of the proposed method. We are interested in deeper theoretical understanding and more applications of our method in future work.
A ANALYSIS
A.1 COMPARISON OF ADVERSARIAL SPLITTING AND DOMAIN-LABEL-BASED SPLITTING
In this section, we compare our adversarial splitting with the domain-label-based splitting commonly used in meta-learning-based DG methods. Due to variations of style, pose, sub-classes, etc., the internal inconsistency within a dataset is complicated. Domain labels partially capture this inconsistency but cannot cover all of it. Our adversarial splitting method does not rely on domain labels. It iteratively finds the train/val splitting that is hardest for the learner, to maximize the inconsistency, and trains the learner to generalize well under this hardest splitting, in an adversarial training fashion. This strategy explores the possible inconsistency within the training dataset more flexibly, adaptively to the learner, and can potentially enhance the generalization ability of the learner.
We first show empirically in Table 8 that the domain-label-based splitting (denoted Label-split) is not as hard for the learner as our adversarial splitting (Adv-split). In Table 8, we report the values of the objective function in Eq. (5) for Adv-split and Label-split with the learner fixed at different network parameters w_i taken at different epochs (1st, 2nd, 5th and 10th) of the training process. A larger value indicates that the splitting is harder for the learner (i.e., the network). It can be observed that Label-split is not as hard for the learner as Adv-split.
We also conduct experiments on PACS in the MSDS setting to fairly compare different splittings, including adversarial splitting (Adv-split), domain-label-based splitting (Label-split) and random splitting (Rand-split). The results are reported in Table 9, which shows that adversarial splitting outperforms both random splitting and domain-label-based splitting when the training data are from multiple domains.
When the training data are from only a single domain, our adversarial splitting also performs well (as in Table 3). However, domain-label-based splitting cannot be used in this setting, since there is no domain label available.
A.2 EFFECT OF HYPER-PARAMETERS
Effect of hyper-parameter ξ. In Fig. 1(a), we show the performance of our method when varying the hyper-parameter ξ, i.e., the size of the val-subset S_v in the adversarial splitting of the training dataset. The best result is obtained when ξ = |S|/2, and the results are similar when ξ/|S| ranges from 0.3 to 0.7.
Effect of hyper-parameter α. We evaluate the effect of α in the MSDS setting on the PACS dataset in Fig. 1(b). From Fig. 1(b), the accuracy is stable for values of α over the large range from 1e-6 to 1e-4. A small α results in a small step size for parameter updating in the meta-learning framework and limits the benefits of meta-learning and adversarial splitting. A larger α results in a larger step size for gradient-descent-based network updating, which may fail to decrease the training loss from the optimization perspective.
Effect of hyper-parameter m. The effect of m is evaluated in the MSDS setting on the PACS dataset in Fig. 1(c). Fig. 1(c) shows that the result is not sensitive to the value of m.
A.3 CONVERGENCE
We verify the convergence of DFAS with errors and losses in different tasks in Fig. 2. In Fig. 2(a) and Fig. 2(b), we show the classification error curves on target domains (A and Ar respectively) in the MSDS setting. In Fig. 2(c), we show the training loss of DFAS on task A in the MSDS setting. These training curves indicate that DFAS converges during training. We also observe, in Fig. 2(a) and Fig. 2(b), that DFAS is more stable than the Baseline.
A.4 COMPUTATIONAL COST OF ADVERSARIAL SPLITTING AND RANDOM SPLITTING
We compare the computational cost of adversarial splitting and random splitting in this section. Since we only update the worst-case splitting once per epoch, instead of at each parameter-update step, the computational cost is only slightly higher than that of random splitting. To show this, we compare the total training times of adversarial splitting and random splitting for the same number of steps (20000) in Table 10.
From Table 10, the training time of Adv-split is only 5.6% (0.33/5.90) higher than that of Rand-split.
A.5 VISUALIZATION OF FEATURE SPACE
We visualize the feature space learned by our method of DFAS and Baseline (shown in Fig. 3), by t-SNE (Maaten & Hinton, 2008). It appears that DFAS yields better separation of classes and better alignment of distributions of source and unseen target domains, which possibly explains the accuracy improvements achieved by our DFAS.
B IMPLEMENTATION DETAILS
For the setting of MSDS, we use ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the resulting network is taken as the feature extractor f^e. The dimension of the bottleneck layer is set to 512 when the backbone is ResNet18, as in (Saito et al., 2019), and 256 when the backbone is ResNet50, as in (Gu et al., 2020). Following (Gu et al., 2020), s is set to 7.5 for PACS and 10.0 for Office-Home; m is set to 0.2 for PACS and 0.1 for Office-Home. ξ is set to |S|/2. SGD with momentum 0.9 is used to update the parameters of the learner. The learning rate of the classifier and bottleneck layer is 10 times that of the convolutional layers, which is widely adopted in domain adaptation (Long et al., 2015; Ganin et al., 2016). Following (Ganin et al., 2016), the learning rate of the convolutional layers is adjusted by η = 0.001 / (1 + 10p)^0.75, where p is the optimization progress linearly changing from 0 to 1. The learning rate α of the inner-loop optimization is set to 10^-5. The parameters are updated for 20000 steps and the hardest val-subset is updated every 200 steps. The batch size is set to 64. The running mean and running variance of the Batch Normalization (BN) layers are fixed to the values pre-trained on ImageNet during training, which is discussed in (Du et al., 2020a). Due to the memory limit, when running experiments based on ResNet50 we adopt the first-order approximation (Finn et al., 2017) that stops the gradient of g^t_w in Eq. (4) to reduce memory and computational cost.
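For concreteness, the learning-rate schedule above can be written as the following small Python helper; the function name and the explicit 10x factor for the classifier group are our illustrative rendering of the description above:

def lr_schedule(p, eta0=0.001):
    # p: training progress, linearly increasing from 0 to 1
    conv_lr = eta0 / (1 + 10 * p) ** 0.75   # convolutional layers
    head_lr = 10 * conv_lr                  # classifier and bottleneck layer
    return conv_lr, head_lr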
For the setting of SSDS, we conduct experiments based on ResNet18 on PACS. The implementation details are the same as for MSDS. For the setting of SSTD, we conduct experiments based on ResNet18 on CIFAR-10. The hyper-parameters s and m are set to 8.0 and 0.2 respectively. Other implementation details are the same as for MSDS, except that the running mean and running variance of the BN layers are updated.
We implement the experiments using PyTorch (Paszke et al., 2019) on a single NVIDIA Tesla P100 GPU.
C OPTIMIZATION ALGORITHM FOR FINDING THE HARDEST Sv
C.1 OPTIMIZATION ALGORITHM
To solve the problem of
$$\max_{S_v, A} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha \left\langle \nabla_w l(f_w(x), y),\, A \right\rangle \quad \text{s.t.} \quad A = g^t_w, \; S_v \in \Gamma_\xi, \qquad (11)$$
we design an alternating iteration algorithm in Sect. 3.2. Specifically, we initialize A with the gradient of a sample randomly selected from S. Then we alternately update S_v and A.
Given A, S_v is updated by solving
$$\max_{S_v} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha \left\langle \nabla_w l(f_w(x), y),\, A \right\rangle \quad \text{s.t.} \quad S_v \subset S, \; |S_v| = \xi, \qquad (12)$$
where the constraints are derived from the definition of Γ_ξ. Eq. (12) indicates that the optimal S_v consists of the ξ samples having the largest values of l(f_w(x), y) − α⟨∇_w l(f_w(x), y), A⟩. Thus we compute and rank the values of l(f_w(x), y) − α⟨∇_w l(f_w(x), y), A⟩ for all (x, y) ∈ S and select the ξ largest samples to constitute S_v.
Given S_v (S_t is then given), we update A to satisfy the constraint A = g^t_w in Eq. (11); then
$$A = g^t_w = \frac{1}{|S_t|} \sum_{(x,y)\in S_t} \nabla_w l(f_w(x), y). \qquad (13)$$
C.2 CONVERGENCE IN EXPERIMENTS
We show empirically the convergence of this alternating iteration algorithm in Fig. 4, using the values of the objective function in Eq. (11). Figure 4 shows that the objective value converges after only a few iterations.

We also check whether the splitting changes once the objective value has converged. To do this, we count the ratio of changed sample indexes in S_v at each iteration, as in Table 11. Table 11 shows that the splitting no longer changes once the objective value has converged.
C.3 TOY EXAMPLE.
We present a toy example in this section to check whether the algorithm can find the optimal solution. The toy example is a 2-dimensional classification problem, as shown in Fig. 5, where different colors indicate different classes. A fully-connected layer without bias is used as the network (learner). We split the data of the first class (blue points) with our algorithm. The candidate solutions with their corresponding objective function values are given in Table 12.
The solutions produced during the iterations of our algorithm are reported in Table 13. They converge to (1, 3, 4), which is the optimal solution in Table 12. This indicates that our algorithm finds the optimal splitting for the toy example. We also give the code of this toy example below, and the reader may rerun it to verify the results.
Code for the toy example:

import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# Generate a 2-D toy dataset with two classes and plot the first class with indices.
np.random.seed(1)
data = make_blobs(n_samples=12, centers=2)
x1 = data[0][data[1] == 0]
x2 = data[0][data[1] == 1]
plt.plot(x1[:, 0], x1[:, 1], '*')
plt.plot(x2[:, 0], x2[:, 1], '*')
for t, x in enumerate(x1):
    plt.text(x[0], x[1], str(t))
plt.savefig('samples.pdf')

# A linear learner without bias.
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Linear(2, 2, bias=False).to(device)
torch.nn.init.xavier_uniform_(model.weight)

feat = torch.Tensor(data[0]).to(device)
label = torch.LongTensor(data[1]).to(device)
loss_fun = torch.nn.CrossEntropyLoss()

# Per-sample losses and per-sample gradients of the weight matrix.
Loss, Grads = [], []
for i in range(len(feat)):
    out = torch.nn.functional.linear(feat[i].view(1, -1), weight=model.weight, bias=None)
    loss_x = loss_fun(out, label[i].view(-1))
    grad = torch.autograd.grad(loss_x, model.weight)[0]
    Loss.append(loss_x.item())
    Grads.append(grad.cpu().numpy())

# We split the data of the first class.
Loss = np.array(Loss)[data[1] == 0]
Grads = np.array(Grads)[data[1] == 0]
alpha = 0.001

def adv_loss(val_index):
    # Objective of Eq. (11) for a given val-subset (averaged form).
    train_index = np.delete(np.arange(len(Loss)), val_index)
    grad_val = Grads[val_index].mean(axis=0)
    grad_train = Grads[train_index].mean(axis=0)
    return Loss[val_index].mean() - alpha * np.sum(grad_val * grad_train)

# Brute-force search over all val-subsets of size 3.
n = len(Loss)
Solutions, Values = [], []
for i in range(n - 2):
    for j in range(i + 1, n - 1):
        for k in range(j + 1, n):
            Solutions.append([i, j, k])
for val_index in Solutions:
    Values.append(adv_loss(val_index))
optimal_solution = Solutions[int(np.argmax(Values))]
print("all possible solutions:", Solutions)
print("objective function values of possible solutions:", Values)
print("optimal solution:", optimal_solution,
      "\nobjective function value of optimal solution:", max(Values))

# Our alternating algorithm: initialize A with the gradient of a random sample,
# then alternate between picking the val-subset and updating A as in Eq. (12)-(13).
A = Grads[np.random.randint(n)]
values_tmp, solutions_tmp = [], []
mtr_index = np.random.choice(np.arange(n), size=n // 2, replace=False)
val_index = np.delete(np.arange(n), mtr_index)
scores = Loss - alpha * np.sum(Grads * A[None], axis=(1, 2))
values_tmp.append(scores[val_index].mean())
solutions_tmp.append(val_index)
for _ in range(5):
    scores = Loss - alpha * np.sum(Grads * A[None], axis=(1, 2))
    idx_sort = np.argsort(scores)
    mtr_index = idx_sort[:n // 2]        # train-subset: smallest scores
    mte_index = idx_sort[n // 2:]        # val-subset: largest scores
    values_tmp.append(scores[mte_index].mean())
    solutions_tmp.append(mte_index)
    A = Grads[mtr_index].mean(axis=0)    # A = g_w^t over the train-subset
print("our optimal solution:", solutions_tmp[-1],
      "\nobjective function value of our optimal solution:", values_tmp[-1])
print(solutions_tmp, values_tmp)
D PROOF OF THEOREM 1
We first introduce the VC-dimension based generalization bound and the domain adaptation theory in Appendix D.1, then present two lemmas in Appendix D.2, and finally give the proof of Theorem 1 in Appendix D.3.
D.1 PRELIMINARY
VC-dimension based generalization bound.

Theorem A-1. (Abu-Mostafa et al., 2012) Let S be the set of training data i.i.d. sampled from distribution P. For any δ ∈ (0, 1), with probability at least 1 − δ, we have for all h (h : X → {0, 1}) in hypothesis space H,
$$|\epsilon_P(h) - \hat{\epsilon}_S(h)| \le \sqrt{\frac{8}{|S|}\left(VC(H)\log\frac{2e|S|}{VC(H)} + \log\frac{4}{\delta}\right)}, \qquad (14)$$
where $\epsilon_P(h) = \mathbb{E}_{(x,y)\sim P}[\mathbb{I}_{\{h(x)\neq y\}}]$ and $\hat{\epsilon}_S(h) = \frac{1}{|S|}\sum_{(x,y)\in S}\mathbb{I}_{\{h(x)\neq y\}}$.
Domain adaptation theory.

Theorem A-2. (Ben-David et al., 2007; 2010) For any h in hypothesis space H, we have
$$\epsilon_Q(h) \le \epsilon_P(h) + \frac{1}{2} d_H(P, Q) + \lambda^*, \qquad (15)$$
where $\lambda^* \ge \inf_{h' \in H}\{\epsilon_P(h') + \epsilon_Q(h')\}$ and
$$d_H(P, Q) = 2 \sup_{h \in H} \left| \mathbb{E}_P[h = 1] - \mathbb{E}_Q[h = 1] \right| \qquad (16)$$
is the H-divergence.
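As a toy illustration of Eq. (16), which is our own hypothetical addition and not part of the original appendix, the following snippet estimates d_H by Monte Carlo for the family of 1-D threshold classifiers h_t(x) = I{x > t} on two Gaussians:

import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(0.0, 1.0, 100000)   # samples from P = N(0, 1)
xq = rng.normal(1.0, 1.0, 100000)   # samples from Q = N(1, 1)
thresholds = np.linspace(-4.0, 5.0, 400)
gaps = [abs((xp > t).mean() - (xq > t).mean()) for t in thresholds]
d_H = 2 * max(gaps)   # analytic value here: 2*(Phi(0.5) - Phi(-0.5)) ~ 0.77
print(d_H)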
D.2 LEMMAS
Lemma A-1. For any S_v ∈ Γ_ξ and S_t = S − S_v, ∀δ ∈ (0, 1), with probability at least 1 − δ, we have ∀f ∈ H_{S_t},
$$\left|\epsilon^\Psi_P(f) - \hat{\epsilon}^\Psi_{S_v}(f)\right| \le \sqrt{\frac{8}{|S_v|}\left(VC(H^\Psi_{S_t})\log\frac{2e|S_v|}{VC(H^\Psi_{S_t})} + \log\frac{4}{\delta}\right)}, \qquad (17)$$
where $\epsilon^\Psi_P(f) = \mathbb{E}_{(x,y)\sim P}[\mathbb{I}_{\{\Psi(f(x))\neq y\}}]$ is the generalization error on distribution P, $\hat{\epsilon}^\Psi_{S_v}(f) = \frac{1}{|S_v|}\sum_{(x,y)\in S_v}\mathbb{I}_{\{\Psi(f(x))\neq y\}}$ is the empirical error, $H^\Psi_{S_t} = \{\Psi \circ f : f \in H_{S_t}\}$, $VC(H^\Psi_{S_t})$ is the VC-dimension of $H^\Psi_{S_t}$, and Ψ(·) is the prediction rule such as the Bayes optimal predictor, i.e., $\Psi(f(x)) = \mathbb{I}_{\{f(x) \ge \frac{1}{2}\}}$.

Proof: From the definition of $H^\Psi_{S_t}$, for any $f \in H_{S_t}$ there exists an $h_f \in H^\Psi_{S_t}$ such that $h_f = \Psi \circ f$. Applying Theorem A-1, with probability at least 1 − δ, we have ∀f ∈ H_{S_t},
$$\left|\epsilon^\Psi_P(f) - \hat{\epsilon}^\Psi_{S_v}(f)\right| = \left|\epsilon_P(h_f) - \hat{\epsilon}_{S_v}(h_f)\right| \le \sqrt{\frac{8}{|S_v|}\left(VC(H^\Psi_{S_t})\log\frac{2e|S_v|}{VC(H^\Psi_{S_t})} + \log\frac{4}{\delta}\right)}. \qquad (18)$$
Lemma A-2. For any S_v ∈ Γ_ξ and S_t = S − S_v, let $\epsilon^\Psi_P(g) = \inf_{f \in H_{S_t}} \epsilon^\Psi_P(f)$ and $\hat{\epsilon}^\Psi_{S_v}(h) = \inf_{f \in H_{S_t}} \hat{\epsilon}^\Psi_{S_v}(f)$. Then ∀δ ∈ (0, 1), with probability at least 1 − δ, we have
$$\epsilon^\Psi_P(g) \ge \hat{\epsilon}^\Psi_{S_v}(h) - \sqrt{\frac{8}{|S_v|}\left(VC(H^\Psi_{S_t})\log\frac{2e|S_v|}{VC(H^\Psi_{S_t})} + \log\frac{4}{\delta}\right)}. \qquad (19)$$

Proof: From the definition of g and h, we have $\hat{\epsilon}^\Psi_{S_v}(g) \ge \hat{\epsilon}^\Psi_{S_v}(h)$. ∀δ ∈ (0, 1), with probability at least 1 − δ, we have
$$\epsilon^\Psi_P(g) - \hat{\epsilon}^\Psi_{S_v}(h) = \epsilon^\Psi_P(g) - \hat{\epsilon}^\Psi_{S_v}(g) + \hat{\epsilon}^\Psi_{S_v}(g) - \hat{\epsilon}^\Psi_{S_v}(h) \ge \epsilon^\Psi_P(g) - \hat{\epsilon}^\Psi_{S_v}(g) \ge -\sqrt{\frac{8}{|S_v|}\left(VC(H^\Psi_{S_t})\log\frac{2e|S_v|}{VC(H^\Psi_{S_t})} + \log\frac{4}{\delta}\right)}. \qquad (20)$$
Thus, Eq. (19) holds.
D.3 PROOF OF THEOREM 1
Proof:
We denote by $H^{\Psi_l}$ the hypothesis space such that ∀h ∈ $H^{\Psi_l}$,
$$h(x) = \Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x), y) > \gamma, \\ 0 & \text{otherwise}, \end{cases} \qquad (21)$$
for f ∈ H. Then
$$\begin{aligned} d_{H^{\Psi_l}}(P, Q) &= 2 \sup_{h \in H^{\Psi_l}} \left| \mathbb{E}_P[h = 1] - \mathbb{E}_Q[h = 1] \right| \\ &= 2 \sup_{f \in H} \left| \mathbb{E}_P[\Psi_l(f(x)) = 1] - \mathbb{E}_Q[\Psi_l(f(x)) = 1] \right| \\ &= 2 \sup_{f \in H} \left| \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] - \mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \right| \\ &= 2 \sup_{f \in H} \left\{ \mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] - \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \right\} \\ &\le 2 \sup_{f \in H} \mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] - 2 \inf_{f \in H} \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}]. \end{aligned} \qquad (22)$$
In the fourth equality, we use the assumption that $\mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \ge \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}]$. Given any S_v ∈ Γ_ξ and S_t = S − S_v, we replace H by H_{S_t}; then
$$d_{H^{\Psi_l}_{S_t}}(P, Q) \le 2 \sup_{f \in H_{S_t}} \mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] - 2 \inf_{f \in H_{S_t}} \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}] \le 2C_1 - 2 \inf_{f \in H_{S_t}} \mathbb{E}_P[\mathbb{I}_{\{l(f(x),y)>\gamma\}}], \qquad (23)$$
where $C_1 = \sup_{S'_v \in \Gamma_\xi} \sup_{f \in H_{S-S'_v}} \mathbb{E}_Q[\mathbb{I}_{\{l(f(x),y)>\gamma\}}]$. Applying Theorem A-2, for any f ∈ H_{S_t}, we have
$$\epsilon^{\Psi_l}_Q(f) \le \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f' \in H_{S_t}} \mathbb{E}_P[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}] + \lambda^*(S_v), \qquad (24)$$
where $\lambda^*(S_v) \ge \inf_{f' \in H_{S-S_v}} \{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$. Letting $C_3 = \sup_{S'_v \in \Gamma_\xi} \lambda^*(S'_v) \ge \sup_{S'_v \in \Gamma_\xi} \inf_{f' \in H_{S-S'_v}} \{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$, we have
$$\epsilon^{\Psi_l}_Q(f) \le \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f' \in H_{S_t}} \mathbb{E}_P[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}] + C_3. \qquad (25)$$
Applying Lemma A-1 to the first term on the right side of Eq. (25), ∀δ ∈ (0, 1), with probability at least 1 − δ, we have ∀f ∈ H_{S_t},
$$\epsilon^{\Psi_l}_P(f) \le \hat{\epsilon}^{\Psi_l}_{S_v}(f) + \sqrt{\frac{8}{|S_v|}\left(VC(H^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(H^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)}. \qquad (26)$$
Applying Lemma A-2 to the third term on the right side of Eq. (25), ∀δ ∈ (0, 1), with probability at least 1 − δ, we have
$$\inf_{f' \in H_{S_t}} \mathbb{E}_P[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}] \ge \inf_{f' \in H_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} \mathbb{I}_{\{l(f'(x),y)>\gamma\}} - \sqrt{\frac{8}{|S_v|}\left(VC(H^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(H^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)}. \qquad (27)$$
Combining Eqs. (25), (26) and (27), and thanks to the union bound, for any δ ∈ (0, 1), with probability at least 1 − 2δ, we have ∀f ∈ H_{S_t},
$$\epsilon^{\Psi_l}_Q(f) \le \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{|S_v|}\left(VC(H^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(H^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)} + C_3, \qquad (28)$$
where $B(S_v) = C_1 - \inf_{f' \in H_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} \mathbb{I}_{\{l(f'(x),y)>\gamma\}}$. Using the fact that $|S_v| = \xi$ and letting $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(H^{\Psi_l}_{S-S'_v}) \log \frac{2e\xi}{VC(H^{\Psi_l}_{S-S'_v})}$, we have
$$\epsilon^{\Psi_l}_Q(f) \le \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \log\frac{4}{\delta}\right)} + C_3. \qquad (29)$$

1. What is the novel approach proposed by the paper in combining meta-learning and adversarial training?
2. How does the proposed method ignore domain labels to enhance the learned model's generalization ability towards domain shift problems?
3. What is the contribution of the paper regarding L2-normalization in mitigating gradient explosion in the MAML algorithm?
4. Are there any concerns or suggestions for improvement regarding the paper's explanation of the training process or ablation study?

Review
The paper provides a novel way to combine meta-learning and adversarial training for domain generalisation. Different from existing methods, the authors propose to split the training dataset into train/val subsets in an iteratively adversarial way, regardless of domain labels, by which the model can be trained to learn to generalise well from training subset to val subset via meta-learning in each iteration.
Reason for score:
Overall, I vote for accepting. It is ingenious that the paper proposes to ignore domain labels to enhance the learned model's generalization ability towards domain shift problems. My major concern is that the paper provides limited explanation of how the meta-learning algorithm, Model-Agnostic Meta-Learning (MAML), is utilized in the training process (see details in cons).
The paper proposes to discard domain labels when training a model to generalise well, by which the original training set containing several domains can be seen as one large unlabeled domain and then split into training/val subsets to simulate domain shift; hence MAML can be used for the training process.
The authors incorporate the min-max optimization method following adversarial training to let the model be more effective in learning domain shifts between training/val subsets.
One of the paper's contributions is that it surprisingly finds that L2-normalization can help mitigate gradient explosion in the MAML algorithm.
The paper provides comprehensive experiments of different domain shift settings as well as theoretical proofs to evaluate the proposed method, which are quite solid and convincing.
Cons:
It would be better if the paper provided more details in the ablation study. For example, it mentions that DFAS-3 finds the hardest val-subset based only on the loss by setting α = 0 in Eq. (5); there could be more analysis of the hyper-parameter α.
ICLR | Title
Domain-Free Adversarial Splitting for Domain Generalization
Abstract
Domain generalization is an approach that utilizes several source domains to train a learner that generalizes to an unseen target domain, in order to tackle the domain shift issue. It has drawn much attention in the machine learning community. This paper aims to learn to generalize well to an unseen target domain without relying on knowledge of the number of source domains or domain labels. We unify adversarial training and meta-learning in a novel proposed Domain-Free Adversarial Splitting (DFAS) framework. In this framework, we model domain generalization as a learning problem that enforces the learner to generalize well for any train/val subset splitting of the training dataset. To achieve this goal, we propose a min-max optimization problem that can be solved by an iterative adversarial training process. In each iteration, it adversarially splits the training dataset into train/val subsets to maximize the domain shift between them using the current learner, and then updates the learner on this splitting to generalize well from the train-subset to the val-subset using a meta-learning approach. Extensive experiments on three benchmark datasets under three different settings on the source and target domains show that our method achieves state-of-the-art results, and we confirm the effectiveness of our method by ablation study. We also derive a generalization error bound for theoretical understanding of our method.
1 INTRODUCTION
Deep learning has achieved great success in image recognition (He et al., 2016; Krizhevsky et al., 2012; Simonyan & Zisserman, 2014). However, deep learning methods mostly succeed when the training and test data are sampled from the same distribution (i.e., the i.i.d. assumption). This assumption is often violated in real-world applications, since the equipment/environments that generate the data often differ between training and test datasets. When there exists a distribution difference (domain shift (Torralba & Efros, 2011)) between the training and test datasets, the performance of the trained model, i.e., the learner, degrades significantly.
To tackle the domain shift issue, the domain adaptation approach (Pan & Yang, 2010; Daume III & Marcu, 2006; Huang et al., 2007) learns a transferable learner from a source domain to a target domain. Domain adaptation methods align the distributions of different domains either in feature space (Long et al., 2015; Ganin et al., 2016) or in raw pixel space (Hoffman et al., 2018), which relies on unlabeled data from the target domain at training time. However, in many applications it is unrealistic to access unlabeled target data; this prevents us from using the domain adaptation approach in such settings and motivates research on the learning problem of domain generalization.
Domain generalization (DG) approach (Blanchard et al., 2011; Muandet et al., 2013) commonly uses several source domains to train a learner that can generalize to an unseen target domain. The underlying assumption is that there exists a latent domain invariant feature space across source domains and unseen target domain. To learn the domain invariant features, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b) explicitly align distributions of different source domains in feature space. (Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019) split source domains into meta-train and meta-test to simulate domain shift and train learner in a meta-learning approach. (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Ryu et al., 2020) augment images or features to train learner to enhance generalization capability.
Conventional domain generalization methods assume that the domain labels are available. But in a more realistic scenario, the domain labels may be unknown (Wang et al., 2019). To handle this domain-free setting, Carlucci et al. (2019) combines supervised learning and self-supervised learning to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divides samples into several latent domains via clustering and trains a domain invariant feature extractor via adversarial training. Huang et al. (2020) discards the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Another line of works (Volpi et al., 2018; Qiao et al., 2020) tackle the single source setting that the training set comprises a single domain, and the train and test data are from different domains.
In this work, we focus on a general learning scenario of domain generalization as follows. First, we do not know the domain label of each sample and do not assume that there are several domains in the training dataset. Second, we do not assume that the training and test data are from different domains (e.g., styles). Note that previous domain-free DG methods (Matsuura & Harada, 2020) commonly evaluate on datasets (e.g., PACS) composed of several domains, even though they do not use domain labels in training.
In our domain-free setting, we do not assume or know the domains in the training dataset; we therefore model domain generalization as a learning problem in which the learner should generalize well for any splitting into train/val subsets, i.e., synthetic source/target domains, over the training dataset. This explicitly enforces that the trained learner should be generalizable for any possible domain shift within the training dataset.
To achieve this goal, we propose an adversarial splitting model formulated as a min-max optimization problem, due to the difficulty of enumerating all splittings. In this min-max problem, we adversarially split the training dataset into train/val subsets by maximizing the domain shift between them based on the given learner, and then update the learner by minimizing the prediction error on the val-subset using a meta-learning approach given the splitting. By optimizing this min-max problem, we enforce the learner to generalize well even in the worst-case splitting. We also investigate L2-normalization of features in our domain generalization method. It is surprisingly found that L2-normalization can improve the performance of the learner and mitigate gradient explosion in the meta-learning process of DG. We further theoretically analyze the underlying reasons for this finding. The proposed domain generalization approach is dubbed Domain-Free Adversarial Splitting, i.e., DFAS.
To verify the effectiveness of our method, we conduct extensive experiments on the benchmark datasets PACS, Office-Home and CIFAR-10 under different settings with multiple/single source domains. In experiments where the training data are from several source domains, our method achieves state-of-the-art results on both the PACS and Office-Home datasets. We also find that our method significantly outperforms baselines in experiments where the training data are from a single source domain, on PACS and CIFAR-10. We further confirm the effectiveness of our method by ablation study.
Based on domain adaptation theory, we also derive an upper bound of the generalization error on unseen target domain. We analyze that the terms in this upper bound are implicitly minimized by our method. This theoretical analysis partially explains the success of our method.
2 RELATED WORKS
We summarize and compare related domain generalization (DG) methods from two perspectives, i.e., DG with domain labels and DG without domain labels.
DG with domain labels. When domain labels are available, there are three categories of DG methods. First, (Muandet et al., 2013; Ghifary et al., 2015; Li et al., 2018b; Piratla et al., 2020) learn domain-invariant features by aligning feature distributions or by common/specific feature decomposition. Second, (Li et al., 2019a; Balaji et al., 2018; Li et al., 2019b; 2018a; Dou et al., 2019; Du et al., 2020a;b) are based on the meta-learning approach that splits the given source domains into meta-train and meta-test domains and trains the learner in an episodic training paradigm. Third, (Shankar et al., 2018; Carlucci et al., 2019; Zhou et al., 2020; Wang et al., 2020) augment fake domain data to enhance the generalization capability of the learner. Our method relates most closely to the second category. Differently, however, we consider the DG problem in a domain-free setting and adversarially split the training dataset to synthesize domain shift in a principled min-max optimization method, instead of using the leave-one-domain-out splitting of these methods.
DG without domain labels. When domain labels are unavailable, to enhance the generalization ability of the learner, Wang et al. (2019) extract robust feature representations by projecting out superficial patterns like color and texture. Carlucci et al. (2019) propose to solve jigsaw puzzles of the training images. Matsuura & Harada (2020) divide samples into several latent domains via clustering and learn domain-invariant features via adversarial training of a feature extractor and a domain discriminator. Huang et al. (2020) discard the dominant activated features, forcing the learner to activate remaining features that correlate with labels. Volpi et al. (2018) and Qiao et al. (2020) propose adversarial data augmentation to tackle the setting where the training set comprises a single domain. In methodology, these methods either explicitly force the learner to extract robust features (Wang et al., 2019; Matsuura & Harada, 2020; Huang et al., 2020) or augment new data to increase the training data (Carlucci et al., 2019; Qiao et al., 2020; Volpi et al., 2018). Our method, in contrast, is a novel meta-learning approach for DG that introduces adversarial splitting of the training dataset during training, without relying on data/domain augmentation.
3 METHOD
In our setting, since we do not assume and know the domains in the training dataset, the training data could be sampled independently from several underlying source domains or just from a single source domain. We denote S = {(x_i, y_i)}_{i=1}^N as the training dataset. Our goal is to train a learner with S that can generalize well on an unseen target domain.
In the following sections, we introduce details of our proposed model in Sect. 3.1, followed by its optimization method in Sect. 3.2. We also investigate L2-normalization for domain generalization in Sect. 3.3. Theoretical analysis for our method is presented in Sect. 4. Experimental results are reported in Sect. 5. Sect. 6 concludes this paper.
3.1 DOMAIN-FREE ADVERSARIAL SPLITTING MODEL
As mentioned in Sect. 1, we model DG as a learning problem that enforces the learner to be able to generalize well for any train/val subsets splitting of the training dataset. The learner is trained using meta-learning approach (Finn et al., 2017). To formulate our idea mathematically, we first introduce some notations. We denote f as a function/learner (f could be a deep neural network, e.g., ResNet (He et al., 2016)) that outputs classification score of the input image, l as the loss such as cross-entropy, St and Sv as the train-subset and val-subset respectively such that S = St ∪ Sv and St ∩ Sv = ∅. The formulated optimization problem for domain generalization is
$$\min_w \; \frac{1}{|\Gamma_\xi|} \sum_{S_v \in \Gamma_\xi} L(\theta(w); S_v) + R(w) \quad \text{s.t.} \quad \theta(w) = \arg\min_\theta L(\theta; S_t, w), \; S_t = S - S_v. \qquad (1)$$
In Eq. (1), Γ_ξ = {S_v : S_v ⊂ S, |S_v| = ξ} is the set of all possible val-subsets of S of size ξ, and S_t = S − S_v is the train-subset paired with each S_v. $L(\theta(w); S_v) = \frac{1}{|S_v|}\sum_{(x,y)\in S_v} l(f_{\theta(w)}(x), y)$ is the loss on S_v, where θ(w) denotes the parameters of f, L(θ; S_t, w) is L(θ; S_t) with θ initialized by w, and R(w) is a regularization term. In the optimization model of Eq. (1), the parameter θ(w) of the learner trained on S_t is treated as a function of the initial parameter w. To force the learner trained on S_t to generalize well on S_v, we directly minimize the loss L(θ(w); S_v), dubbed the generalization loss, on the val-subset S_v w.r.t. the parameter θ(w) trained on S_t. Solving Eq. (1) forces the learner to generalize well from any train-subset to the corresponding val-subset.
Since |Γξ| may be extremely large, it is infeasible to enumerate all possible train/val splittings. Thus, we propose the following adversarial splitting model instead,
$$\min_w \max_{S_v \in \Gamma_\xi} \; L(\theta(w); S_v) + R(w) \quad \text{s.t.} \quad \theta(w) = \arg\min_\theta L(\theta; S_t, w), \; S_t = S - S_v. \qquad (2)$$
In the min-max problem of Eq. (2), the train/val (St/Sv) splitting is optimized to maximize the generalization loss to increase the domain shift between train and val subsets by finding the hardest splitting to the learner. While w is optimized by minimizing the generalization loss of learner over
the splitting. Solving this adversarial splitting optimization model in Eq. (2) enforces the learner to be generalizable even for the worst-case splitting. We therefore expect that the trained learner is robust to the domain shifts within the training dataset. For the regularization termR(w), we set it to be the training loss on St (i.e.,R(w) = L(w;St)), which additionally constrains that the learner with parameter w should be effective on St (Li et al., 2018a). The effect of the hyper-parameter ξ will be discussed in Appendix A.2.
In conventional adversarial machine learning, adversarial training is imposed on adversarial samples and learner to increase robustness of the learner to adversarial corruption (Goodfellow et al., 2015). While in our optimization model of Eq. (2), adversarial training is conducted on data splitting and learner to force the learner to be robust to domain shift between train/val subsets. Our model bridges adversarial training and meta-learning. It is a general learning framework for domain generalization and is a complement to adversarial machine learning.
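To summarize the overall procedure implied by Eq. (2), the following is a high-level Python sketch of the alternating training loop; all names here (random_split, per_sample_stats, hardest_split, sample_batch, meta_step) are hypothetical placeholders for routines in the spirit of Eqs. (3)-(5) detailed in Sect. 3.2, not the released implementation:

def train_dfas(f, loss_fn, dataset, xi, num_steps, split_every=200):
    val_idx = random_split(dataset, xi)                  # initial S_v
    for step in range(num_steps):
        if step % split_every == 0:                      # refresh the worst-case splitting
            losses, grads = per_sample_stats(f, loss_fn, dataset)
            val_idx = hardest_split(losses, grads, xi)   # maximization in Eq. (2)
        batch_t = sample_batch(dataset, exclude=val_idx) # from S_t = S - S_v
        batch_v = sample_batch(dataset, include=val_idx) # from S_v
        meta_step(f, loss_fn, batch_t, batch_v)          # minimization in Eq. (2)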
3.2 OPTIMIZATION
This section focuses on the optimization of Eq. (2). Since Eq. (2) is a min-max optimization problem, we alternately update S_v and w, fixing one while updating the other. We also need to handle the inner loop for the optimization of θ(w) in the bi-level optimization problem of Eq. (2). We next discuss these updating steps in detail. The convergence and computational cost of this algorithm are discussed in Appendix A.3 and A.4 respectively.
Inner loop for optimization of θ(w). We adopt a finite number of gradient-descent steps to approximate the minimizer θ(w) of the inner objective L(θ;S_t) with initial value w. This approximation technique was introduced to the machine learning community several years ago (Sun & Tappen, 2011; Finn et al., 2017; Fan et al., 2018). For computational convenience, following (Li et al., 2018a; Dou et al., 2019), we conduct only one step of gradient descent, i.e.,
θ(w) = w − α∇θL(θ;St)|θ=w, (3)
where α is the step size of inner optimization and its effect will be discussed in Appendix A.2.
Optimizing w with fixed S_v. For convenience, we denote g^t_w = ∇_θ L(θ;S_t)|_{θ=w}. Fixing S_v (S_t is then fixed), w can also be updated by gradient descent, i.e.,

w = w − η ∇_w ( L(w − α g^t_w; S_v) + R(w) ),  (4)
where η is the step size of outer optimization.
Finding the hardest splitting S_v with fixed w. Fixing w, to find S_v ∈ Γ_ξ that maximizes L(w − α g^t_w; S_v), we take the first-order Taylor expansion L(w − α g^t_w; S_v) ≈ L(w; S_v) − α⟨g^t_w, g^v_w⟩, where g^v_w = ∇_θ L(θ;S_v)|_{θ=w} and ⟨·,·⟩ denotes the inner product. From the definitions of L, g^t_w and g^v_w, the problem max_{S_v∈Γ_ξ} {L(w; S_v) − α⟨g^t_w, g^v_w⟩} can be written as
$$\max_{S_v \in \Gamma_\xi} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha \left\langle \nabla_w l(f_w(x), y),\, g^t_w \right\rangle.$$
This problem is equivalent to the following splitting formulation:
$$\max_{S_v, A} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha \left\langle \nabla_w l(f_w(x), y),\, A \right\rangle \quad \text{s.t.} \quad A = g^t_w, \; S_v \in \Gamma_\xi, \qquad (5)$$
where we introduce an auxiliary variable A. Eq. (5) can be solved by alternately updating S_v and A. Given A, we compute and rank the values of l(f_w(x), y) − α⟨∇_w l(f_w(x), y), A⟩ for all (x, y) ∈ S and select the ξ samples with the largest values to constitute S_v. Given S_v (S_t is then given), we update A by A = g^t_w = (1/|S_t|) Σ_{(x,y)∈S_t} ∇_w l(f_w(x), y). We discuss details and the convergence of this alternating iteration in Appendix C. Since computing gradients w.r.t. all parameters is time and memory consuming, we only compute gradients w.r.t. the parameters of the final layer of the learner f.
3.3 L2-NORMALIZATION FOR EXTRACTED FEATURE
L2-normalization has been used in face recognition (Liu et al., 2017; Wang et al., 2018) and domain adaptation (Saito et al., 2019; Gu et al., 2020), but is rarely investigated in domain generalization. We investigate L2-normalization in domain generalization in this paper. We find, surprisingly, that L2-normalization not only improves the performance of the learner (see Sect. 5.4), but also mitigates the gradient explosion (see Sect. 5.5) that occurs frequently during the training of meta-learning for DG (Finn et al., 2017; Dou et al., 2019). We next discuss the details of L2-normalization in our method and analyze why L2-normalization mitigates gradient explosion.
Feature L2-normalization. The L2-normalization is used as a component of our learner f. Specifically, we decompose f into a feature extractor f^e (e.g., the convolutional layers of ResNet), the transform f^n representing L2-normalization, and a classifier f^c, i.e., f = f^c ∘ f^n ∘ f^e. The feature of an input image x extracted by f^e is fed to f^n to output a unit vector z, i.e.,
$$z = f^n(f^e(x)) = \frac{f^e(x)}{\|f^e(x)\|}.$$
The classifier f^c consists of unit weight vectors W = [w_1, w_2, ..., w_K], where K is the number of classes and ‖w_k‖ = 1, ∀k. f^c takes z as input and outputs the classification score vector σ_{m,s}(W^T z), where σ_{m,s}(·) is the marginal softmax function defined by
$$[\sigma_{m,s}(W^T z)]_k = \frac{\exp\!\big(s(w_k^T z - m\,\mathbb{I}_{\{k=y\}})\big)}{\sum_{k'=1}^{K} \exp\!\big(s(w_{k'}^T z - m\,\mathbb{I}_{\{k'=y\}})\big)}, \quad k = 1, 2, \cdots, K, \qquad (6)$$
where y is the label of x, [·]_k indicates the k-th element, I_{a} is the indicator function that returns 1 if a is true and 0 otherwise, and m and s are hyper-parameters indicating the margin and radius respectively.
Analysis of mitigating gradient explosion. We find that L2-normalization mitigates gradient explosion in the training of meta-learning for domain generalization. For simplicity, we analyze the gradient norm of the loss w.r.t. the parameters of f^c in the meta-learning process of domain generalization, with f^e as a fixed function. Without loss of generality, we consider the case that K = 2 (i.e., binary classification), s = 1 and m = 0. In this case, we have the following proposition.

Proposition 1. Under the above setting, if the input feature of f^c is L2-normalized, the gradient norm of the loss w.r.t. the parameters of f^c in the meta-learning process of DG is bounded.
Sketch of proof. Given feature z, the loss of binary classification is L(w; z) = −y log(σ(w^T z)) − (1 − y) log(1 − σ(w^T z)), where σ is the sigmoid function. Let w′ = w − α∇_w L(w; z); then ∇_w L(w′; z) = (I − αH)∇_{w′} L(w′; z), where H is the Hessian matrix. The gradient norm satisfies ‖∇_w L(w′; z)‖ ≤ ‖I − αH‖ ‖∇_{w′} L(w′; z)‖ ≤ (1 + |α| ‖H‖) ‖∇_{w′} L(w′; z)‖. Since ∇_{w′} L(w′; z) = (p − y)z and H = p(1 − p)zz^T, where p = σ(w^T z), we have ‖H‖ = sup_{u:‖u‖=1} ‖Hu‖ ≤ sup_{u:‖u‖=1} ‖zz^T u‖ ≤ ‖z‖² and ‖∇_{w′} L(w′; z)‖ ≤ ‖z‖. If ‖z‖ = 1, we have ‖∇_w L(w′; z)‖ ≤ 1 + |α|.

According to Proposition 1, L2-normalization can mitigate gradient explosion under the above setting. The analysis of the gradient norm of the loss w.r.t. the parameters of both f^c and f^e in the meta-learning process is much more complex and is left for future work.
4 THEORETICAL ANALYSIS
This section presents a theoretical understanding of our method. We first derive a generalization error bound on the target domain in Theorem 1 for the general setting of meta-learning for DG. Then, based on Theorem 1, we theoretically explain the reason for the success of our method.
Without loss of generality, we consider the binary classification problem. We denote H as the set of all possible f, i.e., H = {f_w : w ∈ R^M}, where M is the number of parameters. For any S_v ∈ Γ_ξ and S_t = S − S_v, we let H_{S_t} = {f_{θ(w)} : θ(w) = argmin_θ L(θ;S_t, w), w ∈ R^M}. The meta-learning approach for DG is to find a function in H_{S_t} that minimizes the classification loss on S_v. Note that, although the training samples in S may be sampled from several distributions, they can still be seen as i.i.d. samples from a mixture of these distributions. We next denote P = Σ_{d=1}^D β_d P_d as the mixture distribution with β_d representing the sampling ratio of the d-th source domain, ε^Ψ_Q(f) = E_{(x,y)∼Q}[I_{{Ψ(f(x))≠y}}] as the generalization error on the distribution Q of the unseen target domain, ε̂^Ψ_{S_v}(f) = (1/|S_v|) Σ_{(x,y)∈S_v} I_{{Ψ(f(x))≠y}} as the empirical error on S_v, VC(H) as the VC-dimension of H, and Ψ(·) as the prediction rule, such as the Bayes optimal predictor. Based on the domain adaptation theory (Ben-David et al., 2007; 2010) and inspired by the analysis in (Saito et al., 2019), we have the following theorem.

Theorem 1. Let γ be a constant and assume E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}]. Then, given any S_v ∈ Γ_ξ and S_t = S − S_v, for any δ ∈ (0, 1), with probability at least 1 − 2δ, we have ∀f ∈ H_{S_t},
Table 1: Results of the MSDS experiment on PACS based on ResNet18 and ResNet50.

Backbone   Target   D-SAM   JiGen   MASF   MMLD   MetaReg   RSC    Baseline    DFAS (ours)
ResNet18   A        77.3    79.4    80.3   81.3   83.7      83.4   80.2±0.4    84.2±0.1
ResNet18   C        72.4    75.3    77.2   77.2   77.2      80.3   75.5±0.5    79.5±0.3
ResNet18   P        95.3    96.0    95.0   96.1   95.5      96.0   95.9±0.1    95.8±0.1
ResNet18   S        77.8    71.4    71.7   72.3   70.3      80.9   70.1±0.9    82.1±0.4
ResNet18   Avg      80.7    80.5    81.0   81.8   81.7      85.1   80.4        85.4
ResNet50   A        -       -       82.9   -      87.2      87.9   86.1±0.2    89.1±0.1
ResNet50   C        -       -       80.5   -      79.2      82.2   79.2±0.4    84.6±0.2
ResNet50   P        -       -       95.0   -      97.6      97.9   97.6±0.1    96.8±0.2
ResNet50   S        -       -       72.3   -      70.3      83.5   70.3±0.7    85.6±0.3
ResNet50   Avg      -       -       82.7   -      83.6      87.8   83.3        89.0
$$\epsilon^{\Psi_l}_Q(f) \le \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \log\frac{4}{\delta}\right)} + C_3, \qquad (7)$$
where
$$B(S_v) = C_1 - \inf_{f' \in H_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} \mathbb{I}_{\{l(f'(x),y)>\gamma\}}, \qquad (8)$$
$C_1 = \sup_{S'_v \in \Gamma_\xi} \sup_{f' \in H_{S-S'_v}} \mathbb{E}_Q[\mathbb{I}_{\{l(f'(x),y)>\gamma\}}]$, $C_2 = \sup_{S'_v \in \Gamma_\xi} VC(H^{\Psi_l}_{S-S'_v}) \log \frac{2e\xi}{VC(H^{\Psi_l}_{S-S'_v})}$, $C_3 \ge \sup_{S'_v \in \Gamma_\xi} \inf_{f' \in H_{S-S'_v}} \{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$, $H^{\Psi_l}_{S-S'_v} = \{\Psi_l \circ f : f \in H_{S-S'_v}\}$, and $\Psi_l$ is a loss-related indicator defined by
$$\Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x),y) > \gamma, \\ 0 & \text{otherwise}. \end{cases} \qquad (9)$$
Proof is given in Appendix D. The assumption E_Q[I_{{l(f(x),y)>γ}}] ≥ E_P[I_{{l(f(x),y)>γ}}] in Theorem 1 is realistic because the data of Q are not accessed at training time, and the learner trained on data of P should have a smaller classification loss on P than on Q. In Theorem 1, C_1, C_2, C_3 are constants w.r.t. f. In Eq. (7), the generalization error ε^{Ψ_l}_Q(f) on Q is bounded by the empirical error ε̂^{Ψ_l}_{S_v}(f) on S_v, the term B(S_v) that measures the discrepancy between P and Q, and the last two constant terms in Eq. (7).
To obtain a lower $\epsilon^{\Psi_l}_{Q}(f)$, we need to minimize both $\hat{\epsilon}^{\Psi_l}_{S_v}(f)$ and $B(S_v)$. Minimizing $B(S_v)$ w.r.t. $S_v$ is equivalent to

$$\max_{S_v \in \Gamma_\xi} \inf_{f \in \mathcal{H}_{S_t}} \frac{1}{|S_v|} \sum_{(x,y)\in S_v} I_{\{l(f(x),y)>\gamma\}}. \quad (10)$$
Intuitively, Eq. (10) means finding an $S_v \in \Gamma_\xi$ such that the infimum ratio of examples in $S_v$ with loss greater than $\gamma$ is maximized. This min-max problem of Eq. (10) for computing the error bound bears a similar idea to our min-max problem. Our adversarial splitting model in Eq. (2) implicitly realizes the goal of Eq. (10) while ensuring a lower $\hat{\epsilon}^{\Psi_l}_{S_v}(f)$ for any $S_v$.
The maximization in Eq. (10) corresponds to our adversarial splitting that finds the hardest val-subset $S_v$ for the learner in Eq. (2). The infimum in Eq. (10) corresponds to the minimization of the loss in Eq. (2) on $S_v$ w.r.t. the learner parameterized by $\theta(w)$. Instead of using the indicator function $I$ of Eq. (10), in our model of Eq. (2) we choose a differentiable classification loss for easier optimization.
5 EXPERIMENTS
We verify the effectiveness of our method in three types of experimental settings: Multi Source with Domain Shift (MSDS), in which the training data come from several source domains and there is domain shift between training and test data; Single Source with Domain Shift (SSDS), in which the training data come from a single source domain and there is domain shift between training and test data; and Same Source and Target Domain (SSTD), in which the training and test data come from the same single domain. The source code will be released online.
We conduct experiments on three benchmark datasets. PACS (Li et al., 2017) contains four domains, art painting (A), cartoon (C), photo (P), and sketch (S), sharing seven classes. Office-Home (Volpi et al., 2018), a dataset widely used in domain adaptation and recently utilized in domain generalization, consists of four domains, Art (Ar), Clipart (Cl), Product (Pr), and Real World (Rw), sharing 65 classes. These two datasets are used for the experiments in the MSDS and SSDS settings. CIFAR-10 (Krizhevsky et al., 2009) is used for the experimental setting of SSTD.
5.1 TYPE I: MULTI SOURCE WITH DOMAIN SHIFT (MSDS)
In the setting of MSDS, following (Carlucci et al., 2019), we use leave-one-domain-out cross-validation, i.e., training on three domains and testing on the remaining unseen domain, on PACS and Office-Home. Note that the domain labels are not used during training. We adopt ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the corresponding network is taken as the feature extractor $f^e$. Full implementation details are reported in Appendix B.
We compare our method with several state-of-the-art methods, including D-SAM (D'Innocente & Caputo, 2018), JiGen (Carlucci et al., 2019), MASF (Dou et al., 2019), MMLD (Matsuura & Harada, 2020), MetaReg (Balaji et al., 2018), and RSC (Huang et al., 2020). The results on PACS and Office-Home are reported in Table 1 and Table 2 respectively. Our DFAS achieves state-of-the-art results with both ResNet18 and ResNet50 on both PACS (85.4%, 89.0%) and Office-Home (64.3%, 70.3%), outperforming RSC by 0.3% and 1.2% on PACS based on ResNet18 and ResNet50 respectively, and by 1.2% on Office-Home based on ResNet18 (these methods do not report experiments on Office-Home using ResNet50). Compared with the Baseline that directly aggregates the three source domains to train the learner with a standard fully-connected layer as the classifier $f^c$, DFAS improves performance by 5.0% and 5.7% on PACS based on ResNet18 and ResNet50 respectively, and by 2.1% and 2.1% on Office-Home based on ResNet18 and ResNet50 respectively. In Table 1, on PACS, DFAS significantly outperforms the Baseline in all tasks except when P is the target domain. Notably, in the task where domain S, whose style differs markedly from the other three domains, is the target domain, DFAS boosts the accuracy of the Baseline by 12.0% and 15.3% based on ResNet18 and ResNet50 respectively. This indicates that our method generalizes well when the domain shift is large. Office-Home is challenging for DG since its number of classes is larger than in other datasets. As shown in Table 2, DFAS outperforms the Baseline stably in almost all tasks on Office-Home. These performance improvements demonstrate the effectiveness of our method when the training data come from multiple source domains and the unseen target domain differs from the source domains.
5.2 TYPE II: SINGLE SOURCE WITH DOMAIN SHIFT (SSDS)
We conduct this type of experiment on PACS based on ResNet18. In this experiment, we train the learner on one domain and test on each of the remaining three domains, resulting in 12 tasks in total. Implementation details are given in Appendix B. Our method is compared with related methods, including the Baseline that directly trains the learner with a standard fully-connected layer as the classifier $f^c$ on the source domain, JiGen (Carlucci et al., 2019), and SagNet (Nam et al., 2019). Results are reported in Table 3. Our DFAS outperforms the Baseline and SagNet by 4.2% and 4.7% respectively, and outperforms the Baseline in 10 of the 12 tasks. The performance boosts are large in the tasks where domain S is the source domain. These improvements demonstrate the effectiveness of our method when the training data come from a single source domain and the unseen target domain differs from it.
5.3 TYPE III: SAME SOURCE AND TARGET DOMAIN (SSTD)
We also apply our DG method to the common recognition setting in which the training and test data come from the same domain, i.e., SSTD, on the CIFAR-10 dataset. To investigate the effect of training size, we sample different sizes of training data from the provided training set (i.e., the source domain). Implementation details are in Appendix B. As shown in Table 4, our DFAS outperforms the Baseline, JiGen, and MMLD by 2.4%, 3.6%, and 4.3% respectively on average. The results of JiGen and MMLD are obtained by running their code on CIFAR-10. We observe that DFAS outperforms the Baseline and the compared methods for all training-set sizes. In general, the performance boost is larger when the number of training data is smaller. This may be because the learner is more prone to overfitting with a smaller training set, and DFAS is designed to extract features that generalize better.
5.4 ABLATION STUDY
To further verify the effectiveness of each component of our method, we conduct additional ablation experiments on the PACS dataset based on ResNet18 in both the MSDS and SSDS settings. The results are reported in Table 5 and Table 6.
In Tables 5 and 6, L2-norm denotes the feature L2-normalization defined in Sect. 3.3. Rand-split denotes the random splitting strategy, in which we randomly split the train/val subsets at each step of updating the parameters of the learner. Adv-split denotes the adversarial splitting model, in which we update the worst-case splitting by solving the maximization problem in Eq. (5) once per epoch during training.
Effectiveness of L2-normalization. In Table 5, L2-norm (82.7%) outperforms Baseline (80.4%) by 2.3% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms Adv-split (83.6%) by 1.8% in the MSDS setting. In Table 6, L2-norm (63.1%) outperforms Baseline (62.4%) by 0.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms Adv-split (65.1%) by 1.5% in the SSDS setting. These performance improvements demonstrate that feature L2-normalization is useful in domain generalization.
Effectiveness of adversarial splitting model. In Table 5, Adv-split (83.6%) outperforms Baseline (80.4%) by 3.2% and L2-norm + Adv-split (i.e., DFAS) (85.4%) outperforms L2-norm (82.7%) by 2.7% in the MSDS setting. In Table 6, Adv-split (65.1%) outperforms Baseline (62.4%) by 2.7% and L2-norm + Adv-split (i.e., DFAS) (66.6%) outperforms L2-norm (63.1%) by 3.5% in the SSDS setting. These results indicate that our proposed adversarial splitting model is effective.
Effectiveness of adversarial splitting over random splitting. In Table 5, L2-norm + Adv-split (85.4%) outperforms L2-norm + Rand-split (84.3%) by 1.1% and Adv-split (83.6%) outperforms Rand-split (82.8%) by 0.8% in the MSDS setting. In Table 6, L2-norm + Adv-split (66.6%) outperforms L2-norm + Rand-split (65.4%) by 1.2% and Adv-split (65.1%) outperforms Rand-split (64.1%) by 1.0% in the SSDS setting. These results demonstrate that the adversarial splitting model outperforms the random splitting strategy in different experimental settings.
Due to space limits, we include additional ablation experiments in Appendix A.1 to further compare different splittings, including adversarial splitting, domain-label-based splitting, and random splitting.
5.5 MITIGATING GRADIENT EXPLOSION BY L2-NORMALIZATION.
To show that L2-normalization can mitigate gradient explosion, we run the same experiment independently 50 times, each with and without L2-normalization, and count the number of occurrences of gradient explosion, reported in Table 7. From Table 7, we observe that L2-normalization can mitigate gradient explosion.
6 CONCLUSION
In this paper, we unify adversarial training and meta-learning in a novel Domain-Free Adversarial Splitting (DFAS) framework to tackle the general domain generalization problem. Extensive experiments show the effectiveness of the proposed method. We are interested in deeper theoretical understanding and further applications of our method in future work.
A ANALYSIS
A.1 COMPARISON OF ADVERSARIAL SPLITTING AND DOMAIN-LABEL-BASED SPLITTING
In this section, we compare our adversarial splitting with the domain-label-based splitting commonly used in meta-learning-based DG methods. Due to variations of style, pose, sub-class, etc., the internal inconsistency within a dataset is complicated. Domain labels partially capture this inconsistency, but cannot cover all possible internal inconsistencies. Our adversarial splitting method does not rely on domain labels. It iteratively finds the train/val splitting that is hardest for the learner, to maximize the inconsistency, and trains the learner to generalize well under this hardest splitting, in an adversarial training manner. This strategy explores the possible inconsistency within the training dataset more flexibly, adaptively to the learner, and can potentially enhance the generalization ability of the learner.
We first empirically show in Table 8 that the domain-label-based splitting (denoted Label-split) is not as hard for the learner as our adversarial splitting (Adv-split). In Table 8, we report the values of the objective function in Eq. (5) for Adv-split and Label-split, fixing the learner with the network parameters $w_i$ at different epochs (1st, 2nd, 5th, and 10th) of the training process. A larger value indicates that the splitting is harder for the learner (i.e., the network). It can be observed that Label-split is not as hard for the learner as Adv-split.
We also conduct experiments on PACS in the MSDS setting to fairly compare different splittings, including adversarial splitting (Adv-split), domain-label-based splitting (Label-split), and random splitting (Rand-split). The results are reported in Table 9, which shows that adversarial splitting outperforms random splitting and domain-label-based splitting when the training data come from multiple domains.
When the training data are from only a single domain, our adversarial splitting also performs well (as in Table 3). However, domain-label-based splitting cannot be used in this setting, since there is no domain label available.
A.2 EFFECT OF HYPER-PARAMETERS
Effect of hyper-parameter ξ. In Fig. 1(a), we show the performance of our method when varying the hyper-parameter $\xi$, i.e., the size of the val-subset $S_v$ in the adversarial splitting of the training dataset. The best result is obtained when $\xi = \frac{|S|}{2}$, and the results are similar when $\xi/|S|$ ranges from 0.3 to 0.7.
Effect of hyper-parameter α. We evaluate the effect of $\alpha$ in the MSDS setting on PACS in Fig. 1(b). From Fig. 1(b), the accuracy is stable for values of $\alpha$ in the large range of 1e-6 to 1e-4. A small $\alpha$ results in a small step size for parameter updating in the meta-learning framework, limiting the benefits from meta-learning and adversarial splitting. A larger $\alpha$ results in a larger step size for the gradient-descent-based network update, which may fail to decrease the training loss from the optimization perspective.
Effect of hyper-parameter m. The effect of $m$ is evaluated in the MSDS setting on the PACS dataset in Fig. 1(c), which shows that the result is not sensitive to the value of $m$.
A.3 CONVERGENCE
We verify the convergence of DFAS with errors and losses in different tasks in Fig. 2. In Fig. 2(a) and Fig. 2(b), we show the classification error curves on target domains (A and Ar respectively) in the MSDS setting. In Fig. 2(c), we show the training loss of DFAS on task A in the MSDS setting. These training curves indicate that DFAS converges during training. We also observe in Fig. 2(a) and Fig. 2(b) that DFAS is more stable than the Baseline.
A.4 COMPUTATIONAL COST OF ADVERSARIAL SPLITTING AND RANDOM SPLITTING
We compare the computational cost of adversarial splitting and random splitting in this section. Since we only update the worst-case splitting once per epoch, instead of at each parameter-update step, the computational cost is only slightly higher than that of random splitting. To show this, we compare the total training times of adversarial splitting and random splitting for the same number of steps (20000) in Table 10.

From Table 10, the training time of Adv-split is only 5.6% (0.33/5.90) higher than that of Rand-split.
A.5 VISUALIZATION OF FEATURE SPACE
We visualize the feature space learned by our method of DFAS and Baseline (shown in Fig. 3), by t-SNE (Maaten & Hinton, 2008). It appears that DFAS yields better separation of classes and better alignment of distributions of source and unseen target domains, which possibly explains the accuracy improvements achieved by our DFAS.
B IMPLEMENTATION DETAILS
For the setting of MSDS, we use ResNet18 and ResNet50 (He et al., 2016) pre-trained on ImageNet (Russakovsky et al., 2015). For each of them, the last fully-connected layer is replaced by a bottleneck layer, and the corresponding network is taken as the feature extractor $f^e$. The dimension of the bottleneck layer is set to 512 when the backbone is ResNet18, as in (Saito et al., 2019), and 256 when the backbone is ResNet50, as in (Gu et al., 2020). Following (Gu et al., 2020), $s$ is set to 7.5 for PACS and 10.0 for Office-Home, and $m$ is set to 0.2 for PACS and 0.1 for Office-Home. $\xi$ is set to $\frac{|S|}{2}$. SGD with momentum 0.9 is used to update the parameters of the learner. The learning rate of the classifier and bottleneck layer is 10 times that of the convolutional layers, as widely adopted in domain adaptation (Long et al., 2015; Ganin et al., 2016). Following (Ganin et al., 2016), the learning rate of the convolutional layers is adjusted by $\eta = \frac{0.001}{(1+10p)^{0.75}}$, where $p$ is the optimization progress linearly changing from 0 to 1. The learning rate $\alpha$ of the inner-loop optimization is set to $10^{-5}$. The parameters are updated for 20000 steps and the hardest val-subset is updated every 200 steps. The batch size is set to 64. The running mean and running variance of the Batch Normalization (BN) layers are fixed to the values pre-trained on ImageNet during training, as discussed in (Du et al., 2020a). Due to memory limits, when running experiments based on ResNet50, we adopt the first-order approximation (Finn et al., 2017) that stops the gradient of $g^t_w$ in Eq. (4) to reduce memory and computational cost.
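For concreteness, the 10x learning-rate rule and this schedule can be written as follows (a minimal sketch under our own naming; the released code may organize parameter groups differently):

import torch

def make_optimizer(conv_params, head_params, momentum=0.9):
    # classifier/bottleneck parameters get 10x the learning rate of conv layers
    return torch.optim.SGD([
        {'params': conv_params, 'lr_mult': 1.0},
        {'params': head_params, 'lr_mult': 10.0},
    ], lr=0.001, momentum=momentum)

def adjust_lr(optimizer, step, total_steps=20000, eta0=0.001):
    # eta = eta0 / (1 + 10 p)^0.75 with progress p in [0, 1] (Ganin et al., 2016)
    p = step / total_steps
    eta = eta0 / (1.0 + 10.0 * p) ** 0.75
    for group in optimizer.param_groups:
        group['lr'] = eta * group['lr_mult']

# usage with a dummy model: conv body and linear head get different rates
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.Flatten(),
                            torch.nn.Linear(8 * 30 * 30, 7))
opt = make_optimizer(model[0].parameters(), model[2].parameters())
for step in range(5):
    adjust_lr(opt, step)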
For the setting of SSDS, we conduct experiments based on ResNet18 on PACS. The implementation details are the same as in MSDS. For the setting of SSTD, we conduct experiments based on ResNet18 on CIFAR-10. The hyper-parameters $s$ and $m$ are set to 8.0 and 0.2 respectively. Other implementation details are the same as in MSDS, except that the running mean and running variance of the BN layers are updated.
We implement experiments using Pytorch (Paszke et al., 2019) on a single NVIDIA Tesla P100 GPU.
C OPTIMIZATION ALGORITHM FOR FINDING THE HARDEST Sv
C.1 OPTIMIZATION ALGORITHM
To solve the problem

$$\max_{S_v, A} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha\,\langle \nabla_w l(f_w(x), y), A \rangle \quad \text{s.t.} \quad A = g^t_w,\ S_v \in \Gamma_\xi, \quad (11)$$

we design an alternating iteration algorithm in Sect. 3.2. Specifically, we initialize $A$ with the gradient of a sample randomly selected from $S$. Then we alternately update $S_v$ and $A$.
Given $A$, $S_v$ is updated by solving

$$\max_{S_v} \sum_{(x,y)\in S_v} l(f_w(x), y) - \alpha\,\langle \nabla_w l(f_w(x), y), A \rangle \quad \text{s.t.} \quad S_v \subset S,\ |S_v| = \xi, \quad (12)$$

where the constraints are derived from the definition of $\Gamma_\xi$. Equation (12) indicates that the optimal $S_v$ consists of the $\xi$ samples with the largest values of $l(f_w(x), y) - \alpha\,\langle \nabla_w l(f_w(x), y), A \rangle$. Thus we compute and rank these values for all $(x, y) \in S$ and select the $\xi$ largest samples to constitute $S_v$, as sketched below.
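A minimal numpy sketch of this selection step (our illustration, mirroring the toy example in Appendix C.3):

import numpy as np

def select_val_subset(losses, grads, A, alpha, xi):
    # scores[i] = l(f_w(x_i), y_i) - alpha * <grad_i, A>
    inner = np.sum(grads * A[None], axis=tuple(range(1, grads.ndim)))
    scores = losses - alpha * inner
    order = np.argsort(scores)
    return order[-xi:], order[:-xi]   # indices of S_v (top-xi scores) and S_t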
Given $S_v$ (and hence $S_t$), we update $A$ to satisfy the constraint $A = g^t_w$ in Eq. (11):

$$A = g^t_w = \frac{1}{|S_t|} \sum_{(x,y)\in S_t} \nabla_w l(f_w(x), y). \quad (13)$$
C.2 CONVERGENCE IN EXPERIMENTS
We empirically show the convergence of this alternating iteration algorithm in Fig. 4, using the values of the objective function in Eq. (11). Figure 4 shows that the objective function converges after only a few iterations.

We also check whether the splitting changes once the objective function has converged. To do this, we count the ratio of changed sample indexes in $S_v$ at each iteration, as in Table 11. Table 11 shows that the splitting no longer changes once the objective function has converged.
C.3 TOY EXAMPLE.
We present a toy example in this section to check whether the algorithm finds the optimal solution. The toy example is a 2-dimensional classification problem, shown in Fig. 5, where different colors indicate different classes. A fully-connected layer without bias is used as the network (learner). We split the data of the first class (blue points) with our algorithm. The candidate solutions with their corresponding objective function values are given in Table 12.
The solutions produced during the iterations of our algorithm are reported in Table 13. They converge to (1,3,4), which is the optimal solution in Table 12. This indicates that our algorithm finds the optimal splitting for the toy example. We give the code of this toy example below, so the reader may rerun it to verify the results.
Code for the toy example:

import torch
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

# use the GPU when available so the script also runs on CPU-only machines
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# generate and plot the 2D toy data (two classes, 6 points each)
np.random.seed(1)
data = make_blobs(n_samples=12, centers=2)
x1 = data[0][data[1] == 0]
x2 = data[0][data[1] == 1]

plt.plot(x1[:, 0], x1[:, 1], '*')
plt.plot(x2[:, 0], x2[:, 1], '*')
for t, x in enumerate(x1):
    plt.text(x[0], x[1], str(t))
plt.savefig('samples.pdf')

# the learner: a fully-connected layer without bias
model = torch.nn.Linear(2, 2, bias=False).to(device)
torch.nn.init.xavier_uniform_(model.weight)
log = open('log.txt', 'w')
log.write('weight:' + str(model.weight.cpu().data.numpy()) + '\n')

feat = torch.Tensor(data[0]).to(device)
label = torch.LongTensor(data[1]).to(device)
loss_fun = torch.nn.CrossEntropyLoss()

# per-sample losses and gradients w.r.t. the weight
Grads = []
Loss = []
for i in range(len(feat)):
    out = torch.nn.functional.linear(feat[i].view(1, -1), weight=model.weight, bias=None)
    loss_x = loss_fun(out, label[i].view(-1))
    grad = torch.autograd.grad(loss_x, model.weight)[0]
    Loss.append(loss_x.cpu().data.numpy())
    Grads.append(grad.cpu().data.numpy())

# we split the data of the first class only
Loss = np.array(Loss)[data[1] == 0]
Grads = np.array(Grads)[data[1] == 0]
alpha = 0.001

def adv_loss(val_index):
    # objective of Eq. (11) for a candidate val-subset
    train_index = np.delete(np.arange(len(Loss)), val_index)
    loss = np.mean(Loss[val_index])
    grad_val = np.mean(Grads[val_index], axis=0)
    grad_train = np.mean(Grads[train_index], axis=0)
    return loss - alpha * np.sum(grad_val * grad_train)

# brute-force search over all splittings with |S_v| = 3
Solutions = []
Values = []
for i in range(len(Loss) - 2):
    for j in range(i + 1, len(Loss) - 1):
        for k in range(j + 1, len(Loss)):
            Solutions.append([i, j, k])
for val_index in Solutions:
    Values.append(adv_loss(val_index))

optimal_idx = np.array(Values).argmax()
optimal_solution = Solutions[optimal_idx]
optimal_value = max(Values)
print("all possible solutions:", Solutions)
print("objective function values of possible solutions:", Values)
print("optimal solution:", optimal_solution,
      "\nobjective function value of optimal solution:", optimal_value)
log.write(str({"all possible solutions": Solutions,
               "objective function values of possible solutions": Values,
               "optimal solution": optimal_solution,
               "objective function value of optimal solution": optimal_value}))

# our alternating iteration algorithm (Appendix C.1)
A = Grads[np.random.randint(len(Loss))]
values_tmp = []
solutions_tmp = []
mtr_index = np.random.choice(np.arange(len(Loss)), size=len(Loss) // 2, replace=False)
l = np.mean(Loss - alpha * np.sum(Grads * A.reshape((1, 2, 2)), axis=(1, 2)))
values_tmp.append(l)
solutions_tmp.append(np.delete(np.arange(len(Loss)), mtr_index))
for i in range(5):
    # given A, take the half of the samples with the largest adjusted loss as S_v
    D = np.sum(Grads * A, axis=(1, 2))
    Loss_ = Loss - alpha * D
    idx_sort = np.argsort(Loss_)
    mtr_index = idx_sort[:len(Loss) // 2]
    mte_index = idx_sort[len(Loss) // 2:]
    values_tmp.append(Loss_[mte_index].mean())
    solutions_tmp.append(mte_index)
    # given S_v, update A as the mean gradient over S_t (Eq. (13))
    A = np.mean(Grads[mtr_index], axis=0)

print("our optimal solution:", solutions_tmp[-1],
      "\nobjective function value of our optimal solution:", values_tmp[-1])
log.write(str({"our optimal solution": solutions_tmp[-1],
               "objective function value of our optimal solution": values_tmp[-1]}))
print(solutions_tmp, values_tmp)
log.close()
D PROOF OF THEOREM 1
We first introduce the VC-dimension-based generalization bound and domain adaptation theory in Appendix D.1, then present two lemmas in Appendix D.2, and finally give the proof of Theorem 1 in Appendix D.3.
D.1 PRELIMINARY
VC-dimension based generalization bound.
Theorem A-1. (Abu-Mostafa et al., 2012) Let $S$ be a set of training data i.i.d. sampled from a distribution $P$. For any $\delta \in (0,1)$, with probability at least $1-\delta$, we have for all $h$ ($h : \mathcal{X} \to \{0,1\}$) in a hypothesis space $\mathcal{H}$,

$$|\epsilon_P(h) - \hat{\epsilon}_S(h)| \leq \sqrt{\frac{8}{|S|}\left(VC(\mathcal{H})\log\frac{2e|S|}{VC(\mathcal{H})} + \log\frac{4}{\delta}\right)}, \quad (14)$$

where $\epsilon_P(h) = E_{(x,y)\sim P}[I_{\{h(x) \neq y\}}]$ and $\hat{\epsilon}_S(h) = \frac{1}{|S|}\sum_{(x,y)\in S} I_{\{h(x) \neq y\}}$.
Domain adaptation theory.
Theorem A-2. (Ben-David et al., 2007; 2010) For any $h$ in a hypothesis space $\mathcal{H}$, we have

$$\epsilon_Q(h) \leq \epsilon_P(h) + \frac{1}{2} d_{\mathcal{H}}(P, Q) + \lambda^*, \quad (15)$$

where $\lambda^* \geq \inf_{h'\in\mathcal{H}}\{\epsilon_P(h') + \epsilon_Q(h')\}$ and

$$d_{\mathcal{H}}(P, Q) = 2 \sup_{h\in\mathcal{H}} \left|E_P[h = 1] - E_Q[h = 1]\right| \quad (16)$$

is the $\mathcal{H}$-divergence.
D.2 LEMMAS
Lemma A-1. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, $\forall \delta \in (0,1)$, with probability at least $1-\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$|\epsilon^{\Psi}_P(f) - \hat{\epsilon}^{\Psi}_{S_v}(f)| \leq \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \log\frac{4}{\delta}\right)}, \quad (17)$$

where $\epsilon^{\Psi}_P(f) = E_{(x,y)\sim P}[I_{\{\Psi(f(x)) \neq y\}}]$ is the generalization error on distribution $P$, $\hat{\epsilon}^{\Psi}_{S_v}(f) = \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{\Psi(f(x)) \neq y\}}$ is the empirical error, $\mathcal{H}^{\Psi}_{S_t} = \{\Psi \circ f : f \in \mathcal{H}_{S_t}\}$, $VC(\mathcal{H}^{\Psi}_{S_t})$ is the VC-dimension of $\mathcal{H}^{\Psi}_{S_t}$, and $\Psi(\cdot)$ is the prediction rule, such as the Bayes Optimal Predictor, i.e., $\Psi(f(x)) = I_{\{f(x) \geq \frac{1}{2}\}}$.

Proof:
From the definition of $\mathcal{H}^{\Psi}_{S_t}$, for any $f \in \mathcal{H}_{S_t}$ there exists $h_f \in \mathcal{H}^{\Psi}_{S_t}$ such that $h_f = \Psi \circ f$. Applying Theorem A-1, with probability at least $1-\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$|\epsilon^{\Psi}_P(f) - \hat{\epsilon}^{\Psi}_{S_v}(f)| = |\epsilon_P(h_f) - \hat{\epsilon}_{S_v}(h_f)| \leq \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \log\frac{4}{\delta}\right)}. \quad (18)$$
Lemma A-2. For any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, let $\epsilon^{\Psi}_P(g) = \inf_{f\in\mathcal{H}_{S_t}} \epsilon^{\Psi}_P(f)$ and $\hat{\epsilon}^{\Psi}_{S_v}(h) = \inf_{f\in\mathcal{H}_{S_t}} \hat{\epsilon}^{\Psi}_{S_v}(f)$. Then $\forall \delta \in (0,1)$, with probability at least $1-\delta$, we have

$$\epsilon^{\Psi}_P(g) \geq \hat{\epsilon}^{\Psi}_{S_v}(h) - \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \log\frac{4}{\delta}\right)}. \quad (19)$$

Proof:
From the definitions of $g$ and $h$, we have $\hat{\epsilon}^{\Psi}_{S_v}(g) \geq \hat{\epsilon}^{\Psi}_{S_v}(h)$. $\forall \delta \in (0,1)$, with probability at least $1-\delta$, we have

$$\epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(h) = \epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(g) + \hat{\epsilon}^{\Psi}_{S_v}(g) - \hat{\epsilon}^{\Psi}_{S_v}(h) \geq \epsilon^{\Psi}_P(g) - \hat{\epsilon}^{\Psi}_{S_v}(g) \geq -\sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi}_{S_t})} + \log\frac{4}{\delta}\right)}. \quad (20)$$

Thus, Eq. (19) holds.
D.3 PROOF OF THEOREM 1
Proof:
We denote by $\mathcal{H}^{\Psi_l}$ the hypothesis space such that $\forall h \in \mathcal{H}^{\Psi_l}$,

$$h(x) = \Psi_l(f(x)) = \begin{cases} 1 & \text{if } l(f(x), y) > \gamma, \\ 0 & \text{otherwise}, \end{cases} \quad (21)$$

for $f \in \mathcal{H}$. Then

$$\begin{aligned} d_{\mathcal{H}^{\Psi_l}}(P, Q) &= 2 \sup_{h\in\mathcal{H}^{\Psi_l}} \left|E_P[h = 1] - E_Q[h = 1]\right| \\ &= 2 \sup_{f\in\mathcal{H}} \left|E_P[\Psi_l(f(x)) = 1] - E_Q[\Psi_l(f(x)) = 1]\right| \\ &= 2 \sup_{f\in\mathcal{H}} \left|E_P[I_{\{l(f(x),y)>\gamma\}}] - E_Q[I_{\{l(f(x),y)>\gamma\}}]\right| \\ &= 2 \sup_{f\in\mathcal{H}} \left\{E_Q[I_{\{l(f(x),y)>\gamma\}}] - E_P[I_{\{l(f(x),y)>\gamma\}}]\right\} \\ &\leq 2 \sup_{f\in\mathcal{H}} E_Q[I_{\{l(f(x),y)>\gamma\}}] - 2 \inf_{f\in\mathcal{H}} E_P[I_{\{l(f(x),y)>\gamma\}}]. \end{aligned} \quad (22)$$
In the fourth equality, we use the assumption that $E_Q[I_{\{l(f(x),y)>\gamma\}}] \geq E_P[I_{\{l(f(x),y)>\gamma\}}]$. Given any $S_v \in \Gamma_\xi$ and $S_t = S - S_v$, we replace $\mathcal{H}$ by $\mathcal{H}_{S_t}$; then

$$d_{\mathcal{H}^{\Psi_l}_{S_t}}(P, Q) \leq 2 \sup_{f\in\mathcal{H}_{S_t}} E_Q[I_{\{l(f(x),y)>\gamma\}}] - 2 \inf_{f\in\mathcal{H}_{S_t}} E_P[I_{\{l(f(x),y)>\gamma\}}] \leq 2C_1 - 2 \inf_{f\in\mathcal{H}_{S_t}} E_P[I_{\{l(f(x),y)>\gamma\}}], \quad (23)$$

where $C_1 = \sup_{S'_v\in\Gamma_\xi} \sup_{f\in\mathcal{H}_{S-S'_v}} E_Q[I_{\{l(f(x),y)>\gamma\}}]$. Applying Theorem A-2, for any $f \in \mathcal{H}_{S_t}$, we have

$$\epsilon^{\Psi_l}_Q(f) \leq \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f'\in\mathcal{H}_{S_t}} E_P[I_{\{l(f'(x),y)>\gamma\}}] + \lambda^*(S_v), \quad (24)$$

where $\lambda^*(S_v) \geq \inf_{f'\in\mathcal{H}_{S-S_v}}\{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$. Let $C_3 = \sup_{S'_v\in\Gamma_\xi} \lambda^*(S'_v) \geq \sup_{S'_v\in\Gamma_\xi} \inf_{f'\in\mathcal{H}_{S-S'_v}}\{\epsilon^{\Psi_l}_P(f') + \epsilon^{\Psi_l}_Q(f')\}$; we have

$$\epsilon^{\Psi_l}_Q(f) \leq \epsilon^{\Psi_l}_P(f) + C_1 - \inf_{f'\in\mathcal{H}_{S_t}} E_P[I_{\{l(f'(x),y)>\gamma\}}] + C_3. \quad (25)$$
Applying Lemma A-1 to the first term on the right side of Eq. (25), $\forall \delta \in (0,1)$, with probability at least $1-\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon^{\Psi_l}_P(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)}. \quad (26)$$

Applying Lemma A-2 to the third term on the right side of Eq. (25), $\forall \delta \in (0,1)$, with probability at least $1-\delta$, we have

$$\inf_{f'\in\mathcal{H}_{S_t}} E_P[I_{\{l(f'(x),y)>\gamma\}}] \geq \inf_{f'\in\mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f'(x),y)>\gamma\}} - \sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)}. \quad (27)$$
Combining Eq. (25), (26), (27) and using the union bound, for any $\delta \in (0,1)$, with probability at least $1-2\delta$, we have $\forall f \in \mathcal{H}_{S_t}$,

$$\epsilon^{\Psi_l}_Q(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{|S_v|}\left(VC(\mathcal{H}^{\Psi_l}_{S_t})\log\frac{2e|S_v|}{VC(\mathcal{H}^{\Psi_l}_{S_t})} + \log\frac{4}{\delta}\right)} + C_3, \quad (28)$$

where $B(S_v) = C_1 - \inf_{f'\in\mathcal{H}_{S_t}} \frac{1}{|S_v|}\sum_{(x,y)\in S_v} I_{\{l(f'(x),y)>\gamma\}}$. Using the fact that $|S_v| = \xi$ and letting $C_2 = \sup_{S'_v\in\Gamma_\xi} VC(\mathcal{H}^{\Psi_l}_{S-S'_v})\log\frac{2e\xi}{VC(\mathcal{H}^{\Psi_l}_{S-S'_v})}$, we have

$$\epsilon^{\Psi_l}_Q(f) \leq \hat{\epsilon}^{\Psi_l}_{S_v}(f) + B(S_v) + 2\sqrt{\frac{8}{\xi}\left(C_2 + \log\frac{4}{\delta}\right)} + C_3. \quad (29)$$
| 1. What is the focus of the paper regarding domain-free generalization?
2. What are the strengths of the proposed method, particularly in maximizing domain shift?
3. What are the weaknesses of the paper, especially regarding the effectiveness and efficiency of the method?
4. Do you have any questions regarding the comparison with other methods, such as MASF and MetaReg?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This paper proposes to unify adversarial training and meta-learning in domain-free generalization where labels of source domains are unavailable. To maximize the domain shift between the subsets of meta-train and meta-val, adversarial training is leveraged to find the worst-case train/val splits. Extensive experiments on benchmark datasets under different settings demonstrate the effectiveness of the proposed method.
Pros:
The idea of adversarial train/val splitting in meta-learning is interesting.
The paper provides extensive experiments on benchmark datasets under different settings with multiple/single source domains and achieves state-of-the-art results on both PACS and Office-Home datasets.
The paper provides an upper bound of the generalization error on unseen target domains, which is implicitly minimized by the proposed method.
Major Cons:
The effectiveness of adversarial train/val splitting is not well justified. Typically, meta-learning based methods (MASF and MetaReg) have no overlap between the domains used for meta-train and meta-val; this is how the domain discrepancy between meta-train and meta-val is modeled. However, DFAS has the issue that the same domain may be used in both meta-train and meta-val, which significantly limits the domain transportation and may yield limited domain generalization capacity compared with MASF and MetaReg. So why does DFAS outperform MASF and MetaReg in terms of domain generalization? More experiments and discussion are suggested to justify this point.
The difference between adversarial splitting and domain-label-based splitting is unclear. Additional experiments are suggested to empirically show the difference.
The proposed adversarial train/val splitting is computationally expensive since it needs to rank all samples in each iteration. According to the ablation results on PACS (Tab. 5), improvements over randomly selected train/val splitting seem quite limited. More details about the convergence of the alternating iteration should be provided.
Fig.4 shows the objective function remains unchanged after the 3rd iteration. Will the splitting results be changed after this stage?
Minor Cons:
It would be better to see an ablation study on the effect of margin m in Eq.6.
Standard deviation of the reported results should be provided.
After rebuttal: It is highly appreciated that the authors provide additional evidence in response to my reviews. My previous concerns about effectiveness and efficiency are addressed to some extent; however, this also means the original submission needs significant modification to address the concerns. I like the idea in general and the problem is well-motivated, but it needs more work for a complete version. I would encourage the authors to further improve the quality of the paper.
ICLR | Title
Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework
Abstract
Enforcing orthogonality in convolutional neural networks is an antidote to gradient vanishing/exploding problems, sensitivity to perturbation, and bounding generalization errors. Many previous approaches for orthogonal convolutions enforce orthogonality on the flattened kernel, which, however, does not lead to orthogonality of the operation. Some recent approaches consider orthogonality for standard convolutional layers and propose specific classes of their realizations. In this work, we propose a theoretical framework that establishes the equivalence between diverse orthogonal convolutional layers in the spatial domain and paraunitary systems in the spectral domain. Since 1D paraunitary systems admit a complete factorization, we can parameterize any separable orthogonal convolution as a composition of spatial filters. As a result, our framework endows high expressive power to various convolutional layers while maintaining their exact orthogonality. Furthermore, our layers are memory- and computation-efficient for deep networks compared to previous designs. Our versatile framework, for the first time, enables the study of architectural designs for deep orthogonal networks, such as choices of skip connection, initialization, stride, and dilation. Consequently, we scale up orthogonal networks to deep architectures, including ResNet and ShuffleNet, substantially outperforming their shallower counterparts. Finally, we show how to construct residual flows, a flow-based generative model that requires strict Lipschitzness, using our orthogonal networks. The code and trained models will be made available.
1 INTRODUCTION
Convolutional neural networks (CNNs), whose deployment has witnessed extensive empirical success, still exhibit a range of limitations that are not thoroughly studied. Firstly, deep convolutional networks are in general difficult to learn, and their high performance heavily relies on techniques that are not fully understood, such as skip-connections (He et al., 2016a), batch normalization (Ioffe & Szegedy, 2015), specialized initialization (Glorot & Bengio, 2010). Secondly, they are notoriously sensitive to imperceptible perturbations, including adversarial attacks (Goodfellow et al., 2014) or geometric transformations (Azulay & Weiss, 2019). Finally, a precise characterization of their generalization property is still under active investigation (Neyshabur et al., 2018; Jia et al., 2019).
Orthogonal networks, which have a "flat" spectrum with all singular values of each layer equal to 1 (each layer's output norm ‖y‖ equals its input norm ‖x‖, ∀x), alleviate all of the problems above. As shown in recent works, by enforcing orthogonality in neural nets, we obtain (1) easier optimization (Zhang et al., 2018a; Qi et al., 2020): since each orthogonal layer preserves the gradient norm during backpropagation, an orthogonal network is free from gradient vanishing/exploding problems; (2) robustness against adversarial perturbations (Anil et al., 2019; Li et al., 2019b; Trockman & Kolter, 2021): since each orthogonal layer is 1-Lipschitz, an orthogonal network cannot amplify any perturbation of the input to flip the output prediction; (3) better generalizability, as proved in (Jia et al., 2019): a network's generalization error is positively related to the standard deviation of each linear layer's singular values, so encouraging orthogonality in the network lowers its generalization error.
Despite the benefits, enforcing orthogonality in convolutional networks is challenging. To avoid the strict constraint, orthogonal initialization (dynamical isometry) (Pennington et al., 2017; Xiao et al., 2018) and orthogonal regularization (Wang et al., 2019; Qi et al., 2020) are adopted to address the gradient vanishing/exploding problems. However, these methods are not suitable for applications that
require strict Lipschitzness, such as adversarial robustness (Anil et al., 2019) and residual flows (Chen et al., 2019), as they do not enforce strict orthogonality (and Lipschitzness) during training.
Our goal is to enforce exact orthogonality in state-of-the-art convolutional networks without expensive computations. We identify three main challenges in doing so. Challenge I: Achieving exact orthogonality throughout the training process. Prior works such as orthogonal regularization (Wang et al., 2019) and reshaped kernel orthogonality (Jia et al., 2017; Cisse et al., 2017), while enjoying algorithmic simplicity, do not meet the requirement of exact orthogonality. Note that enforcing orthogonality during training is necessary, as a post-training orthogonalization of the weights can ruin the performance. Challenge II: Avoiding expensive computations. An efficient algorithm is crucial for scalability to large networks. Existing work based on projected gradient descent (Sedghi et al., 2019), however, requires an expensive projection after each update. For instance, the projection step in Sedghi et al. (2019) computes an SVD and flattens the spectrum to enforce orthogonality, which costs $O(\text{size(feature)} \cdot \text{channels}^3)$ for a convolutional layer. Challenge III: Scaling up to state-of-the-art deep convolutional networks. There are many variants of the standard convolutional layer, including dilated, strided, and group convolutions, which are essential for state-of-the-art deep convolutional networks. However, none of the existing methods proposes mechanisms to orthogonalize these variants. This lack of techniques, as a result, limits the broad application of orthogonal convolutional layers to state-of-the-art deeper convolutional networks.
We resolve challenges I, II, and III by proposing complete parameterizations for orthogonal 1D-convolutions and separable orthogonal 2D-convolutions. First, using the convolution theorem (Oppenheim et al., 1996) (convolution in the spatial domain is equivalent to multiplication in the spectral domain), we reduce the problem of designing orthogonal convolutions to constructing unitary matrices at all frequencies, i.e., a paraunitary system (Vaidyanathan, 1993). We thereby obtain a parameterization of all separable orthogonal 2D-convolutions, guaranteeing exact orthogonality and high expressive power. Note that no previous approach achieves exact orthogonality (up to machine precision). For the first time, our versatile framework enables the study of design choices for deep orthogonal networks, such as skip connections, initialization, stride, and dilation. Consequently, we scale orthogonal networks to deep architectures, including ResNet and ShuffleNet, substantially outperforming their shallower counterparts. Our proposed method also offers insight into the trade-off between orthogonality and expressiveness: we observe that exact orthogonality is essential in deeper architectures with more than 10 layers (a detailed discussion of exact orthogonality is in Appendix E.3). Finally, we show how to deploy our orthogonal networks in Residual Flow (Chen et al., 2019), a flow-based generative model that requires strict Lipschitzness (Appendix F).
Summary of Contributions:
1. We establish the equivalence between orthogonal convolutions in the spatial domain and paraunitary systems in the spectral domain. Consequently, we can interpret existing approaches for orthogonal convolutions as implicit designs of paraunitary systems. 2. Based on a complete factorization of 1D paraunitary systems, we propose the first exact and complete design for orthogonal 1D-convolutions as well as separable orthogonal 2D-convolutions, ensuring exact orthogonality and high expressive power. None of the existing works achieve a complete parameterization of all orthogonal 2D-convolutions. 3. We prove that orthogonality for various convolutional layers (strided, dilated, group) are also entirely characterized by paraunitary systems. Consequently, our design easily extends to these variants, ensuring both completeness and exactness of the orthogonal convolutions. 4. We study the design considerations for orthogonal networks (choices of skip connection, initialization, depth, width, and kernel size), and show that orthogonal networks can scale to deep architectures including ResNet, WideResNet, and ShuffleNet.
2 ORTHOGONAL CONVOLUTIONS VIA PARAUNITARY SYSTEMS
Designing an orthogonal convolutional layer $\{h_{t,s} : y_t = \sum_{s} h_{t,s} * x_s\}_{t=1,s=1}^{T,S}$ ($s$, $t$ index input/output channels) in the spatial domain is challenging. Such a layer can be written as a matrix-vector product, where the matrix is block-circulant and its $(t,s)$-th block takes the circulant form $\mathrm{Cir}(h_{t,s})$:

$$\mathrm{Cir}(h_{t,s}) = \begin{bmatrix} h_{t,s}[1] & h_{t,s}[N] & \cdots & h_{t,s}[2] \\ h_{t,s}[2] & h_{t,s}[1] & h_{t,s}[N] & \cdots \\ \vdots & \ddots & \ddots & \vdots \\ h_{t,s}[N] & \cdots & h_{t,s}[2] & h_{t,s}[1] \end{bmatrix} \in \mathbb{R}^{N\times N}. \quad (2.1)$$
Therefore, the layer is orthogonal if the block-circulant matrix $[\mathrm{Cir}(h_{t,s})]_{t=1,s=1}^{T,S}$ is orthogonal. However, it is not obvious how to enforce orthogonality in a block-circulant matrix.
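To make the difficulty concrete, the block-circulant matrix can be materialized for a tiny 1D multi-channel convolution and tested for orthogonality by brute force (a sketch for illustration only; a generic random filter fails the test):

import numpy as np

def circulant(h, N):
    # convolution matrix Cir(h): column i is the zero-padded filter cyclically shifted by i
    col = np.zeros(N)
    col[:len(h)] = h
    return np.stack([np.roll(col, i) for i in range(N)], axis=1)

N, T, S, K = 8, 2, 2, 3
rng = np.random.default_rng(0)
h = rng.standard_normal((T, S, K))                      # random (non-orthogonal) filters
M = np.block([[circulant(h[t, s], N) for s in range(S)] for t in range(T)])
print(M.shape, np.allclose(M @ M.T, np.eye(T * N)))     # (16, 16) False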
2.1 ACHIEVING ORTHOGONAL CONVOLUTIONS BY PARAUNITARY SYSTEMS
We propose a novel design of orthogonal convolutions from a spectral-domain perspective, motivated by the convolution theorem (Theorem 2.1). For simplicity, we group the entries at the same location into a vector/matrix, e.g., we denote $\{x_s[n]\}_{s=1}^{S}$ as $x[n] \in \mathbb{R}^S$ and $\{h_{t,s}[n]\}_{t=1,s=1}^{T,S}$ as $h[n] \in \mathbb{R}^{T\times S}$.
Theorem 2.1 (Convolution theorem (Oppenheim et al., 1996)). For a standard convolution layer $h$: $y[i] = \sum_n h[n]\,x[i-n]$, the convolution in the spatial domain is equivalent to a matrix-vector multiplication in the spectral domain, i.e., $Y(z) = H(z)X(z)$, $\forall z \in \mathbb{C}$. Here, $X(z) = \sum_{n=0}^{N-1} x[n]z^{-n}$, $Y(z) = \sum_{n=0}^{N-1} y[n]z^{-n}$, and $H(z) = \sum_{n=-L}^{L} h[n]z^{-n}$ denote the z-transforms of the input, output, and kernel respectively, where $N$ is the length of $x$, $y$ and $[-L, L]$ is the span of the filter $h$.
The convolution theorem states that a standard convolution layer is a matrix-vector multiplication in the spectral domain. As long as the transfer matrix $H(z)$ is unitary at $z = e^{j\omega}$ for all frequencies $\omega \in \mathbb{R}$ ($j$ is the imaginary unit), the convolutional layer $h$ is orthogonal. Therefore, as a major novelty of our paper, we design orthogonal convolutions through designing a transfer matrix $H(e^{j\omega})$ that is unitary at all frequencies $\omega \in \mathbb{R}$, which is known as a paraunitary system (Vaidyanathan, 1993; Strang & Nguyen, 1996). We prove in Theorem B.5 that a convolutional layer is orthogonal in the spatial domain if and only if it is paraunitary in the spectral domain.
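This equivalence yields a cheap numerical test of orthogonality: take the DFT of the kernel along the spatial axis to obtain $H(e^{j\omega})$ at sampled frequencies and check that each matrix is unitary (our own illustration):

import numpy as np

def is_paraunitary(h, n_freq=64, tol=1e-6):
    # h: (T, S, K) filter bank; H(e^{jw}) sampled at n_freq frequencies via zero-padded FFT
    T, S, K = h.shape
    H = np.fft.fft(h, n=n_freq, axis=-1)           # (T, S, n_freq)
    H = np.moveaxis(H, -1, 0)                      # (n_freq, T, S): one T x S matrix per frequency
    return all(np.allclose(Hw.conj().T @ Hw, np.eye(S), atol=tol) for Hw in H)

# a one-tap rotation is trivially paraunitary; a random filter is not
R = np.array([[0.0, -1.0], [1.0, 0.0]])
print(is_paraunitary(R[:, :, None]))               # True
print(is_paraunitary(np.random.default_rng(0).standard_normal((2, 2, 3))))  # False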
Benefits through paraunitary systems. (1) The spectral representation simplifies the design of orthogonal convolutions and avoids analyzing the block-circulant structure. (2) Since paraunitary systems are necessary and sufficient for orthogonal convolutions, it is impossible to find an orthogonal convolution whose transfer matrix is not paraunitary. (3) There exists a complete factorization of paraunitary systems: any paraunitary $H(z)$ can be realized through multiplications in the spectral domain, as we will show in Equation (2.2a). (4) Since multiplications in the spectral domain correspond to convolutions in the spatial domain, any orthogonal convolution can be realized as cascaded convolutions of multiple sub-layers, each parameterized by an orthogonal matrix. (5) There are mature methods that represent orthogonal matrices via unconstrained parameters. Consequently, we can learn orthogonal convolutions using standard optimizers on a model parameterized via our design.
Interpretation of existing methods. Since a paraunitary system is a necessary and sufficient condition for an orthogonal convolution, all existing approaches, including singular value clipping and masking (SVCM) (Sedghi et al., 2019), block convolution orthogonal parameterization (BCOP) (Li et al., 2019b), Cayley convolution (CayleyConv) (Trockman & Kolter, 2021), and skew orthogonal convolution (SOC), design paraunitary systems implicitly. We discuss these interpretations in Appendix C.3.3 and show that our SC-Fac has the lowest computational complexity among all approaches.
2.2 REALIZING PARAUNITARY SYSTEMS VIA RE-PARAMETERIZATION
After reducing the problem of orthogonal convolutions to paraunitary systems, we are left with the question of how to realize paraunitary systems. We use a complete factorization of paraunitary systems to realize any paraunitary system. According to Theorem C.7, any paraunitary system $H(z)$ can be written in the form of Equation (2.2a), and any $H(z)$ of this form is a paraunitary system:

$$H(z) = V(z; U^{(-L)}) \cdots V(z; U^{(-1)})\, Q\, V(z^{-1}; U^{(1)}) \cdots V(z^{-1}; U^{(L)}), \quad (2.2a)$$

$$\text{where } V(z; U^{(\ell)}) = \big(I - U^{(\ell)}U^{(\ell)\top}\big) + U^{(\ell)}U^{(\ell)\top} z, \quad \forall \ell \in \{-L, \cdots, -1\} \cup \{1, \cdots, L\}. \quad (2.2b)$$

Here $Q$ is an orthogonal matrix and each $U^{(\ell)}$ is a column-orthogonal matrix whose number of columns is sampled uniformly from $\{1, \cdots, T\}$. As multiplications in the spectral domain are equivalent to convolutions in the spatial domain, this complete spectral factorization of paraunitary systems in Equation (2.2a) allows us to parameterize any orthogonal convolution in the spatial domain as cascaded convolutions of the spatial counterparts of the $V(z; U^{(\ell)})$'s and the orthogonal matrix $Q$.
Model design in the spatial domain. We obtain a complete design of orthogonal 1D-convolutions: using learnable (column-)orthogonal matrices $(\{U^{(\ell)}\}_{\ell=-L}^{-1}, Q, \{U^{(\ell)}\}_{\ell=1}^{L})$, we parameterize a size-$(L + L + 1)$ convolution as cascaded convolutions of the following filters in the spatial domain:

$$\left\{\left[I - U^{(\ell)}U^{(\ell)\top},\; U^{(\ell)}U^{(\ell)\top}\right]\right\}_{\ell=-L}^{-1}, \quad Q, \quad \left\{\left[U^{(\ell)}U^{(\ell)\top},\; I - U^{(\ell)}U^{(\ell)\top}\right]\right\}_{\ell=1}^{L}. \quad (2.3)$$
Figure 1 provides a visualization of our proposed design of orthogonal convolutional layers; each block denotes a convolution, and the form of the filter is displayed in each block. In practice, we compose all (L + L + 1) filters into one for the orthogonal convolution, which not only increases computational parallelism but also avoids storing intermediate outputs between filters.
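A minimal PyTorch sketch of this construction (our paraphrase of the SC-Fac layer for illustration under our own helper names factor/compose/rand_orth, not the authors' released implementation): each factor in Equation (2.3) is a length-2 matrix-valued filter built from a column-orthogonal U, all factors are composed into a single kernel by matrix-valued convolution, and the result is exactly paraunitary up to numerical precision:

import torch

def factor(U, causal):
    # length-2 filter from Eq. (2.3): [I - UU^T, UU^T] (anticausal) or [UU^T, I - UU^T] (causal)
    P = U @ U.t()
    I = torch.eye(U.shape[0])
    taps = (P, I - P) if causal else (I - P, P)
    return torch.stack(taps, dim=-1)                      # (C, C, 2)

def compose(hl, hr):
    # matrix-valued convolution (hl applied after hr): out[n] = sum_m hl[:, :, m] @ hr[:, :, n - m]
    C, _, Kl = hl.shape
    Kr = hr.shape[-1]
    out = torch.zeros(C, C, Kl + Kr - 1)
    for m in range(Kl):
        out[:, :, m:m + Kr] += torch.einsum('ij,jkl->ikl', hl[:, :, m], hr)
    return out

torch.manual_seed(0)
C, L = 4, 1                                               # channels; filter spans [-L, L]
rand_orth = lambda r, c: torch.linalg.qr(torch.randn(r, c))[0]
kernel = rand_orth(C, C).unsqueeze(-1)                    # start from the orthogonal matrix Q
for _ in range(L):                                        # anticausal factors act on the left
    U = rand_orth(C, int(torch.randint(1, C + 1, ())))
    kernel = compose(factor(U, causal=False), kernel)
for _ in range(L):                                        # causal factors act on the right
    U = rand_orth(C, int(torch.randint(1, C + 1, ())))
    kernel = compose(kernel, factor(U, causal=True))

# paraunitary check: H(e^{jw}) must be unitary at every sampled frequency
H = torch.fft.fft(kernel, n=64, dim=-1).permute(2, 0, 1)  # (64, C, C)
err = (H.conj().transpose(-1, -2) @ H - torch.eye(C, dtype=torch.cfloat)).abs().max()
print(kernel.shape, f"max unitarity error: {err:.2e}")    # (C, C, 2L+1), ~1e-7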
Using this 1D-convolution, we obtain a complete design for separable orthogonal 2D-convolutions, which can be represented as a convolution of two orthogonal 1D-convolutional layers. With separability, we obtain a design of orthogonal 2D-convolutions by composing two complete designs of orthogonal 1D-convolutions. As a result, we can parameterize a separable orthogonal 2D-convolution with filter size $(L_1 + L_1 + 1) \times (L_2 + L_2 + 1)$ as a convolution of two orthogonal 1D-convolutions with learnable (column-)orthogonal matrices $(\{U_1^{(\ell)}\}_{\ell=-L_1}^{-1}, Q_1, \{U_1^{(\ell)}\}_{\ell=1}^{L_1})$ and $(\{U_2^{(\ell)}\}_{\ell=-L_2}^{-1}, Q_2, \{U_2^{(\ell)}\}_{\ell=1}^{L_2})$. With a complete factorization of paraunitary systems (1D and separable 2D), we reduce the problem of designing orthogonal convolutions to that of designing orthogonal matrices.
Parameterization for orthogonal matrices. In Appendix C.4, we perform a comparative study on different parameterizations of orthogonal matrices, including the Björck orthogonalization (Anil et al., 2019; Li et al., 2019b), the Cayley transform (Helfrich et al., 2018; Maduranga et al., 2019), and the exponential map (Lezcano-Casado & Martínez-Rubio, 2019). In our implementation, we adopt a modified exponential map due to its efficiency, exactness, and completeness. The exponential map is a surjective mapping from a skew-symmetric matrix $A$ to a special orthogonal matrix $U$ (i.e., $\det(U) = 1$) with $U = \exp(A) = I + A + A^2/2! + \cdots$, where the infinite series is computed up to machine precision (Higham, 2009). To parameterize all orthogonal matrices, we introduce an additional orthogonal matrix $Q$ in $U = Q\exp(A)$, where $Q$ is (randomly) generated at initialization and fixed during training (Lezcano Casado, 2019).
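A small sketch of this parameterization with torch.matrix_exp (our own illustration):

import torch

def orthogonal(W, Q=None):
    # A = W - W^T is skew-symmetric, so exp(A) is special orthogonal;
    # a fixed orthogonal Q at initialization extends the map to all of O(n)
    U = torch.matrix_exp(W - W.t())
    return U if Q is None else Q @ U

n = 5
W = torch.randn(n, n, requires_grad=True)       # unconstrained, learnable parameters
U = orthogonal(W)
print(torch.dist(U.t() @ U, torch.eye(n)))      # ~1e-7: orthogonal up to machine precision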
Now we have an end-to-end pipeline for orthogonal convolutions as shown in Figure 2 (See Algorithm 1 in Appendix E for a pseudo implementation). Since our method relies on separability and complete factorization for 1D paraunitary systems, we call it Separable Complete Factorization (SC-Fac).
3 UNIFYING ORTHOGONAL CONVOLUTIONS VARIANTS
Various convolutional layers (strided, dilated, and group convolution) are widely used in neural networks. However, it is not apparent how to enforce their orthogonality, as the convolution theorem (Theorem 2.1) only holds for standard convolutions. Previous approaches only deal with standard convolutions (Sedghi et al., 2019; Li et al., 2019b; Trockman & Kolter, 2021), thus orthogonality for state-of-the-art architectures has never been studied before.
We address this limitation by modifying the convolution theorem for each variant of convolutional layer, which allows us to design these variants using paraunitary systems. Theorem 3.1 (Convolution and paraunitary theorems for various convolutions). Strided, dilated, and group convolutions can be unified in the spectral domain as $Y(z) = H(z)X(z)$, where $Y(z)$, $H(z)$, $X(z)$ are modified Z-transforms of $y$, $h$, $x$. We instantiate $H(z)$ for strided convolutions in Proposition C.4, dilated convolutions in Proposition C.5, and group convolutions in Proposition C.6. Furthermore, a convolution is orthogonal if and only if $H(z)$ is paraunitary.
In Table 1, we formulate strided, dilated, and group convolutions in the spatial domain, interpreting them as up-sampled or down-sampled variants of a standard convolution. Now, we introduce the concept of up-sampling and down-sampling precisely below.
Given a sequence $x$, we introduce its up-sampled sequence $x_{\uparrow R}$ with sampling rate $R$ as $x_{\uparrow R}[n] \triangleq x[n/R]$ for $n \equiv 0 \pmod{R}$ (and zero otherwise). On the other hand, its $(r, R)$-polyphase component $x_{r|R}$ denotes the $r$-th down-sampled sequence with sampling rate $R$, defined as $x_{r|R}[n] \triangleq x[nR + r]$. We illustrate an example of $x_{\uparrow R}$ and $x_{r|R}$ in Figure 3 for sampling rate $R = 2$. The Z-transforms of $x_{\uparrow R}$ and $x_{r|R}$ are denoted $X_{\uparrow R}(z)$ and $X_{r|R}(z)$ respectively; their relations to $X(z)$ are studied in Appendix C.1.
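These operations are one-liners in code (a small numpy illustration of $x_{\uparrow R}$ and the polyphase components for $R = 2$):

import numpy as np

x = np.arange(1, 7)                       # x = [1 2 3 4 5 6]
R = 2
x_up = np.zeros(len(x) * R, dtype=x.dtype)
x_up[::R] = x                             # up-sampling: [1 0 2 0 3 0 4 0 5 0 6 0]
phases = [x[r::R] for r in range(R)]      # polyphase: x_{0|2} = [1 3 5], x_{1|2} = [2 4 6]
print(x_up, phases)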
Now we are ready to interpret convolution variants. (1) Strided convolution is used to adjust the feature resolution: a strided convolution (↓R-strided) decreases the resolution by down-sampling after a standard convolution, while a transposed strided convolutional layer (↑R-strided) increases the resolution by up-sampling before a standard convolution. (2) Dilated convolution increases the receptive field of a convolution without extra parameters: an R-dilated convolution up-samples its filters before convolution with the input. (3) Group convolution reduces the parameters and computations, thus widely used by efficient architectures: a G-group convolution divides the input/output channels into G groups and restricts the connections within each group. In Appendix C.2, we prove that a convolution is orthogonal if and only if its modified Z-transformH(z) is paraunitary.
4 LEARNING DEEP ORTHOGONAL NETWORKS WITH LIPSCHITZ BOUNDS
In this section, we switch our focus from layer design to network design. In particular, we aim to study how to scale up deep orthogonal networks with Lipschitz bounds.
Lipschitz networks (Anil et al., 2019; Li et al., 2019b; Trockman & Kolter, 2021), whose Lipschitz bounds are imposed by their architectures, are proposed as competitive candidates to guarantee robustness in deep learning. A Lipschitz network consists of orthogonal layers and GroupSort activations (Anil et al., 2019) — both are 1-Lipschitz and gradient norm preserving. See Appendix D for more discussions on the properties of GroupSort and Lipschitz networks. Given a Lipschitz constant L, a Lipschitz network f can compute a certified radius for each input from its output margin. Formally, denote the output margin of an input x with label c as
$$\mathcal{M}_f(x) \triangleq \max\Big(0,\; f(x)_c - \max_{i \neq c} f(x)_i\Big), \quad (4.1)$$
i.e., the difference between the correct logit and the second largest logit. Then the output is robust to perturbations in the sense that $f(x + \epsilon) = f(x) = c$ for all $\epsilon$ with $\|\epsilon\| < \mathcal{M}_f(x)/(\sqrt{2}L)$.
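Computing this certified radius from a Lipschitz network's logits is straightforward (a sketch under our own naming; logits is a (batch, classes) tensor and L the network's Lipschitz bound):

import torch

def certified_radius(logits, labels, L):
    # margin: correct logit minus the largest competing logit, clamped at zero
    correct = logits.gather(1, labels.view(-1, 1)).squeeze(1)
    others = logits.scatter(1, labels.view(-1, 1), float('-inf')).max(dim=1).values
    margin = (correct - others).clamp(min=0.0)
    return margin / (2.0 ** 0.5 * L)

logits = torch.tensor([[2.0, 0.5, -1.0]])
print(certified_radius(logits, torch.tensor([0]), L=1.0))   # (2.0 - 0.5) / sqrt(2)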
Despite these benefits, existing architectures for Lipschitz networks remain simple and shallow; a Lipschitz network is typically an interleaving cascade of orthogonal layers and GroupSort activations (Li et al., 2019b). More advanced architectures, such as ResNet and ShuffleNet, are still out of reach. While orthogonal layers supposedly substitute for batch normalization (Pennington et al., 2017; Xiao et al., 2018; Qi et al., 2020), other critical factors, including skip-connections (He et al., 2016a;b) and proper initialization (Glorot & Bengio, 2010), are lacking. In this section, we explore skip-connections and initialization methods toward addressing this problem.
Skip-connections. Two general types of skip-connections are widely used in deep networks, one based on addition and the other on concatenation. The addition-based connection is proposed in ResNet (He et al., 2016a) and adopted in SE-Net (Hu et al., 2018) and EfficientNet (Tan & Le, 2019). The concatenation-based connection is proposed in flow-based generative models (Dinh et al., 2014; 2016; Kingma & Dhariwal, 2018) and adopted in DenseNet (Huang et al., 2017) and ShuffleNet (Zhang et al., 2018b; Ma et al., 2018). In what follows, we propose Lipschitz skip-connections with these two mechanisms, illustrated in Figure 6 (in Appendix D). Proposition 4.1 (Lipschitzness of residual blocks). Suppose $f_1, f_2$ are $L$-Lipschitz (the residual/shortcut branches) and $\alpha \in [0, 1]$ is a learnable scalar; then the additive residual block $f(x) \triangleq \alpha f_1(x) + (1-\alpha) f_2(x)$ is $L$-Lipschitz. Alternatively, suppose $g_1, g_2$ are $L$-Lipschitz and $P$ is a channel permutation; then the concatenative residual block $g(x) \triangleq P\,[g_1(x_1);\, g_2(x_2)]$ is $L$-Lipschitz, where $[\cdot\,;\,\cdot]$ denotes channel concatenation and $x$ is split into $x_1$ and $x_2$, i.e., $x = [x_1; x_2]$.
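Both blocks are simple to realize in PyTorch; the following sketch is our own illustration, with f1/f2 and g1/g2 standing in for arbitrary 1-Lipschitz branches:

import torch
import torch.nn as nn

class AdditiveResidual(nn.Module):
    # f(x) = a*f1(x) + (1-a)*f2(x), with a kept in [0, 1] by a sigmoid re-parameterization
    def __init__(self, f1, f2):
        super().__init__()
        self.f1, self.f2 = f1, f2
        self.logit_a = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        a = torch.sigmoid(self.logit_a)
        return a * self.f1(x) + (1 - a) * self.f2(x)

class ConcatResidual(nn.Module):
    # g(x) = P [g1(x1); g2(x2)], with a fixed random channel permutation P
    def __init__(self, g1, g2, channels):
        super().__init__()
        self.g1, self.g2 = g1, g2
        self.register_buffer('perm', torch.randperm(channels))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y = torch.cat([self.g1(x1), self.g2(x2)], dim=1)
        return y[:, self.perm]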
Initialization. Proper initialization is crucial in training deep networks (Glorot & Bengio, 2010; He et al., 2016a). Various methods are proposed to initialize orthogonal matrices, including the identical/permutation and torus initialization in the context of orthogonal RNNs (Henaff et al., 2016; Helfrich et al., 2018; Lezcano-Casado & Martínez-Rubio, 2019). However, initialization of orthogonal convolutions was not systematically studied, and all previous approaches inherit the initialization from the underlying parameterization (Li et al., 2019b; Trockman & Kolter, 2021). In Proposition D.1, we study the condition when a paraunitary system (represented in Equation (2.2a)) reduces to an orthogonal matrix. This reduction allows us to apply the initialization methods for orthogonal matrices (e.g., uniform, torus) to orthogonal convolutions.
In the experiments, we will evaluate the impact of different choices of skip-connections and initialization methods to the performance of deep Lipschitz networks.
5 RELATED WORK
Dynamical isometry (Pennington et al., 2017; 2018; Chen et al., 2018) aims to address the gradient vanishing/exploding problems in deep vanilla networks with orthogonal initialization. These works focus on understanding the interplay between initialization and various nonlinear activations. However, these approaches do not guarantee Lipschitzness after training and are thus unsuitable for applications that require a strict characterization of Lipschitz constants, such as adversarial robustness (Anil et al., 2019) and residual flows (Behrmann et al., 2019).
Learning orthogonality has three typical approaches: regularization, parameterization (representing the feasible set with unconstrained parameters), and projected gradient descent (PGD) / Riemannian gradient descent (RGD). While regularization is approximate, the latter two learn exact orthogonality. (1) For orthogonal matrices, various regularizations are proposed in Xie et al. (2017); Bansal et al. (2018). Alternatively, numerous parameterizations exist, including Householder reflections (Mhammedi et al., 2017), Given rotations (Dorobantu et al., 2016), Cayley transform (Helfrich et al., 2018), matrix exponential (Lezcano-Casado & Martínez-Rubio, 2019), and algorithmic unrolling (Anil et al., 2019; Huang et al., 2020). Lastly, Jia et al. (2017) propose PGD via singular value clipping, and Vorontsov et al. (2017); Li et al. (2019a) consider RGD. (2) For orthogonal convolutions, some existing works learn orthogonality for the flattened matrix (Jia et al., 2017; Cisse et al., 2017; Bansal et al., 2018) or each output channel (Liu et al., 2021). However, these methods do not lead to orthogonality (norm preserving) of the operation. Sedghi et al. (2019) propose to use PGD via singular value clipping and masking, which is expensive and can result in inexact orthogonality. Recent works adopt parameterizations, using block convolutions (Li et al., 2019b), Cayley transform (Trockman & Kolter, 2021), or convolution exponential (Singla & Feizi, 2021). Note that network deconvolution (Ye et al., 2020) aims to whiten the activations (i.e., orthogonalize their distribution), but the added whitening operations do not necessarily preserve orthogonality.
Paraunitary systems are extensively studied in filter banks and wavelets (Vaidyanathan, 1993; Strang & Nguyen, 1996; Lin & Vaidyanathan, 1996). Classic theory shows that 1D-paraunitary systems are fully characterized by a spectral factorization (see Chapter 14 of Vaidyanathan (1993)), but not all MD-paraunitary systems admit a factorized form (see Chapter 8 of Lin & Vaidyanathan (1996)). While the complete characterization of MD-paraunitary systems is known in theory (which incurs a difficult problem of solving a system of nonlinear equations) (Venkataraman & Levy, 1995; Zhou, 2005), most practical constructions use separable paraunitary systems (Lin & Vaidyanathan, 1996) and special classes of non-separable paraunitary systems (Hurley & Hurley, 2012). The equivalence between orthogonal convolutions and paraunitary systems thus opens the opportunities to apply these classic theories in designing orthogonal convolutions.
6 EXPERIMENTS
In the experiments, we achieve the following goals. (1) We demonstrate in Section 6.1 that our separable complete factorization (SC-Fac) achieves precise orthogonality (up to machine precision), resulting in more accurate orthogonal designs than previous ones (Sedghi et al., 2019; Li et al., 2019b; Trockman & Kolter, 2021). (2) Despite the differences in preciseness, we show in Section 6.2 that different realizations of paraunitary systems have only a minor impact on the adversarial robustness of Lipschitz networks. (3) Owing to the versatility of our convolutional layers and architectures, in Section 6.3 we explore the best strategies to scale Lipschitz networks to wider/deeper architectures. (4) In Appendix F, we further demonstrate a successful application of orthogonal convolutions in residual flows (Chen et al., 2019). Training details are provided in Appendix E.1.
6.1 EXACT ORTHOGONALITY
We evaluate the orthogonality of our SC-Fac layer versus previous approaches, including CayleyConv (Trockman & Kolter, 2021), BCOP (Li et al., 2019b), SVCM (Sedghi et al., 2019), RKO (Cisse et al., 2017), and OSSN (Miyato et al., 2018). Our experiments are based on a convolutional layer with 64 input channels and 16 × 16 input size. We orthogonalize the layer with each approach and evaluate it on Gaussian inputs. For our SC-Fac layer, we initialize all orthogonal matrices uniformly, while we use the built-in initialization for the others. We evaluate the difference between 1 and the ratio of the output norm to the input norm: a layer is exactly orthogonal if this number is close to 0.
(1) Standard convolution. We show in Table 2 (Left) that our SC-Fac is orders of magnitude more precise than all other approaches. The SC-Fac layer is in fact exactly orthogonal up to machine epsilon, which is 2^{-24} ≈ 5.96 × 10^{-8} for 32-bit floats. While RKO and OSSN are known not to be orthogonal, we surprisingly find that SVCM is far from orthogonal due to its masking step. (2) Convolution variants. In Section 3, we show that various orthogonal convolutions can be constructed using paraunitary systems. We verify our theory in Table 2 (Right): our SC-Fac layers are exactly orthogonal (up to machine precision) for various convolutions.
6.2 ADVERSARIAL ROBUSTNESS
In this subsection, we evaluate the adversarial robustness of Lipschitz networks. Following the setup in Trockman & Kolter (2021), we adopt KW-Large, ResNet9, and WideResNet10-10 as the backbone architectures, and evaluate their robust accuracy on CIFAR-10 with different designs of orthogonal convolutions. We perform an extensive hyper-parameter search and choose the best hyper-parameters for each approach based on the robust accuracy. The details of the hyper-parameter search are in Appendix E. We run each model with 5 different seeds and report the best accuracy.
(1) Certified robustness. Following Li et al. (2019b), we use the raw images (without normalization) as network input to achieve the best certified accuracy. As shown in Table 3 (Top), different realizations of paraunitary systems, SC-Fac, CayleyConv, and BCOP, achieve comparable performance: CayleyConv performs < 1% better in clean accuracy, but the differences in robust accuracy are negligible. (2) Practical robustness. Trockman & Kolter (2021) show that the certified accuracy is too conservative, and that it is possible to increase the practical robustness (against PGD attacks) with a standard input normalization. Notice that the normalization increases the Lipschitz bound, thus lowering the certified accuracy. Our experiments in Table 3 (Bottom) are based on ResNet9 and WideResNet10-10 (Trockman & Kolter, 2021), as well as a deeper WideResNet22. For the shallow architectures (ResNet9, WideResNet10-10), our SC-Fac, CayleyConv, and BCOP again achieve comparable performance, with CayleyConv slightly ahead in robust accuracy. For the deeper architecture, our SC-Fac has a clear advantage in both clean and robust accuracy, and its clean accuracy is only 5% lower than that of a traditional ResNet32 trained with batch normalization. Surprisingly, we find that RKO also performs well in robust accuracy despite not being exactly orthogonal. In summary, our experiments show that different paraunitary realizations have different impacts on certified and practical robustness. While exact orthogonality provides a tight Lipschitz bound, there is a trade-off between exact orthogonality and practical robustness (especially with the shallow architectures).
6.3 SCALING-UP DEEP ORTHOGONAL NETWORKS WITH LIPSCHITZ BOUNDS
All previously proposed Lipschitz networks (Li et al., 2019b; Trockman & Kolter, 2021) consider only shallow architectures (≤ 10 layers). In this part, we investigate various factors in scaling Lipschitz networks to deeper architectures: skip-connections, depth/width, receptive field, and down-sampling.
(1) Skip-connections. Conventional wisdom suggests that skip-connections mainly address the gradient vanishing/exploding problem; thus, they would not be needed for orthogonal networks. To understand their role, we perform an experiment that trains deep Lipschitz networks without skip-connections and with skip-connections based on addition/concatenation (see Section 4). As shown in Table 4 (Left), the network with additive skip-connections substantially outperforms the other two, and the one without skip-connections performs the worst. Therefore, we empirically show that (additive) skip-connections are crucial in deep Lipschitz networks. (2) Depth and width. Exact orthogonality is criticized for harming the expressive power of neural networks. We show that the decrease in expressive power can be alleviated by increasing the network depth/width. In Table 3 (Bottom) and Table 7 (Appendix E), we observe that deeper/wider architectures increase both the clean and robust accuracy. (3) Initialization methods. We try different initialization methods, including identical, permutation, uniform, and torus (Henaff et al., 2016; Helfrich et al., 2018). We find that identical initialization works best for deep Lipschitz networks (> 10 layers), while all methods perform similarly in shallow networks, as shown in Table 6 (Appendix E). (4) Receptive field and down-sampling. Previous works (Li et al., 2019b; Trockman & Kolter, 2021) use a larger kernel size and no stride for Lipschitz networks. In Table 4 (Right), we perform a study on the effects of kernel/dilation size and down-sampling types for orthogonal convolutions. We find that average pooling as down-sampling consistently outperforms strided convolutions. Furthermore, a larger kernel size helps to boost the performance. (5) Run-time and memory comparison. We find that previously proposed orthogonal convolutions such as CayleyConv, BCOP, and RKO require more GPU memory and computation time than SC-Fac. Therefore, we could not scale them due to memory constraints (for 22 and 32 layers using a Tesla V100 32G). In order to scale up Lipschitz networks, an economical implementation of orthogonal convolution is crucial. As shown in Figure 4, for deep and wide architectures, our SC-Fac is the most computationally and memory efficient method and the only method that scales to a width increase of 10 on WideResNet22. Missing numbers in Figure 4 and Table 7 (Appendix E) are due to the large memory requirement.
In summary, additive skip-connections are still essential for learning deep orthogonal networks. Due to the orthogonality constraints, it is helpful to increase the depth/width of the network. However, this significantly increases the memory requirement; thus, a cheap implementation (like SC-Fac) is desirable. Finally, we find that a larger kernel size and down-sampling based on average pooling are helpful, unlike standard practices in deep networks.
7 CONCLUSION
In this paper, we present a paraunitary framework for orthogonal convolutions. Specifically, we establish the equivalence between orthogonal convolutions in the spatial domain and paraunitary systems in the spectral domain. Therefore, any design for orthogonal convolutions is implicitly constructing paraunitary systems. We further show that the orthogonality for variants of convolution (strided, dilated, and group convolutions) is also fully characterized by paraunitary systems. In summary, paraunitary systems are all we need to ensure orthogonality for diverse types of convolutions.
Based on the complete factorization of 1D paraunitary systems, we develop the first exact and complete design of separable orthogonal 2D-convolutions. Our versatile design allows us to study the design principles for orthogonal convolutional networks. Consequently, we scale orthogonal networks to deeper architectures, substantially outperforming their shallower counterparts. In our experiments, we observe that exact orthogonality plays a crucial role in learning deep Lipschitz networks. In the future, we plan to investigate other use cases where exact orthogonality is essential.
ETHICS STATEMENT
Our work lies in the foundational research on neural information processing. Specifically, we establish the equivalence between orthogonal convolutions in neural networks and paraunitary systems in signal processing. Furthermore, our presented orthogonal convolutional layers are plug-and-play modules that can replace various convolutional layers in neural networks. Consequently, our modules are applicable in Lipschitz networks for adversarial robustness, recurrent networks for learning long-term dependency, or flow-based networks for effortless reversibility.
The vulnerability of neural networks raises concerns about their deployment in security-sensitive scenarios, such as healthcare systems or self-driving cars. In our experiment, we demonstrate a successful application of orthogonal convolutions in learning robust networks. Furthermore, these networks achieve higher robust accuracy without additional techniques such as adversarial training or randomized smoothing. Therefore, our research contributes to the robust learning of neural networks and potentially leads to their broader deployment.
Furthermore, we reduce the inference cost of our layer to be the same as that of a traditional convolutional layer, significantly lowering the expense of deployment compared with other methods for orthogonal networks. However, our layers are more memory- and compute-intensive during training than traditional layers. This overhead, on top of the already expensive training cost, exacerbates concerns about the efficiency of learning neural networks. Therefore, balancing robustness and efficiency is an important topic that requires more research in the future. We develop a more efficient implementation than previous approaches, narrowing the gap between these two conflicting goals.
REPRODUCIBILITY STATEMENT
We provide detailed proofs to our theoretical claims in Appendices B to D. We describe experimental setups and training details of our models in Appendix E. We include our code for experiments in the supplementary material.
Appendices: Scaling-up Diverse Orthogonal Convolutional Networks by a Paraunitary Framework
Notations. We use non-bold letters for scalars (e.g., $x$) and bold ones for vectors or matrices (e.g., $\mathbf{x}$). We denote sequences in the spatial domain using lower-case letters (e.g., $x[n]$, $\mathbf{x}[n]$) and their spectral representations using upper-case letters (e.g., $X(z)$, $\mathbf{X}(z)$). For a positive integer, say $R \in \mathbb{Z}_+$, we abbreviate the set $\{0, 1, \cdots, R-1\}$ as $[R]$, and whenever possible we use the corresponding lower-case letter, say $r \in [R]$, as its iterator.
Assumptions. For simplicity, we assume all sequences are $L_2$ with range $\mathbb{Z} = \{0, \pm 1, \pm 2, \cdots\}$ (a sequence $\mathbf{x} = \{\mathbf{x}[n], n \in \mathbb{Z}\}$ is $L_2$ if $\sum_{n \in \mathbb{Z}} \|\mathbf{x}[n]\|^2 < \infty$). Such an assumption is common in the literature, as it avoids boundary conditions in signal analysis. To deal with periodic sequences (finite sequences with circular padding), one can either adopt the Dirac function in the spectral domain or use the discrete Fourier transform to compute the spectral representations. In our implementation, we address the boundary condition case by case for each convolution type, with which we achieve exact orthogonality in the experiments (Section 6.1).
A PSEUDO CODE FOR OUR SC-FAC ALGORITHM
We include the pseudo-code for the separable complete factorization (Section 2) in Algorithm 1 and for diverse orthogonal convolutions (Section 3) in Algorithm 2. The pseudo-code in Algorithm 1 consists of three parts: (1) first, we obtain orthogonal matrices from skew-symmetric matrices using the matrix exponential, for which we use the GeoTorch library (Lezcano Casado, 2019) function matrix_exp in our implementation; (2) subsequently, we construct two 1D paraunitary systems from these orthogonal matrices; (3) lastly, we compose the two 1D paraunitary systems into one 2D paraunitary system. The pseudo-code in Algorithm 2 consists of two parts: (1) first, we reshape each paraunitary system into an orthogonal convolution depending on the stride; and (2) second, we concatenate the orthogonal kernels for the different groups and return the output.
B ORTHOGONAL CONVOLUTIONS VIA PARAUNITARY SYSTEMS
In this section, we prove the convolution theorem and Parseval’s theorem for standard convolutional layers. Subsequently, we prove the paraunitary theorem which establishes the equivalence between orthogonal convolutional layers and paraunitary systems.
B.1 SPECTRAL ANALYSIS OF STANDARD CONVOLUTION LAYERS
Standard convolutional layers are the default building blocks for convolutional neural networks. One such layer consists of a filter bank with $T \times S$ filters $h = \{h_{ts}[n], n \in \mathbb{Z}\}_{t \in [T], s \in [S]}$, where $S$ and $T$ are the numbers of input and output channels, respectively. The layer maps an $S$-channel input $x = \{x_s[i], i \in \mathbb{Z}\}_{s \in [S]}$ to a $T$-channel output $y = \{y_t[i], i \in \mathbb{Z}\}_{t \in [T]}$ according to
$$y_t[i] = \sum_{s \in [S]} \sum_{n \in \mathbb{Z}} h_{ts}[n]\, x_s[i - n], \tag{B.1}$$
where $i$ indexes the output location to be computed, and $n$ indexes the filter coefficients. Alternatively, we can rewrite Equation (B.1) in matrix-vector form as
$$\mathbf{y}[i] = \sum_{n \in \mathbb{Z}} \mathbf{h}[n]\, \mathbf{x}[i - n], \tag{B.2}$$
where each $\mathbf{h}[n] \in \mathbb{R}^{T \times S}$ is a matrix, and each $\mathbf{x}[i - n] \in \mathbb{R}^S$ or $\mathbf{y}[i] \in \mathbb{R}^T$ is a vector. Notice that in Equation (B.2) we group entries from all channels into a vector or matrix (e.g., from $\{x_s[n]\}_{s \in [S]}$ to $\mathbf{x}[n]$), different from the common notation that groups entries from all locations into a vector or matrix (e.g., from $\{x_s[n], n \in \mathbb{Z}\}$ into $\mathbf{x}_s$). In the matrix-vector form, a standard convolutional layer computes a vector sequence $\mathbf{y} = \{\mathbf{y}[i] \in \mathbb{R}^T, i \in \mathbb{Z}\}$ as a convolution between a matrix sequence $\mathbf{h} = \{\mathbf{h}[n] \in \mathbb{R}^{T \times S}, n \in \mathbb{Z}\}$ and a vector sequence $\mathbf{x} = \{\mathbf{x}[i] \in \mathbb{R}^S, i \in \mathbb{Z}\}$.
Algorithm 1: Separable Complete Factorization (SC-Fac)
Input: number of channels $C$, kernel size $K = 2L + 1$, and skew-symmetric matrices $\{\mathbf{A}_d^{(\ell)}\}$ with $\mathbf{A}_d^{(\ell)} \in \mathbb{R}^{C \times C}$, $\forall \ell \in [-L, L]$, $d \in \{1, 2\}$.
Output: a paraunitary system $\mathbf{H} \in \mathbb{R}^{C \times C \times K \times K}$.
Initialization: sample $N_d^{(\ell)}$ from $\{1, \cdots, C\}$ uniformly, $\forall \ell \in [-L, L]$, $d \in \{1, 2\}$.

for $d = 1$ to $2$ do  // iterate over the vertical/horizontal dimensions
    // 1) Compute orthogonal matrices from the skew-symmetric matrices
    for $\ell = -L$ to $L$ do
        if $\ell = 0$ then
            $\mathbf{Q}_d \leftarrow \mathrm{matrix\_exp}(\mathbf{A}_d^{(0)})$  // use matrix_exp() from GeoTorch (Lezcano Casado, 2019)
        else
            $\mathbf{U}_d^{(\ell)} \leftarrow \mathrm{select}(\mathrm{matrix\_exp}(\mathbf{A}_d^{(\ell)}), \mathrm{cols} = N_d^{(\ell)})$  // select the first $N_d^{(\ell)}$ columns of the matrix
        end if
    end for
    // 2) Compose a 1D paraunitary system from the orthogonal matrices
    $\mathbf{H}_d \leftarrow \mathbf{Q}_d$
    for $\ell = 1$ to $L$ do
        $\mathbf{H}_d \leftarrow \mathrm{conv1d}\big(\mathbf{H}_d,\, [\,\mathbf{U}_d^{(\ell)} \mathbf{U}_d^{(\ell)\top},\ \mathbf{I} - \mathbf{U}_d^{(\ell)} \mathbf{U}_d^{(\ell)\top}\,]\big)$
        $\mathbf{H}_d \leftarrow \mathrm{conv1d}\big([\,\mathbf{I} - \mathbf{U}_d^{(-\ell)} \mathbf{U}_d^{(-\ell)\top},\ \mathbf{U}_d^{(-\ell)} \mathbf{U}_d^{(-\ell)\top}\,],\, \mathbf{H}_d\big)$
    end for
end for
// 3) Compose a 2D paraunitary system from the two 1D paraunitary systems
$\mathbf{H} \leftarrow \mathrm{Compose}(\mathbf{H}_1, \mathbf{H}_2)$  // i.e., $\mathbf{H}_{:,:,i,j} = (\mathbf{H}_2)_{:,:,j} (\mathbf{H}_1)_{:,:,i}$, where the 1D paraunitary systems $\mathbf{H}_1$ and $\mathbf{H}_2$ are of size $C \times C \times K$
return $\mathbf{H}$
Algorithm 2: Construct Diverse Orthogonal Convolutions from Paraunitary Systems
Input: number of base channels $C$, kernel size $K = R(2L + 1)$, stride $R$, dilation $D$, number of groups $G$.
Output: an orthogonal kernel $\mathbf{W} \in \mathbb{R}^{T \times S \times K \times K}$.
Set $K' \leftarrow K / R$, number of input channels $S \leftarrow GC / R^2$, and number of output channels $T \leftarrow GC$.

for $g = 0$ to $G - 1$ do
    // 1) Construct an orthogonal convolution from a paraunitary system
    Initialize skew-symmetric matrices $\{\{\mathbf{A}_d^{(\ell, g)}\}_{\ell = -L}^{L}\}_{d = 1}^{2}$ for the current group $g$
    $\mathbf{H}^g \leftarrow$ Algorithm 1: SC-Fac$\big(C, K', \{\{\mathbf{A}_d^{(\ell, g)}\}_{\ell = -L}^{L}\}_{d = 1}^{2}\big)$
    $\mathbf{H}^g \leftarrow \mathrm{reshape}\big(\mathbf{H}^g,\ (C, C, K', K') \rightarrow (C/R^2, C, K, K)\big)$
end for
// 2) Concatenate the orthogonal kernels from the different groups
$\mathbf{W} \leftarrow \mathrm{concatenate}(\{\mathbf{H}^g\}_{g = 0}^{G - 1}, \mathrm{dim} = 0)$
return $\mathbf{W}$  // the filter for input channel $s$ and output channel $t$ is $\mathbf{W}_{t,s,:,:} \in \mathbb{R}^{K \times K}$
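For concreteness, a minimal PyTorch sketch of the 1D part of Algorithm 1 is given below. It is illustrative only: the helper names (`matseq_conv`, `sc_fac_1d`) are ours, the index conventions may differ from our actual implementation, and the final assertion checks paraunitarity on a DFT grid.

```python
import torch

def matseq_conv(a, b):
    """Compose two matrix filter banks: (a * b)[n] = sum_m a[m] @ b[n - m].
    a: (C, C, ka), b: (C, C, kb) -> (C, C, ka + kb - 1)."""
    C, _, ka = a.shape
    kb = b.shape[-1]
    out = torch.zeros(C, C, ka + kb - 1)
    for i in range(ka):
        for j in range(kb):
            out[:, :, i + j] += a[:, :, i] @ b[:, :, j]
    return out

def sc_fac_1d(A0, factors):
    """1D paraunitary filter bank from a skew-symmetric seed A0 and a list
    of (A_pos, A_neg, n_cols) triples (one triple per filter-location pair)."""
    C = A0.shape[0]
    I = torch.eye(C)
    H = torch.linalg.matrix_exp(A0 - A0.T).unsqueeze(-1)     # Q, length-1 filter
    for A_pos, A_neg, n in factors:
        U = torch.linalg.matrix_exp(A_pos - A_pos.T)[:, :n]  # column-orthogonal
        P = U @ U.T                                          # projector U U^T
        H = matseq_conv(H, torch.stack([P, I - P], dim=-1))
        V = torch.linalg.matrix_exp(A_neg - A_neg.T)[:, :n]
        Pm = V @ V.T
        H = matseq_conv(torch.stack([I - Pm, Pm], dim=-1), H)
    return H   # shape (C, C, 2 * len(factors) + 1)

# Paraunitarity check: H(e^{jw}) must be unitary at every sampled frequency.
C = 4
H = sc_fac_1d(torch.randn(C, C), [(torch.randn(C, C), torch.randn(C, C), 2)])
Hf = torch.fft.fft(H, n=16, dim=-1)            # evaluate H(z) on a DFT grid
for k in range(16):
    M = Hf[:, :, k]
    assert torch.allclose(M.conj().T @ M,
                          torch.eye(C, dtype=M.dtype), atol=1e-5)
```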
Let us first define the Z-transform and various types of Fourier transform in Definition B.1 before proving the convolution theorem (Theorem 2.1).
Definition B.1 (Z-transform and Fourier transforms). For a sequence (of scalars, vectors, or matrices) $\mathbf{x} = \{\mathbf{x}[n], n \in \mathbb{Z}\}$, its Z-transform $\mathbf{X}(z)$ is defined as
$$\mathbf{X}(z) = \sum_{n \in \mathbb{Z}} \mathbf{x}[n] z^{-n}, \tag{B.3}$$
where $z \in \mathbb{C}$ is a complex number such that the infinite sum converges. If $z$ is restricted to the unit circle $z = e^{j\omega}$ (i.e., $|z| = 1$), the Z-transform $\mathbf{X}(z)$ reduces to a discrete-time Fourier transform (DTFT) $\mathbf{X}(e^{j\omega})$. If $\omega$ is further restricted to a finite set $\omega \in \{2\pi k / N, k \in [N]\}$, the DTFT $\mathbf{X}(e^{j\omega})$ reduces to an $N$-point discrete Fourier transform (DFT) $\mathbf{X}(e^{j 2\pi k / N})$.
The celebrated convolution theorem states that the convolution in the spatial domain (Equation (B.2)) is equivalent to a multiplication in the spectral domain, i.e., $\mathbf{Y}(z) = \mathbf{H}(z)\mathbf{X}(z)$ for all $z \in \mathbb{C}$.

Proof of Theorem 2.1. The proof follows directly from the definitions of standard convolution (Equation (B.2)) and the Z-transform (Equation (B.3)):
$$
\begin{aligned}
\mathbf{Y}(z) &= \sum_{i \in \mathbb{Z}} \mathbf{y}[i] z^{-i} && \text{(B.4)} \\
&= \sum_{i \in \mathbb{Z}} \Big( \sum_{n \in \mathbb{Z}} \mathbf{h}[n] \mathbf{x}[i - n] \Big) z^{-i} && \text{(B.5)} \\
&= \sum_{n \in \mathbb{Z}} \mathbf{h}[n] z^{-n} \Big( \sum_{i \in \mathbb{Z}} \mathbf{x}[i - n] z^{-(i - n)} \Big) && \text{(B.6)} \\
&= \Big( \sum_{n \in \mathbb{Z}} \mathbf{h}[n] z^{-n} \Big) \Big( \sum_{k \in \mathbb{Z}} \mathbf{x}[k] z^{-k} \Big) && \text{(B.7)} \\
&= \mathbf{H}(z) \mathbf{X}(z), && \text{(B.8)}
\end{aligned}
$$
where Equations (B.4) and (B.8) use the definition of the Z-transform, Equation (B.5) uses the definition of convolution, and Equation (B.7) makes a change of variable $k = i - n$.
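As a quick numerical sanity check of the theorem (our own scalar toy example): zero-padding both sequences to the full convolution length makes the DFT grid fine enough that the identity holds exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
h = rng.standard_normal(5)            # scalar filter of length 5
x = rng.standard_normal(16)           # scalar input of length 16
y = np.convolve(h, x)                 # full convolution, length 5 + 16 - 1
N = len(y)
Y = np.fft.fft(y, N)                  # Y(z) sampled on the unit circle
HX = np.fft.fft(h, N) * np.fft.fft(x, N)
assert np.allclose(Y, HX)             # Y(z) = H(z) X(z) on the DFT grid
```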
Next, we introduce the concepts of inner product and Frobenius norm for sequences. We then prove Parseval’s theorem, which allows us to compute the sequence norm in the spectral domain.
Definition B.2 (Inner product and norm for sequences). Given two sequences $\mathbf{x} = \{\mathbf{x}[n], n \in \mathbb{Z}\}$ and $\mathbf{y} = \{\mathbf{y}[n], n \in \mathbb{Z}\}$ with $\mathbf{x}[n]$, $\mathbf{y}[n]$ having the same dimension for all $n$, the inner product of the two sequences is defined as
$$\langle \mathbf{x}, \mathbf{y} \rangle \triangleq \sum_{n \in \mathbb{Z}} \langle \mathbf{x}[n], \mathbf{y}[n] \rangle, \tag{B.9}$$
where $\langle \mathbf{x}[n], \mathbf{y}[n] \rangle$ denotes the Frobenius inner product between $\mathbf{x}[n]$ and $\mathbf{y}[n]$. Subsequently, we define the Frobenius norm of a sequence using the inner product as
$$\|\mathbf{x}\| \triangleq \sqrt{\langle \mathbf{x}, \mathbf{x} \rangle}. \tag{B.10}$$
Theorem B.3 (Parsavel’s theorem). Given a sequence x = {x[n], n ∈ Z}, its sequence norm ‖x‖ can be computed byX(ejω) in the spectral domain as
‖x‖2 = ∑ n∈Z ‖x[n]‖2 = 1 2π ∫ π −π ‖X(ejω)‖2dω, (B.11)
where ‖X(ejω)‖2 =X(ejω)†X(ejω) is an inner product between two complex arrays.
Proof of Theorem B.3. The theorem follows from the definitions of the sequence norm and the discrete-time Fourier transform (DTFT):
$$
\begin{aligned}
\frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{X}(e^{j\omega})\|^2 \, d\omega
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \langle \mathbf{X}(e^{j\omega}), \mathbf{X}(e^{j\omega}) \rangle \, d\omega && \text{(B.12)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \Big\langle \sum_{n \in \mathbb{Z}} \mathbf{x}[n] e^{-j\omega n}, \sum_{m \in \mathbb{Z}} \mathbf{x}[m] e^{-j\omega m} \Big\rangle \, d\omega && \text{(B.13)} \\
&= \sum_{n \in \mathbb{Z}} \sum_{m \in \mathbb{Z}} \langle \mathbf{x}[n], \mathbf{x}[m] \rangle \, \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{-j\omega (m - n)} \, d\omega && \text{(B.14)} \\
&= \sum_{n \in \mathbb{Z}} \sum_{m \in \mathbb{Z}} \langle \mathbf{x}[n], \mathbf{x}[m] \rangle \, \mathbb{1}_{m = n} && \text{(B.15)} \\
&= \sum_{n \in \mathbb{Z}} \langle \mathbf{x}[n], \mathbf{x}[n] \rangle = \sum_{n \in \mathbb{Z}} \|\mathbf{x}[n]\|^2, && \text{(B.16)}
\end{aligned}
$$
where Equation (B.14) is due to the bi-linearity of inner products, and Equation (B.15) makes use of the fact that $\int_{-\pi}^{\pi} e^{-j\omega k} \, d\omega = 0$ for $k \neq 0$ and $\int_{-\pi}^{\pi} e^{-j\omega k} \, d\omega = \int_{-\pi}^{\pi} d\omega = 2\pi$ for $k = 0$.
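A one-line numerical illustration of the DFT analogue of Theorem B.3 (our own toy example; the $1/N$ factor plays the role of the $1/2\pi$ integral):

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(64)
X = np.fft.fft(x)
# Parseval for the DFT: sum |x[n]|^2 == (1/N) * sum |X[k]|^2
assert np.isclose(np.sum(x**2), np.sum(np.abs(X)**2) / len(x))
```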
B.2 EQUIVALENCE BETWEEN ORTHOGONAL CONVOLUTIONS AND PARAUNITARY SYSTEMS
With the sequence norm introduced earlier, we formally define orthogonality for convolutional layers.

Definition B.4 (Orthogonal convolutional layer). A convolutional layer is orthogonal if the input norm $\|\mathbf{x}\|$ is equal to the output norm $\|\mathbf{y}\|$ for an arbitrary input $\mathbf{x}$, that is,
$$\|\mathbf{y}\| \triangleq \sqrt{\sum_{n \in \mathbb{Z}} \|\mathbf{y}[n]\|^2} = \sqrt{\sum_{n \in \mathbb{Z}} \|\mathbf{x}[n]\|^2} \triangleq \|\mathbf{x}\|, \tag{B.17}$$
where $\|\mathbf{x}\|$ (or $\|\mathbf{y}\|$) is defined as the square root of $\sum_{n \in \mathbb{Z}} \|\mathbf{x}[n]\|^2$ (or $\sum_{n \in \mathbb{Z}} \|\mathbf{y}[n]\|^2$).
This definition of orthogonality applies not only to the standard convolutions in Equation (B.2) but also to the variants of convolutions in Appendix C.3. In this section, however, we first establish the equivalence between orthogonality for standard convolutions and paraunitary systems.

Theorem B.5 (Paraunitary theorem). A standard convolutional layer (Equation (B.2)) is orthogonal (by Definition B.4) if and only if its transfer matrix $\mathbf{H}(z)$ is paraunitary, i.e.,
$$\mathbf{H}(z)^\dagger \mathbf{H}(z) = \mathbf{I}, \ \forall |z| = 1 \iff \mathbf{H}(e^{j\omega})^\dagger \mathbf{H}(e^{j\omega}) = \mathbf{I}, \ \forall \omega \in \mathbb{R}. \tag{B.18}$$
In other words, the transfer matrix $\mathbf{H}(e^{j\omega})$ is unitary at every frequency $\omega \in \mathbb{R}$.
Proof of Theorem B.5. We first prove that a convolutional layer is orthogonal if its transfer matrix $\mathbf{H}(z)$ is paraunitary (i.e., $\mathbf{H}(e^{j\omega})$ is unitary at every frequency $\omega \in \mathbb{R}$):
$$
\begin{aligned}
\|\mathbf{y}\|^2 &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{Y}(e^{j\omega})\|^2 \, d\omega && \text{(B.19)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{H}(e^{j\omega}) \mathbf{X}(e^{j\omega})\|^2 \, d\omega && \text{(B.20)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{X}(e^{j\omega})\|^2 \, d\omega && \text{(B.21)} \\
&= \|\mathbf{x}\|^2, && \text{(B.22)}
\end{aligned}
$$
where Equations (B.19) and (B.22) are due to Parseval's theorem (Theorem B.3), Equation (B.20) follows from the convolution theorem (Theorem 2.1), and Equation (B.21) uses the fact that $\mathbf{H}(e^{j\omega})$ is unitary at every frequency $\omega \in \mathbb{R}$ (thus $\|\mathbf{H}(e^{j\omega}) \mathbf{X}(e^{j\omega})\| = \|\mathbf{X}(e^{j\omega})\|$ for any $\mathbf{X}(e^{j\omega})$).

The 'only if' part also holds in practice; we prove it by contradiction using periodic inputs (e.g., finite inputs with circular padding). Suppose there exists a frequency $\omega$ such that $\mathbf{H}(e^{j\omega})$ is not unitary. Since $\mathbf{H}(e^{j\omega})$ is continuous (due to $\mathbf{h} \in L_2$ and the dominated convergence theorem), there exist integers $N$, $k$ such that $\omega \approx 2k\pi / N$ and $\mathbf{H}(e^{j 2\pi k / N})$ is also not unitary. As a result, there exists a complex vector $\mathbf{u}$ such that $\mathbf{v} = \mathbf{H}(e^{j 2\pi k / N}) \mathbf{u}$ while $\|\mathbf{v}\| \neq \|\mathbf{u}\|$. Therefore, we can construct two periodic sequences $\mathbf{x} = \{\mathbf{x}[n], n \in [N]\}$ and $\mathbf{y} = \{\mathbf{y}[n], n \in [N]\}$ such that
$$\mathbf{x}[n] = \mathbf{u} e^{j 2\pi n k / N} \implies \mathbf{y}[n] = \mathbf{v} e^{j 2\pi n k / N}. \tag{B.23}$$
Now the input norm $\|\mathbf{x}\| = \sqrt{\sum_{n \in [N]} \|\mathbf{x}[n]\|^2} = \sqrt{N} \|\mathbf{u}\|$ is not equal to the output norm $\|\mathbf{y}\| = \sqrt{\sum_{n \in [N]} \|\mathbf{y}[n]\|^2} = \sqrt{N} \|\mathbf{v}\|$, i.e., the layer is not orthogonal, which leads to a contradiction.
C A PARAUNITARY FRAMEWORK FOR ORTHOGONAL CONVOLUTIONS
C.1 MULTI-RESOLUTION ANALYSIS
Multi-resolution operations are essential in various convolutional layers, in particular strided and dilated convolutions. In order to define and analyze these convolutions rigorously, we first review the concepts of up-sampling, down-sampling, and polyphase components.
(1) Up-sampling. Given a sequence (of scalars, vectors, or matrices) $\mathbf{x} = \{\mathbf{x}[n], n \in \mathbb{Z}\}$, its up-sampled sequence $\mathbf{x}_{\uparrow R} = \{\mathbf{x}_{\uparrow R}[n], n \in \mathbb{Z}\}$ is defined as
$$\mathbf{x}_{\uparrow R}[n] \triangleq \begin{cases} \mathbf{x}[n / R] & n \equiv 0 \pmod{R} \\ \mathbf{0} & \text{otherwise} \end{cases}, \tag{C.1}$$
where $R \in \mathbb{Z}_+$ is the up-sampling rate. Accordingly, we denote the Z-transform of $\mathbf{x}_{\uparrow R}$ as
$$\mathbf{X}_{\uparrow R}(z) = \sum_{n \in \mathbb{Z}} \mathbf{x}_{\uparrow R}[n] z^{-n}. \tag{C.2}$$
The following proposition shows that $\mathbf{X}_{\uparrow R}(z)$ is easily computed from $\mathbf{X}(z)$.

Proposition C.1 (Z-transform of an up-sampled sequence). Given a sequence $\mathbf{x}$ and its up-sampled sequence $\mathbf{x}_{\uparrow R}$, their Z-transforms $\mathbf{X}(z)$ and $\mathbf{X}_{\uparrow R}(z)$ are related by
$$\mathbf{X}_{\uparrow R}(z) = \mathbf{X}(z^R). \tag{C.3}$$
Proof of Proposition C.1. The proof makes use of the definition of the Z-transform (Equation (B.3)):
$$
\begin{aligned}
\mathbf{X}_{\uparrow R}(z) &= \sum_{n \in \mathbb{Z}} \mathbf{x}_{\uparrow R}[n] z^{-n} && \text{(C.4)} \\
&= \sum_{m \in \mathbb{Z}} \mathbf{x}_{\uparrow R}[mR] z^{-mR} && \text{(C.5)} \\
&= \sum_{m \in \mathbb{Z}} \mathbf{x}[m] (z^R)^{-m} = \mathbf{X}(z^R), && \text{(C.6)}
\end{aligned}
$$
where Equation (C.5) makes a change of variables $m = n / R$, since $\mathbf{x}_{\uparrow R}[n] = \mathbf{0}$ for all $n \neq mR$.
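Proposition C.1 can be checked numerically on a DFT grid (our own toy example): since $\mathbf{X}_{\uparrow R}(z) = \mathbf{X}(z^R)$, the $NR$-point DFT of the up-sampled sequence is the $N$-point DFT of the original, repeated $R$ times.

```python
import numpy as np

R, N = 3, 8
x = np.random.default_rng(2).standard_normal(N)
x_up = np.zeros(N * R)
x_up[::R] = x                          # x_up[n] = x[n/R] if R | n, else 0
X = np.fft.fft(x)                      # N-point DFT of x
X_up = np.fft.fft(x_up)                # NR-point DFT of x_up
assert np.allclose(X_up, np.tile(X, R))   # X_up samples X(z^R)
```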
(2) Down-sampling and polyphase components. Unlike the up-sampled sequence, there exist multiple down-sampled sequences, depending on the phase of the down-sampling. These sequences are known as the polyphase components. Specifically, given a sequence (of scalars, vectors, or matrices) $\mathbf{x} = \{\mathbf{x}[n], n \in \mathbb{Z}\}$, its $r$-th polyphase component $\mathbf{x}_{r|R} = \{\mathbf{x}_{r|R}[n], n \in \mathbb{Z}\}$ is defined as
$$\mathbf{x}_{r|R}[n] \triangleq \mathbf{x}[nR + r], \tag{C.7}$$
where $R \in \mathbb{Z}_+$ is the down-sampling rate. We further denote the Z-transform of $\mathbf{x}_{r|R}$ as
$$\mathbf{X}_{r|R}(z) = \sum_{n \in \mathbb{Z}} \mathbf{x}_{r|R}[n] z^{-n}. \tag{C.8}$$
Note that $r \in \mathbb{Z}$ is an arbitrary integer, which does not necessarily take values in $[R]$. In fact, we have $\mathbf{x}_{(r + kR)|R}[n] = \mathbf{x}_{r|R}[n + k]$ and $\mathbf{X}_{(r + kR)|R}(z) = z^k \mathbf{X}_{r|R}(z)$. In Proposition C.2, we establish the relation between $\mathbf{X}(z)$ and $\{\mathbf{X}_{r|R}(z)\}_{r \in [R]}$, i.e., we represent $\mathbf{X}(z)$ in terms of $\{\mathbf{X}_{r|R}(z)\}_{r \in [R]}$.

Proposition C.2 (Polyphase decomposition). Given a sequence $\mathbf{x}$ and its polyphase components $\mathbf{x}_{r|R}$'s, the Z-transform $\mathbf{X}(z)$ can be represented by $\{\mathbf{X}_{r|R}(z)\}_{r \in [R]}$ as
$$\mathbf{X}(z) = \sum_{r \in [R]} \mathbf{X}_{r|R}(z^R) z^{-r}. \tag{C.9}$$
Proof of Proposition C.2. We start with $\mathbf{X}(z)$ and decompose it into its polyphase components $\mathbf{X}_{0|R}(z), \cdots, \mathbf{X}_{(R-1)|R}(z)$:
$$
\begin{aligned}
\mathbf{X}(z) &= \sum_{n \in \mathbb{Z}} \mathbf{x}[n] z^{-n} && \text{(C.10)} \\
&= \sum_{r \in [R]} \sum_{m \in \mathbb{Z}} \mathbf{x}[mR + r] z^{-(mR + r)} && \text{(C.11)} \\
&= \sum_{r \in [R]} \Big( \sum_{m \in \mathbb{Z}} \mathbf{x}[mR + r] z^{-mR} \Big) z^{-r} && \text{(C.12)} \\
&= \sum_{r \in [R]} \Big( \sum_{m \in \mathbb{Z}} \mathbf{x}_{r|R}[m] (z^R)^{-m} \Big) z^{-r} && \text{(C.13)} \\
&= \sum_{r \in [R]} \mathbf{X}_{r|R}(z^R) z^{-r}, && \text{(C.14)}
\end{aligned}
$$
where Equation (C.11) makes a change of variables $n = mR + r$, and Equation (C.13) uses the definition of the polyphase components $\mathbf{x}_{r|R}[m] = \mathbf{x}[mR + r]$.
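In code, polyphase decomposition is just strided slicing; interleaving the components recovers the original sequence, mirroring Equation (C.9). The toy example below is ours.

```python
import numpy as np

R = 2
x = np.arange(8, dtype=float)               # toy sequence of length 8
polyphase = [x[r::R] for r in range(R)]     # x_{r|R}[n] = x[nR + r]
x_rec = np.stack(polyphase, axis=1).reshape(-1)  # interleave them back
assert np.allclose(x, x_rec)
```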
For simplicity, we stack $R$ consecutive polyphase components into polyphase matrices:
$$\mathbf{X}^{[R]}(z) = \begin{bmatrix} \mathbf{X}_{0|R}(z) \\ \vdots \\ \mathbf{X}_{(R-1)|R}(z) \end{bmatrix}, \qquad \widetilde{\mathbf{X}}^{[R]}(z) = \left[ \mathbf{X}_{-0|R}(z); \cdots; \mathbf{X}_{-(R-1)|R}(z) \right]. \tag{C.15}$$
The following proposition extends Parseval's theorem (Theorem B.3) and shows that the sequence norm $\|\mathbf{x}\|$ can also be computed in terms of the polyphase matrix $\mathbf{X}^{[R]}(z)$ (or $\widetilde{\mathbf{X}}^{[R]}(z)$).
Proposition C.3 (Parseval’s theorem for polyphase matrices). Given a sequence x, its sequence norm ‖x‖ can be computed byX [R](ejω) (or X̃ [R](ejω)) in the spectral domain as
‖x‖2 = 1 2π ∫ π −π ∥∥∥X [R](ejω)∥∥∥2 dω = 1 2π ∫ π −π ∥∥∥X̃ [R](ejω)∥∥∥2 dω. (C.16) Proof of Proposition C.3. The proof follows the standard Parseval’s theorem in Theorem B.3. We only prove the first part of the proposition (usingX [R](ejω)) as follows.
$$
\begin{aligned}
\|\mathbf{x}\|^2 &= \sum_{n \in \mathbb{Z}} \|\mathbf{x}[n]\|^2 && \text{(C.17)} \\
&= \sum_{r \in [R]} \sum_{m \in \mathbb{Z}} \|\mathbf{x}[mR + r]\|^2 && \text{(C.18)} \\
&= \sum_{r \in [R]} \sum_{m \in \mathbb{Z}} \|\mathbf{x}_{r|R}[m]\|^2 && \text{(C.19)} \\
&= \frac{1}{2\pi} \sum_{r \in [R]} \int_{-\pi}^{\pi} \|\mathbf{X}_{r|R}(e^{j\omega})\|^2 \, d\omega && \text{(C.20)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{X}^{[R]}(e^{j\omega})\|^2 \, d\omega, && \text{(C.21)}
\end{aligned}
$$
where Equation (C.17) follows the definition of the sequence norm, Equation (C.18) changes variables as $n = mR + r$, Equation (C.19) uses the definition of the polyphase components, and Equation (C.20) applies Parseval's theorem to the $\mathbf{x}_{r|R}$'s. The second part (using $\widetilde{\mathbf{X}}^{[R]}(e^{j\omega})$) can be proved similarly.
C.2 UNIFYING VARIOUS CONVOLUTIONAL LAYERS IN THE SPECTRAL DOMAIN
In Appendix B, the convolution theorem states that a standard convolutional layer is a matrix-vector product $\mathbf{Y}(z) = \mathbf{H}(z) \mathbf{X}(z)$ in the spectral domain, and the layer is orthogonal if and only if $\mathbf{H}(z)$ is paraunitary (Theorem B.5). However, the canonical convolution theorem does not hold for variants of convolutions, so enforcing a paraunitary $\mathbf{H}(z)$ may not lead to an orthogonal convolution. In this subsection, we address this limitation by showing that various convolutions can be uniformly written as $\mathcal{Y}(z) = \mathcal{H}(z) \mathcal{X}(z)$, where $\mathcal{Y}(z)$, $\mathcal{H}(z)$, $\mathcal{X}(z)$ are suitable spectral representations of $\mathbf{y}$, $\mathbf{h}$, $\mathbf{x}$. Subsequently, we prove that any of these layers is orthogonal if and only if its $\mathcal{H}(z)$ is paraunitary.
(1) Strided convolutional layers are widely used in neural networks to adjust the feature resolution: a strided convolution layer decreases the resolution by down-sampling after a standard convolution, while a transposed convolution increases the resolution by up-sampling before a standard convolution.
Formally, a strided convolutional layer with stride $R$ (abbrev. as $\downarrow\! R$-strided convolution) computes its output following
$$\mathbf{y}[i] = \sum_{n \in \mathbb{Z}} \mathbf{h}[n] \mathbf{x}[Ri - n]. \tag{C.22}$$
In contrast, a transposed strided convolutional layer with stride $R$ (abbrev. as $\uparrow\! R$-strided convolution) computes its output according to
$$\mathbf{y}[i] = \sum_{n \in \mathbb{Z}} \mathbf{h}[n] \mathbf{x}_{\uparrow R}[i - n]. \tag{C.23}$$
Proposition C.4 (Orthogonality of strided convolutional layers). For a $\downarrow\! R$-strided convolution, the spatial convolution in Equation (C.22) leads to the following spectral representation:
$$\mathbf{Y}(z) = \widetilde{\mathbf{H}}^{[R]}(z) \mathbf{X}^{[R]}(z). \tag{C.24}$$
For an $\uparrow\! R$-strided convolution, the spatial convolution is represented in the spectral domain as
$$\mathbf{Y}^{[R]}(z) = \mathbf{H}^{[R]}(z) \mathbf{X}(z). \tag{C.25}$$
Furthermore, a $\downarrow\! R$-strided convolution is orthogonal if and only if $\widetilde{\mathbf{H}}^{[R]}(z)$ is paraunitary, and an $\uparrow\! R$-strided convolution is orthogonal if and only if $\mathbf{H}^{[R]}(z)$ is paraunitary.
Proof of Proposition C.4. (1a) $\downarrow\! R$-strided convolutions. We first prove the spectral representation of the $\downarrow\! R$-strided convolution in Equation (C.24):
$$
\begin{aligned}
\mathbf{Y}(z) &= \sum_{i \in \mathbb{Z}} \mathbf{y}[i] z^{-i} = \sum_{i \in \mathbb{Z}} \Big( \sum_{n \in \mathbb{Z}} \mathbf{h}[n] \mathbf{x}[Ri - n] \Big) z^{-i} && \text{(C.26)} \\
&= \sum_{i \in \mathbb{Z}} \sum_{r \in [R]} \sum_{m \in \mathbb{Z}} \mathbf{h}[mR - r] \mathbf{x}[(i - m)R + r] z^{-i} && \text{(C.27)} \\
&= \sum_{r \in [R]} \Big( \sum_{m \in \mathbb{Z}} \mathbf{h}[mR - r] z^{-m} \Big( \sum_{i \in \mathbb{Z}} \mathbf{x}[(i - m)R + r] z^{-(i - m)} \Big) \Big) && \text{(C.28)} \\
&= \sum_{r \in [R]} \Big( \sum_{m \in \mathbb{Z}} \mathbf{h}[mR - r] z^{-m} \Big) \Big( \sum_{i' \in \mathbb{Z}} \mathbf{x}[i'R + r] z^{-i'} \Big) && \text{(C.29)} \\
&= \sum_{r \in [R]} \mathbf{H}_{-r|R}(z) \mathbf{X}_{r|R}(z), && \text{(C.30)}
\end{aligned}
$$
where Equation (C.26) follows from the definitions of the $\downarrow\! R$-strided convolution (Equation (C.22)) and the Z-transform (Equation (B.3)), Equation (C.27) makes a change of variables $n = mR - r$, Equation (C.29) further changes $i' = i - m$, and Equation (C.30) is due to the definition of the polyphase components (Equation (C.7)). Now we rewrite the last equation concisely as
$$\mathbf{Y}(z) = \underbrace{\left[ \mathbf{H}_{-0|R}(z); \cdots; \mathbf{H}_{-(R-1)|R}(z) \right]}_{\widetilde{\mathbf{H}}^{[R]}(z)} \underbrace{\begin{bmatrix} \mathbf{X}_{0|R}(z) \\ \vdots \\ \mathbf{X}_{(R-1)|R}(z) \end{bmatrix}}_{\mathbf{X}^{[R]}(z)}, \tag{C.31}$$
which is the spectral representation of ↓R-strided convolutions in Equation (C.24). Now we prove the orthogonality condition for ↓R-strided convolutions.
$$
\begin{aligned}
\|\mathbf{y}\|^2 &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{Y}(e^{j\omega})\|^2 \, d\omega && \text{(C.32)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\widetilde{\mathbf{H}}^{[R]}(e^{j\omega}) \mathbf{X}^{[R]}(e^{j\omega})\|^2 \, d\omega && \text{(C.33)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{X}^{[R]}(e^{j\omega})\|^2 \, d\omega && \text{(C.34)} \\
&= \|\mathbf{x}\|^2, && \text{(C.35)}
\end{aligned}
$$
where Equations (C.32) and (C.35) are due to Parseval’s theorems (Theorem B.3 and Proposition C.3), Equation (C.33) follows from the spectral representation of the ↓ R-strided convolution (Equation (C.24)), and Equation (C.34) utilizes that the transfer matrix is unitary at each frequency. The “only if” part can be proved by contradiction similar to Theorem B.5.
(1b) $\uparrow\! R$-strided convolutions. According to Proposition C.1, the Z-transform of $\mathbf{x}_{\uparrow R}$ is $\mathbf{X}(z^R)$. Therefore, an application of the convolution theorem (Theorem 2.1) to Equation (C.23) leads to
$$\mathbf{Y}(z) = \mathbf{H}(z) \mathbf{X}(z^R). \tag{C.36}$$
Expanding $\mathbf{Y}(z)$ and $\mathbf{H}(z)$ using the polyphase decomposition (Proposition C.2), we have
$$
\begin{aligned}
\sum_{r \in [R]} \mathbf{Y}_{r|R}(z^R) z^{-r} &= \Big( \sum_{r \in [R]} \mathbf{H}_{r|R}(z^R) z^{-r} \Big) \mathbf{X}(z^R) && \text{(C.37)} \\
\sum_{r \in [R]} \mathbf{Y}_{r|R}(z^R) z^{-r} &= \sum_{r \in [R]} \left( \mathbf{H}_{r|R}(z^R) \mathbf{X}(z^R) \right) z^{-r} && \text{(C.38)} \\
\mathbf{Y}_{r|R}(z^R) &= \mathbf{H}_{r|R}(z^R) \mathbf{X}(z^R), \ \forall r \in [R] && \text{(C.39)} \\
\mathbf{Y}_{r|R}(z) &= \mathbf{H}_{r|R}(z) \mathbf{X}(z), \ \forall r \in [R], && \text{(C.40)}
\end{aligned}
$$
where Equation (C.39) is due to the uniqueness of the Z-transform, and Equation (C.40) changes the variable from $z^R$ to $z$. Again, we can rewrite the last equation concisely as
$$\underbrace{\begin{bmatrix} \mathbf{Y}_{0|R}(z) \\ \mathbf{Y}_{1|R}(z) \\ \vdots \\ \mathbf{Y}_{(R-1)|R}(z) \end{bmatrix}}_{\mathbf{Y}^{[R]}(z)} = \underbrace{\begin{bmatrix} \mathbf{H}_{0|R}(z) \\ \mathbf{H}_{1|R}(z) \\ \vdots \\ \mathbf{H}_{(R-1)|R}(z) \end{bmatrix}}_{\mathbf{H}^{[R]}(z)} \mathbf{X}(z), \tag{C.41}$$
which is the spectral representation of ↑R-strided convolutions in Equation (C.25). Lastly, we prove the orthogonality condition for ↑R-strided convolutions.
$$
\begin{aligned}
\|\mathbf{y}\|^2 &= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{Y}^{[R]}(e^{j\omega})\|^2 \, d\omega && \text{(C.42)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{H}^{[R]}(e^{j\omega}) \mathbf{X}(e^{j\omega})\|^2 \, d\omega && \text{(C.43)} \\
&= \frac{1}{2\pi} \int_{-\pi}^{\pi} \|\mathbf{X}(e^{j\omega})\|^2 \, d\omega && \text{(C.44)} \\
&= \|\mathbf{x}\|^2, && \text{(C.45)}
\end{aligned}
$$
where Equations (C.42) and (C.45) are due to Parseval’s theorems (Theorem B.3 and Proposition C.3), Equation (C.43) follows from the spectral representation of the ↑ R-strided convolution (Equation (C.25)), and Equation (C.44) uses the fact that the transfer matrix is unitary for each frequency. The “only if” part can be proved by contradiction similar to Theorem B.5.
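The polyphase view behind Proposition C.4 can be verified directly in PyTorch (our own toy check): a stride-$R$ convolution coincides with a stride-1 convolution between the polyphase components of the input and of the filter. Note that conv1d computes a cross-correlation, but the polyphase identity is the same.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
R, C, K, N = 2, 4, 6, 32                       # K and N divisible by R
x = torch.randn(1, C, N)
w = torch.randn(C, C, K)
y_strided = F.conv1d(x, w, stride=R)           # direct stride-R convolution

# Polyphase input (1, C*R, N/R): channel block r holds x[:, :, r::R]
x_poly = torch.cat([x[:, :, r::R] for r in range(R)], dim=1)
# Matching polyphase filter (C, C*R, K/R)
w_poly = torch.cat([w[:, :, r::R] for r in range(R)], dim=1)
y_poly = F.conv1d(x_poly, w_poly)              # ordinary stride-1 convolution
assert torch.allclose(y_strided, y_poly, atol=1e-5)
```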
(2) Dilated convolutional layers increase the receptive field of a convolutional layer without extra parameters or computation. Such a layer up-samples its filter bank before convolving it with the input. Formally, an $R$-dilated convolutional layer computes its output according to
$$\mathbf{y}[i] = \sum_{n \in \mathbb{Z}} \mathbf{h}_{\uparrow R}[n] \mathbf{x}[i - n]. \tag{C.46}$$
Proposition C.5 (Orthogonality of dilated convolutional layers). For an $R$-dilated convolution, the spatial convolution in Equation (C.46) leads to the spectral representation
$$\mathbf{Y}(z) = \mathbf{H}(z^R) \mathbf{X}(z). \tag{C.47}$$
Furthermore, an $R$-dilated convolutional layer is orthogonal if and only if $\mathbf{H}(z^R)$ is paraunitary.
Proof of Proposition C.5. According to Proposition C.1, the Z-transform of $\mathbf{h}_{\uparrow R}$ is $\mathbf{H}(z^R)$. Therefore, the "if" part follows directly from the convolution theorem. The "only if" part can be proved by constructing a counterexample, similarly to Theorem B.5.
Notice that $\mathbf{H}(z^R)$ is paraunitary if and only if $\mathbf{H}(e^{j\omega})$ is unitary at every frequency $\omega \in \mathbb{R}$, which is the same as $\mathbf{H}(z)$ being paraunitary. In other words, any filter bank that is orthogonal for a standard convolution is also orthogonal for a dilated convolution, and vice versa.
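Likewise, Equation (C.46) can be checked in PyTorch (our own toy example): a $D$-dilated convolution equals a standard convolution with the up-sampled filter $\mathbf{h}_{\uparrow D}$.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, C, K, N = 2, 3, 3, 16
x = torch.randn(1, C, N)
w = torch.randn(C, C, K)
w_up = torch.zeros(C, C, D * (K - 1) + 1)   # insert D-1 zeros between taps
w_up[:, :, ::D] = w
y_dilated = F.conv1d(x, w, dilation=D)
y_up = F.conv1d(x, w_up)                    # standard conv with h_{up D}
assert torch.allclose(y_dilated, y_up, atol=1e-6)
```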
(3) Group convolutional layers reduce the number of parameters and computations and are used in many efficient architectures, including MobileNet and ShuffleNet. Such a layer divides both the input and output channels into multiple groups and restricts the connections to within each group.
Formally, a group convolutional layer with $G$ groups (abbrev. as $G$-group convolution) is parameterized by $G$ filter banks $\{\mathbf{h}^g\}_{g \in [G]}$, each consisting of $(T/G) \times (S/G)$ filters. The layer maps an $S$-channel input $\mathbf{x}$ to a $T$-channel output $\mathbf{y}$ according to
$$\mathbf{y}[i] = \sum_{n \in \mathbb{Z}} \mathrm{blkdiag}\left( \{\mathbf{h}^g[n]\}_{g \in [G]} \right) \mathbf{x}[i - n], \tag{C.48}$$
where $\mathrm{blkdiag}(\{\cdot\})$ builds a block-diagonal matrix from a set of matrices.
Proposition C.6 (Orthogonality of group convolutional layers). For a $G$-group convolution, the spatial convolution in Equation (C.48) leads to the spectral representation
$$\mathbf{Y}(z) = \mathrm{blkdiag}\left( \{\mathbf{H}^g(z)\}_{g \in [G]} \right) \mathbf{X}(z). \tag{C.49}$$
Furthermore, a $G$-group convolutional layer is orthogonal if and only if the block-diagonal matrix is paraunitary, i.e., each $\mathbf{H}^g(z)$ is paraunitary.
Proof of Proposition C.6. Due to the convolution theorem, it suffices to prove that the Z-transform of a sequence of block diagonal matrices is also block diagonal in the spectral domain.
$$\sum_{n \in \mathbb{Z}} \underbrace{\begin{bmatrix} \mathbf{h}^0[n] & & \\ & \ddots & \\ & & \mathbf{h}^{G-1}[n] \end{bmatrix}}_{\mathrm{blkdiag}(\{\mathbf{h}^g[n]\}_{g \in [G]})} z^{-n} = \begin{bmatrix} \sum_{n \in \mathbb{Z}} \mathbf{h}^0[n] z^{-n} & & \\ & \ddots & \\ & & \sum_{n \in \mathbb{Z}} \mathbf{h}^{G-1}[n] z^{-n} \end{bmatrix} \tag{C.50}$$
$$= \underbrace{\begin{bmatrix} \mathbf{H}^0(z) & & \\ & \ddots & \\ & & \mathbf{H}^{G-1}(z) \end{bmatrix}}_{\mathrm{blkdiag}(\{\mathbf{H}^g(z)\}_{g \in [G]})}. \tag{C.51}$$
As a result, we can write the orthogonality condition as
$$\begin{bmatrix} \mathbf{H}^0(z) & & \\ & \ddots & \\ & & \mathbf{H}^{G-1}(z) \end{bmatrix}^\dagger \begin{bmatrix} \mathbf{H}^0(z) & & \\ & \ddots & \\ & & \mathbf{H}^{G-1}(z) \end{bmatrix} = \begin{bmatrix} \mathbf{H}^0(z)^\dagger \mathbf{H}^0(z) & & \\ & \ddots & \\ & & \mathbf{H}^{G-1}(z)^\dagger \mathbf{H}^{G-1}(z) \end{bmatrix} = \mathbf{I}, \ \forall |z| = 1. \tag{C.52}$$
The equation implies $\mathbf{H}^g(z)^\dagger \mathbf{H}^g(z) = \mathbf{I}, \forall |z| = 1, \forall g \in [G]$, i.e., each $\mathbf{H}^g(z)$ is paraunitary.
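The block-diagonal view of Equation (C.48) is easy to verify in PyTorch (our own toy check): embedding the $G$ per-group kernels on the diagonal of a full kernel reproduces the grouped convolution.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
G, Cg, K, N = 2, 3, 3, 16
x = torch.randn(1, G * Cg, N)
w_g = torch.randn(G * Cg, Cg, K)               # grouped kernel, G blocks
y_group = F.conv1d(x, w_g, groups=G)

w_full = torch.zeros(G * Cg, G * Cg, K)        # block-diagonal embedding
for g in range(G):
    w_full[g * Cg:(g + 1) * Cg, g * Cg:(g + 1) * Cg] = w_g[g * Cg:(g + 1) * Cg]
y_full = F.conv1d(x, w_full)
assert torch.allclose(y_group, y_full, atol=1e-6)
```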
C.3 REALIZATIONS OF PARAUNITARY SYSTEMS
In this subsection, we first prove that all finite-length 1D-paraunitary systems can be represented in a factorized form. Next, we show how we can construct MD-paraunitary systems using 1D systems. Lastly, we study the relationship of existing approaches to paraunitary systems.
C.3.1 COMPLETE FACTORIZATION OF 1D-PARAUNITARY SYSTEMS
The classic theorem for the spectral factorization of paraunitary systems is traditionally developed for causal systems (Vaidyanathan, 1993; Kautsky & Turcajová, 1994). Given a causal paraunitary system of length $L$ (i.e., a polynomial in $z^{-1}$), there always exists a factorization such that
$$\mathbf{H}(z) = \mathbf{Q} \, \mathbf{V}(z^{-1}; \mathbf{U}^{(1)}) \cdots \mathbf{V}(z^{-1}; \mathbf{U}^{(L-1)}), \tag{C.53}$$
where $\mathbf{Q}$ is an orthogonal matrix, each $\mathbf{U}^{(\ell)}$ is a column-orthogonal matrix, and $\mathbf{V}(z; \mathbf{U})$ is defined as
$$\mathbf{V}(z; \mathbf{U}) = (\mathbf{I} - \mathbf{U}\mathbf{U}^\top) + \mathbf{U}\mathbf{U}^\top z. \tag{C.54}$$
In Theorem C.7, we extend this theorem from causal systems to finite-length (but non-causal) ones.

Theorem C.7 (Complete factorization for 1D-paraunitary systems). Suppose that a paraunitary system $\mathbf{H}(z)$ is finite-length, i.e., it can be written as $\sum_n \mathbf{h}[n] z^{-n}$ for some sequence $\{\mathbf{h}[n], n \in [-L, L]\}$. Then it can be factorized in the following form:
$$\mathbf{H}(z) = \mathbf{V}(z; \mathbf{U}^{(-L)}) \cdots \mathbf{V}(z; \mathbf{U}^{(-1)}) \, \mathbf{Q} \, \mathbf{V}(z^{-1}; \mathbf{U}^{(1)}) \cdots \mathbf{V}(z^{-1}; \mathbf{U}^{(L)}), \tag{C.55}$$
where $\mathbf{Q}$ is an orthogonal matrix, each $\mathbf{U}^{(\ell)}$ is a column-orthogonal matrix, and $\mathbf{V}(z; \mathbf{U})$ is defined in Equation (C.54). Consequently, the paraunitary system $\mathbf{H}(z)$ is parameterized by $2L + 1$ (column-)orthogonal matrices $\mathbf{Q}$ and $\mathbf{U}^{(\ell)}$'s.
Proof for Theorem C.7. Given a non-causal paraunitary system $\mathbf{H}(z)$, we can always find a causal counterpart $\widehat{\mathbf{H}}(z)$ such that $\mathbf{H}(z) = z^L \widehat{\mathbf{H}}(z)$ (this can be done by shifting the causal system backward by $L$ steps, which is equivalent to multiplying by $z^L$ in the spectral domain). Since the causal system $\widehat{\mathbf{H}}(z)$ admits a factorization of the form of Equation (C.53), we can write the non-causal system $\mathbf{H}(z)$ as
$$\mathbf{H}(z) = z^L \mathbf{Q} \, \mathbf{V}(z^{-1}; \widehat{\mathbf{U}}^{(1)}) \cdots \mathbf{V}(z^{-1}; \widehat{\mathbf{U}}^{(2L)}). \tag{C.56}$$
Therefore, it suffices to show that for an orthogonal matrix $\mathbf{Q}$ and any column-orthogonal matrix $\widehat{\mathbf{U}}$, we can always find another column-orthogonal matrix $\mathbf{U}$ such that
$$z \mathbf{Q} \mathbf{V}(z^{-1}; \widehat{\mathbf{U}}) = \mathbf{V}(z; \mathbf{U}) \mathbf{Q}. \tag{C.57}$$
If the equation above is true, we can set $\mathbf{U}^{(\ell)} = \widehat{\mathbf{U}}^{(\ell + L + 1)}$ for $\ell < 0$ and $\mathbf{U}^{(\ell)} = \widehat{\mathbf{U}}^{(\ell + L)}$ for $\ell > 0$ (up to the transformations induced by $\mathbf{Q}$), which converts Equation (C.56) into Equation (C.55).
Now we start to prove Equation (C.57). Note that any column-orthogonal $\widehat{\mathbf{U}}$ has a complement $\bar{\mathbf{U}}$ such that $[\widehat{\mathbf{U}}, \bar{\mathbf{U}}]$ is orthogonal and $\mathbf{I} = \widehat{\mathbf{U}}\widehat{\mathbf{U}}^\top + \bar{\mathbf{U}}\bar{\mathbf{U}}^\top$. We then rewrite the left-hand side of Equation (C.57) as
$$
\begin{aligned}
z \mathbf{Q} \mathbf{V}(z^{-1}; \widehat{\mathbf{U}}) &= z \mathbf{Q} (\mathbf{I} - \widehat{\mathbf{U}}\widehat{\mathbf{U}}^\top + \widehat{\mathbf{U}}\widehat{\mathbf{U}}^\top z^{-1}) && \text{(C.58)} \\
&= \mathbf{Q} (\mathbf{I} - \bar{\mathbf{U}}\bar{\mathbf{U}}^\top + \bar{\mathbf{U}}\bar{\mathbf{U}}^\top z) && \text{(C.59)} \\
&= (\mathbf{I} - \mathbf{Q}\bar{\mathbf{U}}\bar{\mathbf{U}}^\top\mathbf{Q}^\top + \mathbf{Q}\bar{\mathbf{U}}\bar{\mathbf{U}}^\top\mathbf{Q}^\top z) \mathbf{Q} && \text{(C.60)} \\
&= (\mathbf{I} - \mathbf{U}\mathbf{U}^\top + \mathbf{U}\mathbf{U}^\top z) \mathbf{Q} && \text{(C.61)} \\
&= \mathbf{V}(z; \mathbf{U}) \mathbf{Q}, && \text{(C.62)}
\end{aligned}
$$
where in Equation (C.61) we set $\mathbf{U} = \mathbf{Q}\bar{\mathbf{U}}$. This completes the proof.
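The building block $\mathbf{V}(z; \mathbf{U})$ of Equation (C.54) is itself paraunitary, which the following NumPy lines (our own toy check) verify on a frequency grid.

```python
import numpy as np

rng = np.random.default_rng(0)
C, n = 5, 2
U, _ = np.linalg.qr(rng.standard_normal((C, n)))   # column-orthogonal U
P = U @ U.T                                        # projector U U^T
for w in np.linspace(-np.pi, np.pi, 7):
    V = (np.eye(C) - P) + P * np.exp(1j * w)       # V(e^{jw}; U)
    assert np.allclose(V.conj().T @ V, np.eye(C))  # unitary at every w
```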
C.3.2 MULTI-DIMENSIONAL (MD) PARAUNITARY SYSTEMS
If the data are multi-dimensional (MD), we need MD-convolutional layers in neural networks. Analogously, we can prove the equivalence between orthogonal MD-convolutions in the spatial domain and MD-paraunitary systems in the spectral domain, i.e.,
$$\mathbf{H}(\mathbf{z})^\dagger \mathbf{H}(\mathbf{z}) = \mathbf{I}, \quad \mathbf{z} = (z_1, \cdots, z_D), \ |z_d| = 1, \ \forall d \in [D], \tag{C.63}$$
where $D$ is the data dimension. In this work, we adopt a parameterization based on separable systems.

Definition C.8 (Separable MD-paraunitary system). An MD-paraunitary system $\mathbf{H}(\mathbf{z})$ is separable if there exist $D$ 1D-paraunitary systems $\mathbf{H}_1(z_1), \cdots, \mathbf{H}_D(z_D)$ such that
$$\mathbf{H}(\mathbf{z}) = \mathbf{H}(z_1, \cdots, z_D) \triangleq \mathbf{H}_1(z_1) \cdots \mathbf{H}_D(z_D). \tag{C.64}$$
Therefore, we can construct an MD-paraunitary system from $D$ 1D-paraunitary systems, each of which is represented as in Equation (C.55). Notice that not all MD-paraunitary systems are separable, so the parameterization in Equation (C.64) is not complete (see Section 5 for a discussion). However, we can guarantee that our parameterization realizes all separable MD-paraunitary systems: each separable paraunitary system admits a factorization as in Equation (C.64), where each 1D system admits a factorization as in Equation (C.55).
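A minimal sketch of the separable composition in Definition C.8 follows; the helper name `compose_2d` is ours, and the index convention matches the comment in Algorithm 1.

```python
import numpy as np

def compose_2d(H1, H2):
    """Compose a separable 2D-paraunitary kernel from two 1D filter banks
    H1, H2 of shape (C, C, K): H[:, :, i, j] = H2[:, :, j] @ H1[:, :, i]."""
    C, _, K = H1.shape
    H = np.zeros((C, C, K, K))
    for i in range(K):
        for j in range(K):
            H[:, :, i, j] = H2[:, :, j] @ H1[:, :, i]
    return H
```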
C.3.3 INTERPRETATIONS OF PREVIOUS APPROACHES
In Theorem B.5, we have shown that a paraunitary transfer matrix is both necessary and sufficient for a convolution to be orthogonal. Therefore, we can interpret previous approaches to orthogonal convolutions as different, exact or approximate, constructions of paraunitary systems.
This paper presents a method for enforcing strict orthogonality of convolutional layers, by means of a factorization in the spectral domain. It shows that the technique can be extended to dealing with practical convolutions with strides, dilations and groups. The experiments demonstrated the superior performance in terms of adversarial robustness.
Review
Orthogonality is an important problem in the design of neural network architecture that relates to many fundamental properties of the network such as trainability, generalizability and robustness. While the study of orthogonality in fully connected layers (or convolution layers with 4D kernels treated as 2D matrices) has a long history, it is only until very recent (in the past 2-3 years) that work on orthogonality of convolution layers emerges. This paper provides a solid study in this area by providing a method of enforcing orthogonality in convolutions, revealing its technical connections with previous methods, designing a deep residual Lipschitz network architectures and conducting solid experiments. I find the presentation to be mostly clear and easy to follow, though I feel that there is a tendency of overclaiming the contribution in the abstract & intro, see below.
Complete parameterization of orthogonal convolution. The paper claims that it offers a complete parameterization of orthogonal convolution, but this is not really the case. As stated in Sec. 2, it only offers complete design for separable orthogonal 2D convolutions. This puts the technique in an unfavorable position compared to previous methods that does not require separability (e.g. Trockman & Kolter).
"Orthogonal Networks". The paper frequently use the phrase "orthogonal networks", but it is not clear what that term entails. For example, it is claimed that "Our versatile framework, for the first time, enables the study of architecture designs for deep orthogonal networks", which seems an overclaim since orthogonality in neural networks have already been extensively studied before. In addition, if "orthogonal network" means that the entire network is an orthogonal transformation, then this is kind of a useless network since orthogonality implies linearity (as long as it is surjective). If it means approximately orthogonal then it should consider, in addition to the convolutional layers, the effect of the nonlinear layers - right now there is no discussion of whether the GroupSort that is used as the nonlinear layer is approximately orthogonal or not. |